humanmarkup-comment message



Subject: RE: [humanmarkup-comment] HMU.newmedia: Emotion


No, I don't think you are contradicting the original models, and I
think the notion of testing is important.

This brings up the notion of a standard methodology for obtaining
requirements from application areas and then testing the schemata to
see how well what we have meets those requirements. I know I'm
drifting afield from the specific topic of emotions, but I think we
may want to test our schemata to see whether we can produce a viable
process model for each set of requirements that is brought in.

I'm just thinking out loud here. Does this sound reasonable?

Following up: emotions are processed in a number of different ways
over time, both as a single emotional response to a specific
stimulus and as that response is viewed or reviewed later, or viewed
in context, for example when it is embedded as a tagged anecdote in
a report and a digest summarizing a set of similar reports contains
a number of instances of such tagged anecdotes. Do we need to
accommodate weights of collective emotional responses?
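
Just to make that concrete, here is a rough sketch of what a
weighted, tagged emotional response might look like inside a report
and a digest of similar reports. Every element and attribute name
below is made up for illustration, not proposed schema:

  <report id="incident-042">
    <anecdote>
      <!-- hypothetical markup: one emotional response with a weight -->
      <emotion type="frustration" intensity="0.7" weight="1.0"/>
      The operator described repeated failures at the console.
    </anecdote>
  </report>

  <!-- a digest of similar reports might then aggregate the weights -->
  <digest source="incident-report">
    <collectiveEmotion type="frustration" instances="14" meanIntensity="0.6"/>
  </digest>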

Ciao,
Rex

At 10:26 AM -0600 11/5/01, Bullard, Claude L (Len) wrote:
>I am aware.  That is in the codelists.  These are
>organized under abstract types to enable event
>routing to proceed by application type.  So far
>I don't think I am contradicting the original
>models using an S/R paradigm.
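
As a rough sketch of that arrangement: an abstract head element with
concrete codelist members substituting for it lets a processor route
on the application type without knowing every code in advance. All
names below are made up, not taken from the current schema:

  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <!-- abstract head element standing for one application type -->
    <xs:element name="emotionSignal" abstract="true" type="xs:string"/>

    <!-- codelist members substitute for the abstract head -->
    <xs:element name="joy"   substitutionGroup="emotionSignal"/>
    <xs:element name="anger" substitutionGroup="emotionSignal"/>

    <!-- an event wrapper accepts any member of the group, so routing
         can key on the head rather than on individual codes -->
    <xs:element name="event">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="emotionSignal"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>

  </xs:schema>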
>
>However, as we continue to layer in more systems
>with diverse requirements, we are not necessarily
>adding new requirements.  We are testing the old
>ones.   We should be creating
>use cases to determine when the schema needs new
>types.  For example, markup that organizes a
>process is applicable only to applications that
>view and process events as inputs, outputs,
>controls and mechanisms.  To simulate chronemic
>properties, we have to add time, and so forth.
>In VRML, this is easy.  In HTML, it's a little
>less easy (HTML + Time, a subset of SMIL2).
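
A throwaway sketch of markup in that input/output/control/mechanism
style, with a duration added for the chronemic side; the element
names are invented for illustration only:

  <process name="greeting">
    <input>perceived smile</input>
    <output>returned smile</output>
    <control>cultural display rules</control>
    <mechanism>facial musculature</mechanism>
    <!-- timing added so the same markup can drive a timed rendering,
         for example VRML interpolators or HTML + Time in a browser -->
    <time begin="0s" dur="1.5s"/>
  </process>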
>
>So take a use case and see if it can be first
>organized by the current schema, then ask what else
>is needed if we can't.  Try what the author suggests
>for the process model. 
>
>I think first we discover that without
>a basic, easy-to-use process model, we don't have a
>means to show a single human object emoting,
>much less two objects with a context of communication.
>One could easily do that with a HumanML aware VRML
>object as long as it has a proto with a script
>node for dispatching events to the VRML nodes
>upon receipt of an event from an XML processor
>through the SAI/EAI. 
>
>Take a simple case:  a sphere that color cycles
>given HumanML selects and external events.
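
On the XML side of that case, the piece a sketch can show is the
event handed across to the proto's script node; the element names
and the target name are invented, and the code-to-color mapping
would live in the script itself:

  <!-- hypothetical HumanML-style event, as an XML processor might
       pass it to the VRML proto through the SAI/EAI -->
  <emotionEvent target="EmotiveSphere">
    <emotion code="joy" intensity="0.8"/>
    <!-- the script node would map code and intensity to a
         diffuseColor on the sphere's Material -->
  </emotionEvent>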
>
>len
>
>-----Original Message-----
>From: Rex Brooks [mailto:rexb@starbourne.com]
>
>I have to study this further, but, just a reminder: we are building a
>markup language that needs to account for use in scene graphs, and
>plain text, and messaging, and voiceXML, if possible, as well as a
>host of other applications. How we embed, or encode, emotion is one of
>our tasks.
>
>It is important to keep the requirements of the applications/uses in
>mind as we develop the tags, the schemata, the document models, the
>scene graph models, the streaming models/protocols, the adaptations
>of artificial intelligence which use chaos and game theory, the
>psychological models which are informed by that set of schools of
>thought, and of course, informing us overall, semiotics.


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request

