Subject: RE: HM.VR_AI: Goals and Overview : HumanML_VR_AI Facilitator


These do add another level of interpretation.  The issue is that this approach requires secondary code lists.
You would see these as enumerated values.  There are mapping issues.
 
1.  There is a language issue.  Codes will be hard to keep in sync across multiple languages with inexact semantics.
 
In all cases, context is key.  Smile width = 53% is fine as long as the local measurements and the local "mouth"
can smile "a little" and we can determine a shape for "a little".  That is rendering and it can be done.  It is a little
harder to provide a code range of interpretations.  IOW, if the smile is "just a little", is that a "wry smile", a
"that isn't very funny but you must have intended funny", and so on?  Five inches simply means you have a
measurement module in effect for mapping measurement codes to some precise rendering form.  One
should be able to change the code list to suit the application.  Thus, <smile width="53%" /> is acceptable,
but the receiving system has to be able to figure out, given a certain mouth, what 53% is rendered as.
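
As a rough sketch, a secondary code list for one particular mouth model might look something like the following (the element and attribute names here are purely hypothetical, not from any HumanML draft):

  <codeList name="smileWidth" appliesTo="local-mouth-model">
    <code label="a little" min="10%" max="30%"/>
    <code label="wry"      min="15%" max="25%" asymmetry="left"/>
    <code label="wide"     min="60%" max="90%"/>
  </codeList>

A receiving system with a different mouth would swap in its own ranges, and <smile width="53%" /> would then be resolved against whichever list is in effect.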
 
2.  There is a mapping task.  How to render ecstasy in, for example, both bodily gesture and vocal intonation.
 
EMOTE maps emotive expressions to a compound shape.  That is the uniqueness of the Laban code lists.
They have commented that intensity factors are inadequate, and although an explanation was requested on the list, I have
not seen a reply that explains why they need this.  On the other hand, how would Laban shape codes help
us if the rendering is audio?  In this case, another set of codes is required that describes the loudness,
speed of delivery, ADSR for certain words, and other values for the expression.  One would expect these
codes to be provided by the renderer.
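
A renderer-supplied audio code list might look roughly like this (again, purely illustrative names, not part of any existing code set):

  <audioProfile emotion="ecstasy">
    <loudness value="+6dB"/>
    <rate value="1.3"/>  <!-- speed of delivery, relative to neutral -->
    <adsr attack="30ms" decay="80ms" sustain="0.8" release="200ms"/>
  </audioProfile>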
 
len
 
-----Original Message-----
From: Ranjeeth Kumar Thunga [mailto:rkthunga@humanmarkup.org]
Sent: Monday, October 08, 2001 3:31 PM
To: OASIS Comment
Subject: Re: HM.VR_AI: Goals and Overview : HumanML_VR_AI Facilitator

BTW, thanks very much, Rob, for this proposal document.  This can help us piece together lots of the information we've been discussing related to VR and AI topics.
 
What I feel is an important litmus test for where HumanML belongs amongst other processing and rendering languages is that the element/attribute values are to be human readable.
 
For example, <happy level=".434"/> would not be acceptable, whereas <happy level="ecstasy"/> might be.  Or <smile width="53%"/> may not be acceptable, but <smile width="5in"/> would be, or perhaps <smile width="wide"/>.
 
This would, very importantly, also apply to proxemics and chronemics.
 
This ensures that HumanML values are, in fact, what we as humans use, which matches its stated purpose of being an interface on the 'human' layer, not on the layers below.  Regardless, mappings between machine-sensible markup and human-sensible markup could be formally and explicitly described in code, or in either custom or standard XSLT.
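
As a sketch of the XSLT route, a custom stylesheet could map human-sensible values onto machine-sensible ones for a particular renderer (the specific labels and percentages below are assumptions for illustration, not standard values):

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="smile">
      <smile>
        <xsl:attribute name="width">
          <!-- human-sensible labels mapped to renderer percentages -->
          <xsl:choose>
            <xsl:when test="@width='wide'">75%</xsl:when>
            <xsl:when test="@width='a little'">20%</xsl:when>
            <xsl:otherwise><xsl:value-of select="@width"/></xsl:otherwise>
          </xsl:choose>
        </xsl:attribute>
      </smile>
    </xsl:template>
  </xsl:stylesheet>

Run against <smile width="wide"/>, a stylesheet along these lines would emit <smile width="75%"/>; a different renderer would simply ship its own mapping.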
 
 
Ranjeeth Kumar Thunga
 
----- Original Message -----
From: Rex Brooks
Sent: Friday, October 05, 2001 7:02 PM
Subject: RE: HM.VR_AI: Goals and Overview : HumanML_VR_AI Facilitator

Sorry, part of that was my contribution and the two parts of the sentence didn't marry up completely.  The structured approach refers to our use of XML and RDF schemata in our primary work, and our ongoing investigation of the EMOTE work, Project Oz, Perlin's work, and now Rob's work in AI, such that they tell us what requirements they have and we see whether we can accommodate them, or at least not create outright contradictory vocabularies.  These bridges are the middleware which will use HumanML to do work with the lower-level languages like X3D/h-anim, with and apart from MPEG-4, SMIL, SOAP, and OIL.

Is that a little more clear?

Ciao,
Rex

At 2:56 PM -0500 10/5/01, Bullard, Claude L (Len) wrote:
I don't understand the following. Please clarify:
 
"To be prepared for this, we are establishing a structured approach to provide for making HumanML readily useful for applications that bridge the "real" and "virtual" worlds can experience a potential imbalance in available attributes, and it may be necessary to provide a mechanism to adjust the mapping of attributes from the virtual to the real. "
 
Also note that both chronemics and proxemics have scale issues.  Personal real time and historical time have their analogues in spatial dimensions, which include both personal space and geographic space.  In the case of proxemics, I anticipate the use of concepts and data objects from the Geometry Modeling Language for position-dependent services involving geometric space.
 
len
 


-- 
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request

