OASIS Mailing List Archives
humanmarkup-comment message



Subject: EMOTE


Has anyone besides Rex and myself reviewed the 
EMOTE paper (Chi, Costa, Zhao, Badler)?  The 
work is very good and points out issues of animation 
based on movement qualities that *reveal* inner 
psychological conditions.  Those conditions are thus 
rendered, but I note that there is a difference 
between a language that classifies inner states 
and one that tries to render them using intermediate 
high-level parameters such as Shape and Effort.

But the paper is highly revealing, and studying it 
may offer us some insights into the roles of the 
HumanML products/schemas/taxonomies.  Some thoughts:

1.  It is an existence proof that a high-level language 
can be used to control the "qualitative aspects 
of movements".
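For concreteness, EMOTE's Effort component (drawing on Laban Movement Analysis) is usually described as four factors, each varying between two qualitative extremes.  A minimal Python sketch, assuming the four factors Space, Weight, Time, and Flow each range over [-1, +1]; the class and field names here are mine, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Effort:
    """Four LMA-style Effort factors, each in [-1.0, +1.0]
    (roughly: space indirect..direct, weight light..strong,
    time sustained..sudden, flow free..bound)."""
    space: float = 0.0
    weight: float = 0.0
    time: float = 0.0
    flow: float = 0.0

    def clamped(self) -> "Effort":
        # Keep every factor inside the legal [-1, +1] range.
        c = lambda v: max(-1.0, min(1.0, v))
        return Effort(c(self.space), c(self.weight),
                      c(self.time), c(self.flow))

# A "punch"-like quality: direct, strong, sudden, fairly bound.
punch = Effort(space=1.0, weight=1.0, time=1.0, flow=0.8).clamped()
```

The point of the sketch is only that a handful of numeric parameters, not keyframes, carry the qualitative description.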

2.  The parameters chosen in a well-thought-out 
system created by a subject matter expert, 
in this case a choreographer, work well for the 
domain, in this case movement.  So our toolkit 
approach is viable but may have to be mappable. 
LMA is based on observation.  We have said that 
HumanML kit constructs, applied correctly, would 
enable observers to capture instances of shared 
semantic definitions, e.g., psychological observation.

3.  Even with such a language, the expressiveness 
is only realized in the implementation of algorithms 
appropriate to the presentation domain (essentially, 
3D representations of arms and torso), and part 
of the test of the applicability of such a language 
is proving that such algorithms can be devised or 
found.  In the case of EMOTE, the assertion is that 
our single intensity parameter would be inadequate 
for the EMOTE algorithms (or that is how I read it).
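Why a single scalar would be inadequate is easy to illustrate: collapsing four Effort factors into one intensity value discards exactly the distinctions the EMOTE algorithms trade on.  A toy demonstration, with mapping functions invented for this sketch rather than taken from the paper:

```python
def intensity_to_effort(intensity):
    """Naive expansion: smear one scalar evenly across all four
    Effort factors.  (Illustrative only - there is no principled
    way to recover four independent values from one.)"""
    return {"space": intensity, "weight": intensity,
            "time": intensity, "flow": intensity}

def effort_to_intensity(effort):
    """Collapse four factors to one scalar (mean of absolute
    values) - the directional information is lost."""
    return sum(abs(v) for v in effort.values()) / 4

# Two very different movement qualities...
wring = {"space": -0.8, "weight": 0.8, "time": -0.8, "flow": 0.8}
punch = {"space":  0.8, "weight": 0.8, "time":  0.8, "flow": 0.8}

# ...collapse to the same single intensity:
effort_to_intensity(wring) == effort_to_intensity(punch)  # True
```

Any single-parameter scheme faces this many-to-one collapse, whatever collapsing function is chosen.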

4.  Adopting such parameters requires us to 
determine whether they are expressive enough for other 
applications of HumanML or whether they are only required 
for animating 3D characters.  In other words, 
should we adopt the EMOTE language requirements, or 
say this is a middleware issue?  EMOTE would work 
well for H-anim.  Would it work for SVG?  Is it 
only appropriate for animation, and how can one 
prove the assertion in the paper that LMA analysis 
and then EMOTE codes can actually reflect inner 
psychological conditions?  Should HumanML only 
capture a description of the conditions and rely 
on implementations to communicate these to the 
EMOTE middleware, which then communicates to 
H-anim the requirements for key-frame adjustments, etc., 
to get the right "Shape and Effort"?
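The middleware reading of point 4 can be sketched as a pipeline: HumanML captures only the condition, and a separate layer maps it to Effort parameters that a renderer such as an H-anim player would turn into key-frame adjustments.  Every name and numeric mapping below is hypothetical, made up purely to show the division of labor:

```python
# Hypothetical condition -> (space, weight, time, flow) table.
# The values are illustrative, not derived from LMA or EMOTE.
STATE_TO_EFFORT = {
    "anger":   ( 0.9,  0.9,  0.9,  0.7),
    "sadness": (-0.5, -0.7, -0.8, -0.3),
    "calm":    ( 0.0, -0.3, -0.5, -0.6),
}

def describe(condition: str) -> dict:
    """HumanML side: capture only the inner condition,
    saying nothing about how it should be rendered."""
    return {"condition": condition}

def emote_middleware(description: dict) -> dict:
    """Middleware side: translate the description into Effort
    parameters for a downstream animation engine."""
    s, w, t, f = STATE_TO_EFFORT[description["condition"]]
    return {"space": s, "weight": w, "time": t, "flow": f}

msg = describe("sadness")
params = emote_middleware(msg)  # what H-anim-level code would consume
```

On this division, the schema stays presentation-neutral: swapping the middleware table changes the rendering without touching the HumanML description.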

Len 
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h



