Subject: RE: Of interest to the HM group.
It depends on our boundaries, and that is something we are trying to work out. I probably overstated the "implementational" statement. Our project directors are desperate to get ready for the first meeting. I have been hoping scenario exploration would help. It has for me, at least.

As far as VHML being a HumanML language, I think the contributions from this group can help enormously, just as the work on EMOTE can. There are always overlaps, and these so-called ecotones are very important to identify. Clean boundaries don't usually exist. However, if we consider VHML and EMOTE systems as HumanML consumers, that might help us. We really need some input back from potential HumanML consumers.

We have categories for things such as gestures, emotions, etc. We briefly discussed implementation approaches to these (e.g., scene graph routing). That's OK by me. Rex probably wants something that helps him get the documents ready. What would really be helpful from Andrew is to look at our draft schema fragments and suggest how they can be improved to work with his system. Because an avatar HLAL is only one application being considered, we can then debate that overlap.

That VHML works with VRML systems is a different topic. I asked, and I assume you are, because we are VRMLies. On the other hand, as with any HLAL, it isn't necessary. H-Anim being a VRML spec, I would assume it helps with the interface issues, but I didn't see anything in VHML that would prevent that. Andrew might be able to tell us. On the other hand, when the EMOTE folks looked at our work, they said a single gesture intensity attribute is inadequate for their system. That kind of feedback (if it were explained better) would give us something to compare to VHML. (A rough sketch of that difference appears at the end of this message.)

Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h

-----Original Message-----
From: Niclas Olofsson [mailto:gurun@acc.umu.se]
Sent: Friday, September 07, 2001 3:29 PM
To: raytrace@smtp.cs.curtin.edu.au
Cc: humanmarkup-comment@lists.oasis-open.org
Subject: Re: Of interest to the HM group.

Welcome Andrew,

VHML is indeed interesting for many of us in HumanML. Some of us have our background in VRML and H-Anim. Reading the VHML spec makes me curious just how much your efforts touch on MPEG-4. I noticed you have at least one guy in your project who knows MPEG(-7?). H-Anim has a close relationship to MPEG and seems to match some of your efforts too.

IMHO, it would really help if some of the VHML markup were decoupled from the rendering. <smile>, for example, could perhaps express the intention of the element (a smile) without specifying exactly how it should be performed. As an example (probably a bad one): if I wanted to model Cro-Magnon humans, I'm not sure that showing your teeth would be the expected rendering/behavior of the <smile> element. This worries me. (A sketch of what such a decoupling might look like follows at the end of this message.)

Len has been complaining that we are getting too much into implementational stuff lately. As far as I can tell, it's really only one thread, and it was with respect to SW. But with respect to systems like VHML, I fear that he might be right.

Speaking only for myself, I see HumanML on at least two levels in VHML. First, as a system jacking in somewhere between the media parsing and the expression interpretation. I think a HumanML profile could be used to better interpret how to parse and generate the correct expression that is going to be communicated. The second place I see HumanML profiles is in controlling how an expression will be rendered (see above).
Implementational as I am, I had a pretty clear picture of where HumanML would do the most good. Now I see at least two good places, and they are on different layers. Hmm, need to go back to the drawing board again.
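A minimal sketch of the decoupling Niclas describes, assuming a hypothetical profile mechanism (none of these element or attribute names come from the actual VHML or HumanML drafts):

    <!-- Document markup carries only the intent: -->
    <smile/>

    <!-- A separate, hypothetical rendering profile binds that intent
         to model- or culture-specific behavior: -->
    <profile culture="cro-magnon">
      <render intent="smile" behavior="closed-mouth" teeth="hidden"/>
    </profile>

The same <smile> element could then be rendered differently simply by swapping profiles, without touching the document markup.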
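And a rough sketch of the gesture-intensity point from Len's reply, with invented names throughout; the effort/shape split loosely follows the Laban movement qualities that EMOTE works in terms of:

    <!-- A single scalar attribute, the kind of thing EMOTE found inadequate: -->
    <gesture type="wave" intensity="0.8"/>

    <!-- A richer, hypothetical parameterization along effort and
         shape dimensions: -->
    <gesture type="wave">
      <effort weight="light" time="sustained" flow="free" space="indirect"/>
      <shape horizontal="spreading" vertical="rising" sagittal="advancing"/>
    </gesture>

A consumer like EMOTE could map the second form onto its own movement parameters, while the first form gives it only one number to work with.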