humanmarkup-comment message

Subject: Fw: [humanmarkup] Digest Number 170


(Again, yesterday's posts forwarded from YahooGroups... a bit fewer messages
on the 'old' board today. I think we are on the right track!)

Ranjeeth Kumar Thunga
rkthunga@humanmarkup.org
(646) 456-9076


----- Original Message -----
From: <humanmarkup@yahoogroups.com>
To: <humanmarkup@yahoogroups.com>
Sent: Friday, August 24, 2001 6:49 AM
Subject: [humanmarkup] Digest Number 170



To unsubscribe send an email to:
humanmarkup-unsubscribe@yahoogroups.com


------------------------------------------------------------------------

There are 5 messages in this issue.

Topics in this digest:

      1. Initial Phase 1 Documents
           From: "Ranjeeth Kumar Thunga" <rkthunga@humanmarkup.org>
      2. Re: [h-anim] HumanML Thoughts (repost)
           From: allbeck@gradient.cis.upenn.edu
      3. RE: Re: [h-anim] HumanML Thoughts
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
      4. RE: Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
      5. RE: Re: [h-anim] HumanML Thoughts
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>


________________________________________________________________________
________________________________________________________________________

Message: 1
   Date: Thu, 23 Aug 2001 07:17:28 -0400
   From: "Ranjeeth Kumar Thunga" <rkthunga@humanmarkup.org>
Subject: Initial Phase 1 Documents

The most important thing is to ensure that we are all on the same page (or
in this case, the same Discussion List).

Hopefully, with that being clear (;-)), we'll center on developing the
following deliverables, which we should get to Working Draft status by
September 17th, 2001 (our first meeting).

Much work has already been done in the YahooGroups in these areas.  At this
stage, we will be formalizing the documents and ideas previously created, as
well as incorporating new ones.  These will set the tone for the TC and
incorporate Manos's suggested project progression.

Timetable:  Now until September 17th, 2001

    1) Domain:  Taxonomies to include (what is part of HumanML, what isn't)
    2) Applications:  What are the major application areas of HumanML
    3) Requirements:  What are the design features of HumanML (based on
Applications)
    4) New Deliverable Schedule:  (includes currently listed deliverables
[1] as well as additional applications, alternative classifications,
cross-correlation efforts)

If this sounds like a reasonable set of "pre-meeting" deliverables, then we
can start organizing our ideas in this fashion... So then, what *is*
HumanMarkup...?[2]



<manos>
1) Study current non-XML popular classification systems (from DDC[1] to
NAICS[2] to the "Classification of Living Things"[4] (interesting, it
includes humans) to whatever we would like to focus on by building
modules). This would provide us with a jumpstart on any topic we might
want to get our hands into. It will also help in transitioning existing
knowledge bases.

2) HumanML will be used mostly as metadata (let's not argue about the
term context now ;-). I think it is wise to formally import existing
namespaces (e.g. the Dublin Core [4]) to provide interoperability. Of
course this import can be hidden or even extended. For example, we can
define properties of type dc:creator with more specific context. This
can be done via rdfs:subPropertyOf [5] (thanks Sean! Will anyone believe
I had the absurd idea to use XML names for this functionality...
Sheesh). You've got to love those OOP concepts in RDF.

3) Define requirements. Per module, probably.

4) Develop our own classifications. Different people will head in
different directions on this, but that is expected and wanted.

5) Start working on real applications. This sector is much dependent on
the above three.
</manos>
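[Editor's note: Manos's point 2, refining dc:creator via rdfs:subPropertyOf,
can be sketched roughly as follows. The hm namespace and the
hm:physicalDescriber property are hypothetical, invented here purely for
illustration; only the rdf, rdfs, and dc namespace URIs are standard.]

```xml
<!-- A minimal RDF Schema sketch. hm:physicalDescriber is a hypothetical
     HumanML property; declaring it a subproperty of dc:creator means any
     hm:physicalDescriber statement also implies a dc:creator statement,
     so Dublin Core tools still interoperate with the more specific term. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:hm="http://example.org/humanml#">
  <rdf:Property rdf:about="http://example.org/humanml#physicalDescriber">
    <rdfs:subPropertyOf
        rdf:resource="http://purl.org/dc/elements/1.1/creator"/>
    <rdfs:comment>dc:creator with more specific context.</rdfs:comment>
  </rdf:Property>
</rdf:RDF>
```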

[1] http://www.oasis-open.org/committees/humanmarkup
[2] http://groups.yahoo.com/group/humanmarkup


-----
Ranjeeth Kumar Thunga
HumanMarkup Chair
rkthunga@humanmarkup.org
(646) 456-9076





________________________________________________________________________
________________________________________________________________________

Message: 2
   Date: Thu, 23 Aug 2001 13:01:32 -0000
   From: allbeck@gradient.cis.upenn.edu
Subject: Re: [h-anim] HumanML Thoughts (repost)

Date: Wed, 22 Aug 2001 10:52:14 EDT
From: Norm Badler <badler@central.cis.upenn.edu>

One parameter for gesture intensity is insufficient for a realistic
spread of human gesture performance.  See D. Chi, M. Costa, L. Zhao, and
N. Badler, "The EMOTE Model for Effort and Shape," ACM SIGGRAPH '00,
New Orleans, LA, July 2000, pp. 173-182
(http://www.cis.upenn.edu/~badler/siggraph00/emote.pdf).

Norm
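
[Editor's note: Badler's objection, that one intensity scalar cannot span
realistic gesture variation, can be illustrated with a hypothetical markup
sketch. The element and attribute names below are invented for illustration
only and appear in no published HumanML or h-anim schema; the four Effort
factors and their polarities come from Laban Movement Analysis as used by
the EMOTE model.]

```xml
<!-- Hypothetical sketch only; not part of any published schema. -->

<!-- Single-parameter form: one scalar collapses all motion qualities. -->
<gesture name="wave" intensity="0.8"/>

<!-- EMOTE-style form: the four Laban Effort factors vary independently,
     each on a nominal -1..+1 continuum:
       space:  indirect (-1) .. direct (+1)
       weight: light (-1)    .. strong (+1)
       time:   sustained (-1) .. sudden (+1)
       flow:   free (-1)     .. bound (+1)  -->
<gesture name="wave">
  <effort space="0.6" weight="-0.3" time="0.8" flow="-0.5"/>
</gesture>
```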


>  Hi Everyone,
>
>  Actually, HumanML will benefit most from a spare, very functionally
>  practical h-anim 2001 spec that incorporates a way to let the current
>  segmented facial structure work for a continuous mesh face by either
>  allowing the segments to map to an envelope of vertices or translate
>  directly to a set of displacers for the vertices that correspond to
>  the segments, also an envelope. (For my purposes, I would prefer that
>  the muscles of the face be modeled within the segment structure so
>  that percentages of contraction for each muscle could be specified
>  for any given expression.)
>
>  That, as with the continuous mesh, requires a good weighting engine
>  for the influence of overlapping envelopes or shared vertices. The
>  reason for focusing on facial gestures/expressions is that
>  talking-head windows are the most likely initial applications that
>  people in business or messaging systems (as opposed to using a SMIL
>  video component or otherwise synced video signal) will want to adopt
>  for animated human agents, or sales rep/CRM bots. If animation
>  encapsulation, or that particular part of it, could be advanced, that
>  would be nice.
>
>  Secondarily, as noted previously by Cindy I believe, UMEL needs to be
>  involved in setting up a library of gestural behaviors which can take
>  0-1 decimal values for intensities. These behaviors probably should
>  not be collected until after the 2001 spec is in its final approval
>  phases, although it would be helpful to work on them as we go along.
>  I will certainly be doing that as I can.
>
>  (I have asked Mark Callow how he rated Max 4 for its upgraded
>  Character Studio, and he tells me that their upgraded bones (segments
>  for our purposes) system is definitely worth the upgrade, so I will
>  be doing that for my work in this area sometime in the month of
>  October. My plate is just too full with getting the class hierarchy
>  of HumanML (my area of specialty) in shape and handling organization
>  duties as designated secretary.)
>
>  The continuous mesh for h-anim was the development I was waiting for
>  in order to move forward. In the meantime, HumanML came along and
>  promises to give us the engine to drive X3D and handle all the
>  web-based object-swapping work at a level above the nuts and bolts we
>  work with.
>
>  Now behaviors can be developed that won't be hopelessly outdated by
>  a continuous mesh spec. We already have the attachment points, and
>  modeling cloth has come a long way in the meantime, as has bandwidth
>  and processor power, so clothing, accessories, and avatar-wearable
>  gadgets are in the offing. 64-bit is in the pipeline now, and that is
>  the penultimate piece of the puzzle for a workable cyberspace as WE
>  see it. Whopptee-bleeping-dooo!
>
>  Ciao,
>  Rex
>
>  At 10:03 AM +0100 8/22/01, James Smith wrote:
>  >Hi all,
>  >
>  >It seems to me that the thing that would be most useful to the HumanML
>  >people, as far as H-Anim is concerned, is the specification of the
>  >animation encapsulation method. Was any progress made on this at the
>  >SIGGRAPH meeting?
>  >
>  >Apologies for not being able to attend, by the way...
>  >
>  >cheers,
>  >--
>  >James Smith - Vapour Technology - james@vapourtech.com
>  >www: http://www.vapourtech.com/          ICQ: 53372434
>  >PGP: B42F 9ACD D166 018C 57F8 8698 098E FAA2 C03B C9ED
>  >======================================================
>  >"There's no explaining the things that might happen;
>  >  there's now a new home for technology in fashion."
>  >         - The Clint Boon Experience
>  >======================================================
>  >
>  >------------------------------------------------------------------
>  >To remove yourself from the h-anim list, mail the command
>  >`unsubscribe h-anim' to Majordomo@web3d.org
>
>
>  --
>  Rex Brooks
>  GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
>  W3Address: http://www.starbourne.com
>  Email: rexb@starbourne.com
>  Tel: 510-849-2309
>  Fax: By Request




________________________________________________________________________
________________________________________________________________________

Message: 3
   Date: Thu, 23 Aug 2001 08:22:48 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Re: [h-anim] HumanML Thoughts

I'm pulling Carol from the reply list because we are probably
bumming her out with design discussion.

Rex or Niclas:  the UPenn work has never been publicly discussed
on the list.  If you have studied it, can you provide a summary
of what Laban Movement Analysis and the EMOTE engine do and
how they work?   It may be that these are implementations of the
kind of middleware that Cindy states is HumanML's to do, but it
is also likely that these are implementation solutions people
can use but are not necessarily useful for the spec other than
to show the spec can be used by them.  In other words, they
are systems that can consume HumanML but don't define it.
Without more details, it's hard to tell.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Niclas Olofsson [mailto:gurun@acc.umu.se]
Sent: Wednesday, August 22, 2001 6:58 PM
To: humanmarkup@yahoogroups.com
Cc: James Smith; carol.geyer@oasis-open.org
Subject: Re: [humanmarkup] Re: [h-anim] HumanML Thoughts


<?xml version="1.0" ?>

Rex Brooks wrote:
> Nic, the time is shortly to arrive for just that exploration. There
> is a thread with NormBadler at UPenn's Human Simulation Group with
> Matt Beitler et al,  that I will eventually get into some kind of
> presentable form for both HumanML and H-Anim--using Laban Movement
> Analysis and Badler's EMOTE engine that fills the bill as far as I
> can see right now.

Very interesting, but nope, that alone will not do it. A very good
starting point though. Looking at EMOTE, it appears to me as yet another
level of abstraction that perhaps would make things easier. It can
perhaps provide a level of abstraction above FAPs and provide H-Anim
(or whatever human animation format) with a somewhat more dynamic
presentation. At the same time it provides authors and computers with a
fuzzier means of communication.

But (a big BUT), in regular software design terms, most of this stuff
belongs in the outermost presentation layer. EMOTE gets close to filling
in as the presentation logic (backed by the h-anim representation). I'm
looking for the layer beneath it, the business logic of human
communication. Does that make any sense? Probably not. But I do collective
design. The system we are building right now took me since January to
design, but we built the core in only 3 weeks. I think this will work
pretty much the same, only it will take a couple of years instead. If
this were ready for prime time I'd be the first to start a task force
around it. But it isn't. It will take years. And I'll be there then.
Waiting. After all, this is what MY life is all about. I'm 30 today. I
have time :-)


________________________________________________________________________
________________________________________________________________________

Message: 4
   Date: Thu, 23 Aug 2001 07:05:36 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: RE: Re: [h-anim] HumanML Thoughts

Good work. Right. It is part of the middleware. I'll try to enlist
Jan to help. I thought this had moved to OASIS already? I just added
it.


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 5
   Date: Thu, 23 Aug 2001 09:31:03 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Re: [h-anim] HumanML Thoughts

Working my way up from the bottom of the mail,
I just saw Norm's comment on the need to look
at gestural intensity and expand it.  That
is the kind of feedback that is most immediately
useful.  I'll read the paper and reply.

We can evaluate any system and see what it
has to offer.  I could fill in the schema
immediately with all of the types developed
for public safety and add, oh, a few hundred
elements.  But that is using HumanML as a
means to productize, and even if it slows
us down, we should be careful to ensure
we are spec'ing repurposable datasets.
This is a difficult balancing act, of course:
too abstract and we get inefficient garbage-bag
design; too specific and we can't repurpose.
So when proposing data types, we have to ask
where they do or don't add to the job of
describing humans and human communications.

Classification is an art form.  There are
techniques and rules of thumb to guide us,
but often they are just rulesOfDaToolz.
It takes a bit of intuition to work out
the rest.  That is why things like AI
tended to fall apart in the crunch.

I just subscribed to the OASIS list.
But we are in motion. :-)

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Rex Brooks [mailto:rexb@starbourne.com]
Sent: Thursday, August 23, 2001 9:06 AM
To: humanmarkup@yahoogroups.com;
humanmarkup-comment@lists.oasis-open.org
Cc: James Smith
Subject: RE: [humanmarkup] Re: [h-anim] HumanML Thoughts


Good work. Right. It is part of the middleware. I'll try to enlist
Jan to help. I thought this had moved to OASIS already? I just added
it.


________________________________________________________________________
________________________________________________________________________



Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/





