

Subject: Fw: [humanmarkup] Digest Number 169


As Joe suggested, I will forward Yahoo! Groups digests over to the OASIS list
as needed (until we fully transition over)...enjoy.

-----
Ranjeeth Kumar Thunga
HumanMarkup Chair
rkthunga@humanmarkup.org
(646) 456-9076

----- Original Message -----
From: <humanmarkup@yahoogroups.com>
To: <humanmarkup@yahoogroups.com>
Sent: Thursday, August 23, 2001 6:32 AM
Subject: [humanmarkup] Digest Number 169



------------------------------------------------------------------------

There are 18 messages in this issue.

Topics in this digest:

      1. Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
      2. Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: "Manos Batsis" <m.batsis@bsnet.gr>
      3. Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: Rex Brooks <rexb@starbourne.com>
      4. RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
      5. Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: clayton cottingham <drfrog@smartt.com>
      6. RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
      7. RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
      8. Re: [h-anim] HumanML Thoughts
           From: Niclas Olofsson <gurun@acc.umu.se>
      9. Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)
           From: Niclas Olofsson <gurun@acc.umu.se>
     10. RE: Re: [h-anim] HumanML Thoughts
           From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
     11. RE: Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
     12. Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
     13. Re: Re: [h-anim] HumanML Thoughts
           From: Niclas Olofsson <gurun@acc.umu.se>
     14. Lists
           From: "Ranjeeth Kumar Thunga" <rkthunga@humanmarkup.org>
     15. Re: Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
     16. Re: Re: [h-anim] HumanML Thoughts
           From: "Ranjeeth Kumar Thunga" <rkthunga@interposting.com>
     17. Re: Re: [h-anim] HumanML Thoughts
           From: Rex Brooks <rexb@starbourne.com>
     18. Slashdot: Human Markup Language
           From: clayton <drfrog@smartt.com>


________________________________________________________________________
________________________________________________________________________

Message: 1
   Date: Wed, 22 Aug 2001 07:22:12 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: Re: [h-anim] HumanML Thoughts

Hi Everyone,

Actually, HumanML will benefit most from a spare, very functionally
practical h-anim 2001 spec that incorporates a way to let the current
segmented facial structure work for a continuous mesh face, by either
allowing the segments to map to an envelope of vertices or translating
them directly to a set of displacers for the vertices that correspond to
the segments (also an envelope). (For my purposes, I would prefer that
the muscles of the face be modeled within the segment structure so
that percentages of contraction for each muscle could be specified
for any given expression.)

That, as with the continuous mesh, requires a good weighting engine
for the influence of overlapping envelopes or shared vertices. The
reason for focusing on facial gestures/expressions is that
talking-head windows are the most likely initial applications that
people in business or messaging systems (as opposed to using an SMIL
video component or otherwise synced video signal) will want to adopt
for animated human agents, or sales rep/CRM bots. If animation
encapsulation, or that particular part of it, could be advanced, that
would be nice.

Secondarily, as noted previously by Cindy I believe, UMEL needs to be
involved in setting up a library of gestural behaviors which can take
0-1 decimal values for intensities. These behaviors probably should
not be collected until after the 2001 spec is in its final approval
phases, although it would be helpful to work on them as we go along.
I will certainly be doing that as I can.
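
For illustration only, one entry in such a behavior library might look
something like the sketch below; the element and attribute names are
hypothetical and not taken from UMEL, H-Anim, or any HumanML draft.

<?xml version="1.0"?>
<!-- Hypothetical sketch: named gestural behaviors, each with a 0-1
     decimal intensity as described above -->
<gestureLibrary>
  <gesture name="smile" intensity="0.7"/>
  <gesture name="browRaise" intensity="0.25"/>
  <gesture name="headNod" intensity="1.0"/>
</gestureLibrary>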

(I have asked Mark Callow how he rated Max 4 for its upgraded
Character Studio, and he tells me that their upgraded bones (segments
for our purposes) system is definitely worth the upgrade, so I will
be doing that for my work in this area sometime in the month of
October. My plate is just too full with getting the class hierarchy
of HumanML (my area of specialty) in shape and handling organization
duties as designated secretary.)

The continuous mesh for h-anim was the development I was waiting for
in order to move forward. In the meantime HumanML came along and
promises to give us the engine to drive X3D and handle all the
web-based object-swapping work at a level above the nuts and bolts we
work with.

Now behaviors can be developed that won't be helplessly outdated by
a continuous mesh spec. We already have the attachment points, and
modeling cloth has come a long way in the meantime, as has bandwidth
and processor power, so clothing, accessories, and avatar-wearable
gadgets are in the offing. 64-bit is in the pipeline now, and that is
the penultimate piece of the puzzle for a workable cyberspace as WE
see it. Whopptee-bleeping-dooo!

Ciao,
Rex

  At 10:03 AM +0100 8/22/01, James Smith wrote:
>Hi all,
>
>It seems to me that the thing that would be most useful to the HumanML
>people, as far as H-Anim is concerned, is the specification of the
>animation encapsulation method. Was any progress made on this at the
>SIGGRAPH meeting?
>
>Apologies for not being able to attend, by the way...
>
>cheers,
>--
>James Smith - Vapour Technology - james@vapourtech.com
>www: http://www.vapourtech.com/          ICQ: 53372434
>PGP: B42F 9ACD D166 018C 57F8 8698 098E FAA2 C03B C9ED
>======================================================
>"There's no explaining the things that might happen;
>  there's now a new home for technology in fashion."
>         - The Clint Boon Experience
>======================================================
>
>
>
>------------------------------------------------------------------
>To remove yourself from the h-anim list, mail the command
>`unsubscribe h-anim' to Majordomo@web3d.org


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 2
   Date: Wed, 22 Aug 2001 18:24:38 +0300
   From: "Manos Batsis" <m.batsis@bsnet.gr>
Subject: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)


Hallo Rex!

> -----Original Message-----
> From: Rex Brooks [mailto:rexb@starbourne.com]

> Actually, HumanML will benefit most from a spare, very functionally
> practical h-anim 2001 spec that incorporates a way to let the current
> segmented facial structure work for a continuous mesh face by either
> allowing the segments to map to an envelope of vertices or translate
> directly to a set of displacers for the vertices that correspond to
> the segments, also an envelope.

An excellent thought my friend, but out of our current context IMHO. My
sincere apologies for not jumping in with a more positive attitude on
this; I just believe we have a lot more pressing matters than rendering
issues (we are supposed to be presentation neutral ;-) -- this is for an
application to handle; hey, I sound like a tape again).
A far more pressing need is classification. Taxonomies/ontologies are by
far the most essential thing that HumanML will be using (whatever aspect
of Human Topics those may cover).

Besides being extremely pressing (again, as I see it at least), this
gets even more complicated. The problem is overlapping efforts, for
example the W3C Ontology group.

We face a rather critical phase here. We will have to give priority to
short-term work, while also planning long-term targets.

What we actually need, in short, is progress leading to real applications.

I suggest we start discussion on future directions. My quarter of a
euro:

1) Study current popular non-XML classification systems (from DDC[1] to
NAICS[2] to "Classification of Living Things"[3] (interesting, includes
humans) to whatever we would like to focus on by building modules). This
would provide us with a jumpstart on any topic we might want to get our
hands on. It will also help in transitioning existing knowledge bases.

2) HumanML will be used mostly as metadata (let's not argue about the term
context now ;-). I think it is wise to formally import existing
namespaces (e.g. the Dublin Core [4]) to provide interoperability. Of
course this import can be hidden or even extended. For example, we can
define properties of type dc:creator with more specific context. This
can be done via rdfs:subPropertyOf [5] (thanks Sean! Will anyone believe
I had the absurd idea to use XML names for this functionality...
Sheesh). You've got to love those OOP concepts in RDF. (A minimal sketch
of this kind of import follows after this list.)

3) Define requirements. Per module probably.

4) Develop our own classifications. Different people will head in
different directions on this, but that is expected and wanted.

5) Start working on real applications. This is heavily dependent on
the points above.
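
To make point 2 above concrete, here is a minimal sketch of that kind of
import. The RDF, RDF Schema, and Dublin Core namespaces are the real ones;
the hm:profileAuthor property name is purely hypothetical and not part of
any HumanML draft.

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <!-- Hypothetical HumanML property declared as a specialization of
       dc:creator, so generic Dublin Core tools can still understand it -->
  <rdf:Property rdf:about="http://example.org/humanml#profileAuthor">
    <rdfs:subPropertyOf
        rdf:resource="http://purl.org/dc/elements/1.1/creator"/>
    <rdfs:comment>The person who authored a human profile
      (illustrative only)</rdfs:comment>
  </rdf:Property>
</rdf:RDF>

Any statement made with hm:profileAuthor is then also a dc:creator statement
to any Dublin Core-aware tool, which is the interoperability win.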

Of course, we also have to establish a list of deliverables before the
above get resolved. One has already been proposed but I am sure we can
be more efficient.

As all of us understand by now, HumanML will not be a mythical
schema that will solve everything. HumanML will be a set of modules (hey,
we had figured that out from day one), built with a lot of work.

Since we are a formal TC now, it is time to toss ideas on the table,
sort them out, and assign them to the working groups as they are formed.
The first round of this procedure will probably take long enough for
everyone to share their thoughts. Members that have not joined yet
will have plenty of time to catch up.


[1] http://www.oclc.org/oclc/fp/index.htm
[2] http://www.census.gov/epcd/www/naics.html
[3] http://anthro.palomar.edu/animal/default.htm
[4] http://www.dublincore.org
[5] http://www.w3.org/TR/2000/CR-rdf-schema-20000327/#s2.3.3


Kindest regards,

Manos


________________________________________________________________________
________________________________________________________________________

Message: 3
   Date: Wed, 22 Aug 2001 09:12:51 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)

Hi Manos,

The message in question was directed primarily at h-anim, and was
copied to the HumanML list. There are several phases going on at once
here. The one that this message deals with is the specific connection
to the X3D/h-anim effort, which is at the level below ours but
just above actual applications, and which absolutely must be in
agreement with ours if that specific use of HumanML is to work in the
practical world.

As for HumanML, we are in a recruitment phase, and this is
tangentially related to that since I am hoping to recruit interest if
not actual participation from among this group.

At 6:24 PM +0300 8/22/01, Manos Batsis wrote:
>Hallo Rex!
>
>>  -----Original Message-----
>>  From: Rex Brooks [mailto:rexb@starbourne.com]
>
>>  Actually, HumanML will benefit most from a spare, very functionally
>>  practical h-anim 2001 spec that incorporates a way to let the current
>>  segmented facial structure work for a continuous mesh face by either
>>  allowing the segments to map to an envelope of vertices or translate
>>  directly to a set of displacers for the vertices that correspond to
>>  the segments, also an envelope.
>
>An excellent thought my friend, but out of our current context IMHO. My
>sincere apologies for not jumping in with a more positive attitude on
>this; I just believe we have a lot more pressing matters than rendering
>issues (we are supposed to be presentation neutral ;-) this is for an
>application to handle - Hey I sound like a tape again).
>A far more pressing need is classification. Taxonomies/Ontologies are by
>far the most essential thing that HumanML will be using (whatever aspect
>of Human Topics those may cover).
>
>Besides this getting extremely pressing (again, as I see it at least),
>it gets even more complicated. The problem is overlapping efforts. For
>example, the W3C Ontology group.
>
>We face a rather critical faze here. We will have to give priorities to
>short term work, while also planning long term targets.

I read you five by five. This work is actually well underway. I won't
belabor the details, but real work is being done.

>What we actually need in short is progress leading to real applications.
>
>I suggest we start discussion on future directions. My quarter of an
>euro:
>
>1) Study current non XML popular classification systems (from DDC[1] to
>NAICS[2] to "Classification of Living Things"[4] (interesting, includes
>humans) to whatever we would like to focus on by building modules). This
>would provide us with a jumpstart on any topic we might want to get our
>hands in. It will also help transitioning existing knowledge bases.

Good thought. No disagreement here.

>2) HumanML will be used mostly as metadata (let's not argue on the term
>context now ;-). I think it is wise to formally import existing
>namespaces (e.g. the Dublin Core [4]) to provide interoperability. Of
>course this import can be hidden or even extended. For example, we can
>define properties of type dc:creator with more specific context. This
>can be done via rdfs:subPropertyOf [5] (thanks Sean! Will anyone believe
>I had the absurd idea to use XML names for this functionality...
>Sheesh). You got to love those OOP concepts in RDF.

Yes, thanks Sean, thanks Manos, from absurd ideas concrete realities
grow, even if other than initially intended--as long as they work, of
course.


>3) Define requirements. Per module probably.

Just so you know, I think we need to get this copied over to the
OASIS lists, since actual operation gets underway today. Nice timing.

>4) Develop our own classifications. Different people will head towards
>different ways on this, but it is expected and wanted.

No problem. Heck, I've even got an ARGOUML project devoted to it. I
did include you on the list of folks I wanted to join. Hint.

>5) Start working on real applications. This sector is much dependent on
>the above three.

This is underway as well, and this message is about as real world as
you can get if you think talking-head bots are about as no-brainer as
apps get--across the retail board for online transactions. They will
be de rigueur--just good manners, as necessary as a website once one
gets up and running. Would you want your competitor to have a
seriously polite bot talking to your customers when you didn't?

>Of course, we also have to establish a list of deliverables before the
>above get resolved. One has already been proposed but I am sure we can
>be more efficient.

Check out the website. Deliverables are there.

>As all of us understand by now, HumanML will not be a mythical
>schema that will solve everything. HumanML will be a set of modules (hey,
>we had figured that out from day one), built with a lot of work.
>
>Since we are a formal TC now, it is time to toss ideas on the table,
>sort them out, and assign them to the working groups as they are formed.
>The first round of this procedure will probably take long enough for
>everyone to share their thoughts. Members that have not joined yet
>will have plenty of time to catch up.

Yippee YO KYE YAY!

>
>
>[1] http://www.oclc.org/oclc/fp/index.htm
>[2] http://www.census.gov/epcd/www/naics.html
>[3] http://anthro.palomar.edu/animal/default.htm
>[4] http://www.dublincore.org
>[5] http://www.w3.org/TR/2000/CR-rdf-schema-20000327/#s2.3.3
>
>
>Kindest regards,
>
>Manos


Ciao,
Rex



--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 4
   Date: Wed, 22 Aug 2001 11:21:23 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)

Are you starting clean sheet or leveraging existing work?

As the research done by Niclas and Andrew
showed, H-anim has a displacer node but it had
some technical difficulties.  Cindy Ballreich
says work on a 2001 spec will improve that functionality.

The critical path has to include some application
solutions, so the work with H-anim is appropriate.  I
suggest that Adobe and Macromedia systems also be used
because they have significant desktop penetration and
already work with XML systems.  Check with the
Contact consortium.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Manos Batsis [mailto:m.batsis@bsnet.gr]

As all of us understand by now, HumanML will not be a mythical
schema that will solve everything. HumanML will be a set of modules (hey,
we had figured that out from day one), built with a lot of work.

Since we are a formal TC now, it is time to toss ideas on the table,
sort them out, and assign them to the working groups as they are formed.
The first round of this procedure will probably take long enough for
everyone to share their thoughts. Members that have not joined yet
will have plenty of time to catch up.


________________________________________________________________________
________________________________________________________________________

Message: 5
   Date: Wed, 22 Aug 2001 10:20:59 -0700
   From: clayton cottingham <drfrog@smartt.com>
Subject: Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)

Len:

what about using HumanML
as an HLAL for processing to other than h-anim based vrml?

can i give you an idea of what im trying to do?

i want to be able to generate
an html design or web3d design for searching and browsing

think of something like  ebay
where you have
simple and adv search
and  category browsing

i dont want to dupe work
{heck im lazy ok!}
and id like people to be able to
choose between 2d or 3d

so you can see why i want to use a HLAL

if im able to design in another language and have it parsed to html or
vrml id be a happy camper

if you think humanml is not quite what id need could you suggest
a) another language
b) how id go about designing something like this

it doesnt have to be usable, at first ,
by anyone other than me

i think this could become very useful


________________________________________________________________________
________________________________________________________________________

Message: 6
   Date: Wed, 22 Aug 2001 13:01:25 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)

I'm not quite sure what you are asking, Nic.

HumanML should be useful for any application where a
human profile can be applied (real or artificial).  The fact that we
started with expressions and emotions is tangential
to that aspect.  You are simply doing what any
use case design does; creating a demographic
and processing based on the classes or property
set values.

The same kind of stuff shows up in say your
local school database for classifying students,
in your law enforcement database for modus operandi
and pattern prediction (for example, child
molesters are very predictable thank god,
and the rules for catching them don't involve
a lot more than a map of occurrences and some rules).
Criminology has these rules in spades which is
why detectives are good at what they do and why
international orgs such as Interpol track terrorists
with a lot of success.  It is also, I suspect, why
you see HumanML on .gov lists as something to be
kept up with.

A HumanML database depends on the properties
you want to capture.  We initially spec'd a set
of abstract categories for grouping these based
on very high level classifications of properties
that influence human communications.  Depending
on how good a job we did, other types can be
derived from these.   XML Schema is really easier
than it looks once you master the type trees.
Namespaces are another issue but not today. :-)
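
A minimal sketch of what such a type tree can look like in XML Schema; the
type and attribute names here are hypothetical, not the actual HumanML
categories.

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- Hypothetical abstract category; concrete types derive from it -->
  <xsd:complexType name="HumanPropertyType" abstract="true">
    <xsd:attribute name="intensity" type="xsd:decimal"/>
  </xsd:complexType>
  <!-- A derived type keeps the base attributes and adds its own -->
  <xsd:complexType name="EmotionType">
    <xsd:complexContent>
      <xsd:extension base="HumanPropertyType">
        <xsd:attribute name="name" type="xsd:string"/>
      </xsd:extension>
    </xsd:complexContent>
  </xsd:complexType>
</xsd:schema>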

As far as HLAL processing goes, it is simply
a matter of picking out properties that are
invariant or transformable (input to output
through conditional).  Reasoning-based systems
based on chaining (see prolog) are a little
more complicated but not much.  Any old school
AI book has the details.

Probably the hardest task is building the
XSLT templates.  The categorization based
on an XML Schema is a slam dunk once you
do one or two of them.   Start with a
very simple human profile, then decide
what you want the output to look like
and more importantly, to do.  XSLT is
easy too if you stick to XML output
so you can build templates that have
lots of defaults.   The bigger
problem for HumanML and visualization
is specifying the behaviors you want.
As Cindy said, H-anim specs a human
form and motion, but none of the behaviors
per se.  For that, your library can end
up being the stuff in the XSLT scripts
or you will want to get it from a library.
Think of how apps like V-Realm Builder and
the others had canned behaviors for motion.
Using HumanML descriptions, you can
specialize those because what the ML is
telling you at a higher level is the
particular values of this instance of
human object.
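
As a minimal sketch of that XSLT idea (the input vocabulary here--a <human>
element with <emotion> children--is made up for the example and is not the
HumanML schema):

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Wrap the hypothetical profile in a bare HTML page -->
  <xsl:template match="/human">
    <html><body><xsl:apply-templates select="emotion"/></body></html>
  </xsl:template>
  <!-- One default rendering per emotion; specialize as needed -->
  <xsl:template match="emotion">
    <p>Emotion: <xsl:value-of select="@name"/>
       (intensity <xsl:value-of select="@intensity"/>)</p>
  </xsl:template>
</xsl:stylesheet>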

Otherwise, a HLAL is simply a high level
description of something for which you
will introduce the particulars by transformation.
This is standard old style stuff folks did
with SGML for years and do now with XML
and servers.  No magic.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: clayton cottingham [mailto:drfrog@smartt.com]

Len:

what about using HumanML
as an HLAL for processing to other than h-anim based vrml?

can i give you an idea of what im trying to do?

i want to be able to generate
an html design or web3d design for searching and browsing

think of something like  ebay
where you have
simple and adv search
and  category browsing

i dont want to dupe work
{heck im lazy ok!}
and id like people to be able to
choose between 2d or 3d

so you can see why i want to use a HLAL

if im able to design in another language and have it parsed to html or
vrml id be a happy camper

if you think humanml is not quite what id need could you suggest
a) another language
b) how id go about designing something like this

it doesnt have to be usable, at first ,
by anyone other than me

i think this could become very useful


________________________________________________________________________
________________________________________________________________________

Message: 7
   Date: Wed, 22 Aug 2001 13:23:49 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)

My bad.  That was from Clayton.

Too much bleu cheese at lunch.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Bullard, Claude L (Len)
Sent: Wednesday, August 22, 2001 1:01 PM
To: humanmarkup@yahoogroups.com
Subject: RE: Classification (WAS: RE: [humanmarkup] Re: [h-anim] HumanML
Thoughts)


I'm not quite sure what you are asking, Nic.


________________________________________________________________________
________________________________________________________________________

Message: 8
   Date: Wed, 22 Aug 2001 20:50:10 +0200
   From: Niclas Olofsson <gurun@acc.umu.se>
Subject: Re: [h-anim] HumanML Thoughts

Hi Rex,

Rex Brooks wrote:
> In that meantime HumanML came along and promises to give us the engine
> to drive X3D and handle all the web-based object-swapping work at a
> level above the nuts and bolts we work with.

I was under the impression that HumanML does nothing of the sort. IMHO
HumanML has diverted way too much into annotations and definitions to ever
be able to deliver a runtime environment or even a simple event model.
Years ago Len started mumbling something about human profiles as a "means
of configuration for avatars" (correct me if I'm _completely_ wrong,
Len). HumanML comes close to this I think, or at least it could (still
has to be proven). For some real groundbreaking work in the VRML arena
look at simple BOMU Avatars. Patch BOMU with something that does to
Avatars and HumanML what BizTalk and SOAP do for internet business.

Earlier I asked Len to contact me when the time comes for us to do
some serious digging into this. I only did simple prototyping of this,
but the result had me convinced that it's worth exploring further.

HumanMarkup is listening in on this thread, I notice. I think a lot of
them are pretty fed up with us VRML'ers and our crazy ideas (<g> we can
hardly stand ourselves). Funny that both H-Anim and HumanML repeat the
same message: interesting, we have no idea what you are talking about,
please come back later when perhaps we have some spare time to
discuss it.

Another point I want to make: don't go down the translation path. It
offers nothing but static content. IMHO, if you manage to translate
(process) HumanML to H-Anim, one of the groups has failed, and I don't
think it's H-Anim.

Cheers,
/Niclas


________________________________________________________________________
________________________________________________________________________

Message: 9
   Date: Wed, 22 Aug 2001 20:53:24 +0200
   From: Niclas Olofsson <gurun@acc.umu.se>
Subject: Re: Classification (WAS: RE: Re: [h-anim] HumanML Thoughts)


tihi, neither do I. Didn't write it :-)

/Niclas

"Bullard, Claude L (Len)" wrote:
>
> I'm not quite sure what you are asking, Nic.
>
> HumanML should be useful for any application where a
> human profile can be applied (real or artificial).  The fact that we
> started with expressions and emotions is tangential
> to that aspect.  You are simply doing what any
> use case design does; creating a demographic
> and processing based on the classes or property
> set values.
>
> The same kind of stuff shows up in say your
> local school database for classifying students,
> in your law enforcement database for modus operandi
> and pattern prediction (for example, child
> molesters are very predictable thank god,
> and the rules for catching them don't involve
> a lot more than a map of occurrences and some rules).
> Criminology has these rules in spades which is
> why detectives are good at what they do and why
> international orgs such as Interpol track terrorists
> with a lot of success.  It is also, I suspect, why
> you see HumanML on .gov lists as something to be
> kept up with.
>
> A HumanML database depends on the properties
> you want to capture.  We initially spec'd a set
> of abstract categories for grouping these based
> on very high level classifications of properties
> that influence human communications.  Depending
> on how good a job we did, other types can be
> derived for these.   XML Schema is really easier
> than it looks once you master the type trees.
> Namespaces are another issue but not today. :-)
>
> As far as HLAL processing goes, it is simply
> a matter of picking out properties that are
> invariant or transformable (input to output
> through conditional).  Reasoning-based systems
> based on chaining (see prolog) are a little
> more complicated but not much.  Any old school
> AI book has the details.
>
> Probably the hardest task is building the
> XSLT templates.  The categorization based
> on an XML Schema is a slam dunk once you
> do one or two of them.   Start with a
> very simple human profile, then decide
> what you want the output to look like
> and more importantly, to do.  XSLT is
> easy too if you stick to XML output
> so you can build templates that have
> have lots of defaults.   The bigger
> problem for HumanML and visualization
> is specifying the behaviors you want.
> As Cindy said, H-anim specs a human
> form and motion, but none of the behaviors
> per se.  For that, your library can end
> up being the stuff in the XSLT scripts
> or you will want to get it from a library.
> Think of how apps like V-Realm Builder and
> the others had canned behaviors for motion.
> Using HumanML descriptions, you can
> specialize those because what the ML is
> telling you at a higher level is the
> particular values of this instance of
> human object.
>
> Otherwise, a HLAL is simply a high level
> description of something for which you
> will introduce the particulars by transformation.
> This is standard old style stuff folks did
> with SGML for years and do now with XML
> and servers.  No magic.
>
> Len
> http://www.mp3.com/LenBullard
>
> Ekam sat.h, Vipraah bahudhaa vadanti.
> Daamyata. Datta. Dayadhvam.h
>
> -----Original Message-----
> From: clayton cottingham [mailto:drfrog@smartt.com]
>
> Len:
>
> what about using HumanML
> as an HLAL for processing to other than h-anim based vrml?
>
> can i give you an idea of what im trying to do?
>
> i want to be able to generate
> an html design or web3d design for searching and browsing
>
> think of something like  ebay
> where you have
> simple and adv search
> and  category browsing
>
> i dont want to dupe work
> {heck im lazy ok!}
> and id like people to be able to
> choose between 2d or 3d
>
> so you can see why i want to use a HLAL
>
> if im able to design in another language and have it parsed to html or
> vrml id be a happy camper
>
> if you think humanml is not quite what id need could you suggest
> a) another language
> b) how id go about designing something like this
>
> it doesnt have to be usable, at first ,
> by anyone other than me
>
> i think this could become very useful
>
>
> To unsubscribe send an email to:
> humanmarkup-unsubscribe@yahoogroups.com
>
>
>
> Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/

--
Niclas Olofsson - http://www.ismobile.com
Product Development, isMobile, Aurorum 2, S-977 75 Luleå, Sweden
Phone: +46(0)920-75550
Mobile: +46(0)70-3726404


________________________________________________________________________
________________________________________________________________________

Message: 10
   Date: Wed, 22 Aug 2001 14:56:46 -0500
   From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
Subject: RE: Re: [h-anim] HumanML Thoughts

The last set of emails was not copied to all lists in Niclas's reply
because I am not subscribed to all of them.

Yes, for me HumanML was/is about getting a human
expression language for VR that is easier than
creating displacers by hand.  Others have different
interests and since I am an old markup hand, I know
that means a fairly abstract design that can be
reused or reapplied:  hence, XML and all the nonsense
of using metalanguage definitions.

We considered HumanML engines and talked about the
routes as a means to model emotions (an old idea
from the scene graph community).  We discussed
profiles, transforms, etc.   All doable.  You
can use transformation and it will animate,
but you still need the API to do real-time interaction.
You could also use HumanML for creating messages
(the SOAP or RPC model).
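
A minimal sketch of that message idea; the SOAP 1.1 envelope namespace is
the real one, but the hm: payload vocabulary is entirely hypothetical.

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- Hypothetical HumanML payload carried as a SOAP message -->
    <hm:expressionUpdate xmlns:hm="http://example.org/humanml">
      <hm:emotion name="surprise" intensity="0.6"/>
    </hm:expressionUpdate>
  </soap:Body>
</soap:Envelope>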

Since a lot of the primary active members
of the HumanMarkup group are VRMLers, it still seems
a reasonable discussion.  All some are doing now is
opening the discussions up to other groups and
asking if they are interested in using HumanML,
and if so, to join OASIS.  By the time this is
done, it may be a very different beast if it
doesn't die in the official contacts miasma.

I do miss the days when a good idea was the
main requirement for cooperation.  Now we
need a Consortium and press and membership
fees and official processes and .... boredom.

So much for the fun.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Niclas Olofsson [mailto:gurun@acc.umu.se]

I was under the impression that HumanML does nothing of the sorts. IMHO
HumanML has diverted way to much into annotations, definitions, to ever
be able to deliver a runtime environment or even a simple event model.
Years ago Len started mumble something about human profiles as a "means
of configuration for avatars" (correct me if I'm _completely_ wrong
len). HumanML comes close to this I think, or at least it could (still
have to be proven). For some real groundbreaking work in the VRML arena
look at simple BOMU Avatars. Patch BOMU with something that does to
Avatars and HumanML what BizTalk and SOAP does to internet business.

Eearlier I asked Len to contact me when the time comes for use to do
some serious digging into this. I only did simple prototyping of this,
but the result had me convinced that it's worth exploring further.

Human markup is listening in on this tread I notice. I think a lot of
them are pretty fed up with us VRML'ers and our crazy ideas (<g> we can
hardly stand ourselves). Funny that both H-Anim and HumanML repeat the
same message: Interesting, have no idea of what you are talking about,
please come back later when perhaps we have some spare time over to
discuss it.

Another point I want to make. Don't go down the translation path. It
offers nothing but static content. IMHO, if you manage to translate
(process) HumanML to H-Anim one of the groups have failed, and I don't
think it's H-Anim.


________________________________________________________________________
________________________________________________________________________

Message: 11
   Date: Wed, 22 Aug 2001 14:48:32 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: RE: Re: [h-anim] HumanML Thoughts

This is more for Niclas than others, but replying is easier with the
overload I'm getting today.

Nic, the time is shortly to arrive for just that exploration. There
is a thread with Norm Badler at UPenn's Human Simulation Group, with
Matt Beitler et al., that I will eventually get into some kind of
presentable form for both HumanML and H-Anim--using Laban Movement
Analysis and Badler's EMOTE engine, which fills the bill as far as I
can see right now.

Ciao,
Rex

At 2:56 PM -0500 8/22/01, Bullard, Claude L (Len) wrote:
>The last set of emails not copied to all lists in Niclas's reply
>because i am not subscribed to all.
>
>Yes, for me HumanML was/is about getting a human
>expression language for VR that is easier than
>creating displacers by hand.  Others have different
>interests and since I am an old markup hand, I know
>that means a fairly abstract design that can be
>reused or reapplied:  hence, XML and all the nonsense
>of using metalanguage definitions.
>
>We considered HumanML engines and talked about the
>routes as a means to model emotions (an old idea
>from the scene graph community).  We discussed
>profiles, transforms, etc.   All doable.  You
>can use transformation and it will animate,
>but you still the API to do real time interaction.
>You could also use the HumanML for creating messages
>(the SOAP or RPC model).
>
>Since a lot of the primary active members
>of the HumanMarkup group are VRMLers, it still seems
>a reasonable dicussion.  All some are doing now is
>opening the discussions up to other groups and
>asking if they are interesting in using HumanML,
>and if so, to join OASIS.  By the time this is
>done, it may be a very different beast if it
>doesn't die in the official contacts miasma.
>
>I do miss the days when a good idea was the
>main requirement for cooperation.  Now we
>need a Consortium and press and membership
>fees and official processes and .... boredom.
>
>So much for the fun.
>
>Len
>http://www.mp3.com/LenBullard
>
>Ekam sat.h, Vipraah bahudhaa vadanti.
>Daamyata. Datta. Dayadhvam.h
>
>
>-----Original Message-----
>From: Niclas Olofsson [mailto:gurun@acc.umu.se]
>
>I was under the impression that HumanML does nothing of the sorts. IMHO
>HumanML has diverted way to much into annotations, definitions, to ever
>be able to deliver a runtime environment or even a simple event model.
>Years ago Len started mumble something about human profiles as a "means
>of configuration for avatars" (correct me if I'm _completely_ wrong
>len). HumanML comes close to this I think, or at least it could (still
>have to be proven). For some real groundbreaking work in the VRML arena
>look at simple BOMU Avatars. Patch BOMU with something that does to
>Avatars and HumanML what BizTalk and SOAP does to internet business.
>
>Eearlier I asked Len to contact me when the time comes for use to do
>some serious digging into this. I only did simple prototyping of this,
>but the result had me convinced that it's worth exploring further.
>
>Human markup is listening in on this tread I notice. I think a lot of
>them are pretty fed up with us VRML'ers and our crazy ideas (<g> we can
>hardly stand ourselves). Funny that both H-Anim and HumanML repeat the
>same message: Interesting, have no idea of what you are talking about,
>please come back later when perhaps we have some spare time over to
>discuss it.
>
>Another point I want to make. Don't go down the translation path. It
>offers nothing but static content. IMHO, if you manage to translate
>(process) HumanML to H-Anim one of the groups have failed, and I don't
>think it's H-Anim.
>
>
>To unsubscribe send an email to:
>humanmarkup-unsubscribe@yahoogroups.com
>
>
>
>Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 12
   Date: Wed, 22 Aug 2001 15:09:53 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: Re: [h-anim] HumanML Thoughts

Hi Nic,

This isn't the MAIN interest of HumanML, just something it will do,
and handily, for those who want to use it for that, and probably more
simply than all this email I've been chasing down all day. My
previous post aimed at you came in response to an email later than
this one. I can't hope to catch up, but I will endeavor to make one
thing clear.

This focus on avatars and X3D is important to me personally, but not
to HumanML overall. It is just one of the things it can do, and far
from the most significant, at least as things stand now. And it isn't
hard to do right now, but as with UMEL, getting some collaborative
work done on some PROTOs for behaviors that we can all use makes
sense to me. Neither HumanML nor H-Anim needs the other to do this;
it would just be easier to have libraries.

Forgive me, I really didn't think this much hullabaloo would pop up
all at once. I'm supposed to be working on HumanML stuff today, and,
being on the left coast out here, I will after everyone else goes to
bed.

Later,
Rex

At 8:50 PM +0200 8/22/01, Niclas Olofsson wrote:
>Hi Rex,
>
>Rex Brooks wrote:
>>  In that meantime HumanML came along and promises to give us the
>>engine to drive X3D
>>  and handle all the web-based object-swapping work at a level above
>>the nuts and
>>  bolts we work with.
>
>I was under the impression that HumanML does nothing of the sorts. IMHO
>HumanML has diverted way to much into annotations, definitions, to ever
>be able to deliver a runtime environment or even a simple event model.
>Years ago Len started mumble something about human profiles as a "means
>of configuration for avatars" (correct me if I'm _completely_ wrong
>len). HumanML comes close to this I think, or at least it could (still
>have to be proven). For some real groundbreaking work in the VRML arena
>look at simple BOMU Avatars. Patch BOMU with something that does to
>Avatars and HumanML what BizTalk and SOAP does to internet business.
>
>Eearlier I asked Len to contact me when the time comes for use to do
>some serious digging into this. I only did simple prototyping of this,
>but the result had me convinced that it's worth exploring further.
>
>Human markup is listening in on this tread I notice. I think a lot of
>them are pretty fed up with us VRML'ers and our crazy ideas (<g> we can
>hardly stand ourselves). Funny that both H-Anim and HumanML repeat the
>same message: Interesting, have no idea of what you are talking about,
>please come back later when perhaps we have some spare time over to
>discuss it.
>
>Another point I want to make. Don't go down the translation path. It
>offers nothing but static content. IMHO, if you manage to translate
>(process) HumanML to H-Anim one of the groups have failed, and I don't
>think it's H-Anim.
>
>Cheers,
>/Niclas
>
>
>To unsubscribe send an email to:
>humanmarkup-unsubscribe@yahoogroups.com
>
>
>
>Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 13
   Date: Thu, 23 Aug 2001 01:58:02 +0200
   From: Niclas Olofsson <gurun@acc.umu.se>
Subject: Re: Re: [h-anim] HumanML Thoughts

<?xml version="1.0" ?>

Rex Brooks wrote:
> Nic, the time is shortly to arrive for just that exploration. There
> is a thread with NormBadler at UPenn's Human Simulation Group with
> Matt Beitler et al,  that I will eventually get into some kind of
> presentable form for both HumanML and H-Anim--using Laban Movement
> Analysis and Badler's EMOTE engine that fills the bill as far as I
> can see right now.

Very interesting, but nope, that alone will not do it. A very good
starting point though. Looking at EMOTE, it appears to me as yet another
level of abstraction that perhaps would make things easier. It can
perhaps provide a level of abstraction above FAPs and provide H-Anim
(or whatever human animation format) with a somewhat more dynamic
presentation. At the same time it provides authors and computers with a
fuzzier means of communication.

But (a big BUT), in regular software design terms, most of this stuff
belongs in the outermost presentation layer. EMOTE gets close to filling
in as the presentation logic (backed up by h-anim representation). I'm
looking for the layer beneath it, the business logic of human
communication. Does it make any sense? Probably not. But I do collective
design. The system we are building right now took me since January to
design, but we built the core in only 3 weeks. I think this will work
pretty much the same, only it will take a couple of years instead. If
this were ready for prime time I'd be the first to start a task force
around it. But it isn't. It will take years. And I'll be there then.
Waiting. After all, this is what MY life is all about. I'm 30 today. I
have time :-)

Cheers,
/Niclas


________________________________________________________________________
________________________________________________________________________

Message: 14
   Date: Wed, 22 Aug 2001 20:01:53 -0400
   From: "Ranjeeth Kumar Thunga" <rkthunga@humanmarkup.org>
Subject: Lists


It's good that we've begun serious discussion again (which I'll get a chance
to address in my next post), but for now, let's make sure we are all on the
same page regarding the 3 different mailing lists that now exist.

(**BEFORE submitting, you must join the lists.  Instructions are available
at http://www.oasis-open.org/committees/humanmarkup)

---------------------------------------

1)
humanmarkup-comment@lists.oasis-open.org -- This is the General Comment
Discussion.  It is open to _anyone_ who has suggestions, questions,
comments, issues, etc. regarding the project.  All parties are invited to
freely join this discussion group, and everyone can freely contribute to the
discussion on this group.

This is our _primary_ discussion list, until our final TC is formed and
its agenda established.

---------------------------------------

2)
humanmarkup@lists.oasis-open.org -- This is the Technical Committee
Discussion Group, intended to discuss the technical deliverables of
HumanMarkup.  I will check with Karl to see if all interested parties can
join the list, but for now, there is no need to post here (we aren't at that
stage yet).  We may "cross post" to this list, but we should stick with the
humanmarkup-comment@lists.oasis-open.org list UNTIL we get our active
membership list established.

----------------------------------

3)
humanmarkup@yahoogroups.com -- This is the original Phase 0 Discussion List.
It is still active, but it is being deprecated.  At this point, we may
"cross post" to this list for the next couple of weeks, but should no longer
post to this Discussion List, unless there is some specific issue one feels
should not be addressed on OASIS for whatever reason.

The archives at http://groups.yahoo.com/group/humanmarkup will remain
available for posterity.

-----------------------------------
Please contact me with any questions, comments, concerns.  You can reach me
by phone or email anytime.
Take care,

Ranjeeth Kumar Thunga
HumanMarkup Chair
rkthunga@humanmarkup.org
(646) 456-9076


[This message contained attachments]



________________________________________________________________________
________________________________________________________________________

Message: 15
   Date: Wed, 22 Aug 2001 17:36:33 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: Re: Re: [h-anim] HumanML Thoughts

Lucky you.  I'm not quite so lucky, but I expect to see it in about
the timeline you suggest. When I said shortly, I meant that beginning
work on it was in the next year from now--about the time I expect to
see fairly mature displacers, weighting algorithms, and NURBS
calculations for crease angles in H-Anim, and the beginning of
radiosity lighting engines in X3D. About the same time frame for
64-bit Itanium to become desktop reality--maybe a little more. That
makes it 18-24 months or so.

I sure hope it is still <?xml version="1.0"> and not 1.1 or 1.2. But
then, I didn't think Blueberry was worth all the fuss, so that shows
what I know.

Ciao,
Rex

At 1:58 AM +0200 8/23/01, Niclas Olofsson wrote:
><?xml version="1.0" ?>
>
>Rex Brooks wrote:
>>  Nic, the time is shortly to arrive for just that exploration. There
>>  is a thread with NormBadler at UPenn's Human Simulation Group with
>>  Matt Beitler et al,  that I will eventually get into some kind of
>>  presentable form for both HumanML and H-Anim--using Laban Movement
>>  Analysis and Badler's EMOTE engine that fills the bill as far as I
>>  can see right now.
>
>Very interesting, but nope, that alone will not do it. A very good
>starting point though. Looking at EMOTE it appears to me as yet another
>level of abstraction that perhaps would make things easier. It can
>perhaps provide a level of abstraction above FAP's and provide H-Anim
>(or whatever human animation format) with a somewhat more dynamic
>presentation. In the same time it provides authors and computers with a
>more fuzzy means of communications.
>
>But (a big BUT), in regular software design terms, most of this stuff
>belongs in the outermost presentation layer. EMOTE gets close to filling
>in as the presentation logic (backed up by h-anim representation). I'm
>looking for the layer beneath it, the business logic of human
>communication. Does it make any sense? Probably not. But I do collective
>design. The system we are building right now took me since january to
>design, but we build the core in only 3 weeks. I think this will work
>pretty much the same, only it will take a couple of years instead. If
>this where ready for prime time I'd be the first to start a task force
>around it. But it isn't. It will take years. And I'll be there then.
>Waiting. After all, this is what MY life is all about. I'm 30 today. I
>have time :-)
>
>Cheers,
>/Niclas
>
>
>To unsubscribe send an email to:
>humanmarkup-unsubscribe@yahoogroups.com
>
>
>
>Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 16
   Date: Wed, 22 Aug 2001 21:27:44 -0400
   From: "Ranjeeth Kumar Thunga" <rkthunga@interposting.com>
Subject: Re: Re: [h-anim] HumanML Thoughts


I think that Carol 'probably' doesn't belong on this email thread ;)

Ranjeeth Kumar Thunga

----- Original Message -----
From: "Rex Brooks" <rexb@starbourne.com>
To: <humanmarkup@yahoogroups.com>
Cc: "James Smith" <james@vapourtech.com>; <carol.geyer@oasis-open.org>
Sent: Wednesday, August 22, 2001 8:36 PM
Subject: Re: [humanmarkup] Re: [h-anim] HumanML Thoughts


>
> Lucky You.  I'm not quite so lucky, but I expect to see it in about
> the timeline you suggest. When I said shortly, I meant that beginning
> work on it was in the next year from now--about the time I expect to
> see fairly mature displacers, weighting algorithms, and nurbs
> calculations for crease angles in H-Anim, and the beginning of
> radiosity lighting engines in X3D. About the same time frame for
> 64-bit Itanium to become desktop reality--maybe a little more. that
> makes it 18-24 months or so.
>
> I sure hope it is still <?xml version="1.0"> and not 1.1 or 1.2. But
> then, I didn't think Blueberry was worth all the fuss, so that shows
> what I know.
>
> Ciao,
> Rex
>
> At 1:58 AM +0200 8/23/01, Niclas Olofsson wrote:
> ><?xml version="1.0" ?>
> >
> >Rex Brooks wrote:
> >>  Nic, the time is shortly to arrive for just that exploration. There
> >>  is a thread with NormBadler at UPenn's Human Simulation Group with
> >>  Matt Beitler et al,  that I will eventually get into some kind of
> >>  presentable form for both HumanML and H-Anim--using Laban Movement
> >>  Analysis and Badler's EMOTE engine that fills the bill as far as I
> >>  can see right now.
> >
> >Very interesting, but nope, that alone will not do it. A very good
> >starting point though. Looking at EMOTE it appears to me as yet another
> >level of abstraction that perhaps would make things easier. It can
> >perhaps provide a level of abstraction above FAP's and provide H-Anim
> >(or whatever human animation format) with a somewhat more dynamic
> >presentation. In the same time it provides authors and computers with a
> >more fuzzy means of communications.
> >
> >But (a big BUT), in regular software design terms, most of this stuff
> >belongs in the outermost presentation layer. EMOTE gets close to filling
> >in as the presentation logic (backed up by h-anim representation). I'm
> >looking for the layer beneath it, the business logic of human
> >communication. Does it make any sense? Probably not. But I do collective
> >design. The system we are building right now took me since january to
> >design, but we build the core in only 3 weeks. I think this will work
> >pretty much the same, only it will take a couple of years instead. If
> >this where ready for prime time I'd be the first to start a task force
> >around it. But it isn't. It will take years. And I'll be there then.
> >Waiting. After all, this is what MY life is all about. I'm 30 today. I
> >have time :-)
> >
> >Cheers,
> >/Niclas
> >
> >
> >To unsubscribe send an email to:
> >humanmarkup-unsubscribe@yahoogroups.com
> >
> >
> >
> >Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/
>
>
> --
> Rex Brooks
> GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
> W3Address: http://www.starbourne.com
> Email: rexb@starbourne.com
> Tel: 510-849-2309
> Fax: By Request
>
>
> To unsubscribe send an email to:
> humanmarkup-unsubscribe@yahoogroups.com
>
>
>
> Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/
>
>



________________________________________________________________________
________________________________________________________________________

Message: 17
   Date: Wed, 22 Aug 2001 19:21:13 -0700
   From: Rex Brooks <rexb@starbourne.com>
Subject: Re: Re: [h-anim] HumanML Thoughts

You're right. I will see to it that this doesn't recur, at least as far as I can.

<chagrin>Oops</chagrin>
Rex

>I think that Carol 'probably' doesn't belong on this email thread ;)
>
>Ranjeeth Kumar thunga
>
>----- Original Message -----
>From: "Rex Brooks" <rexb@starbourne.com>
>To: <humanmarkup@yahoogroups.com>
>Cc: "James Smith" <james@vapourtech.com>; <carol.geyer@oasis-open.org>
>Sent: Wednesday, August 22, 2001 8:36 PM
>Subject: Re: [humanmarkup] Re: [h-anim] HumanML Thoughts
>
>
>>
>>  Lucky You.  I'm not quite so lucky, but I expect to see it in about
>>  the timeline you suggest. When I said shortly, I meant that beginning
>>  work on it was in the next year from now--about the time I expect to
>>  see fairly mature displacers, weighting algorithms, and nurbs
>>  calculations for crease angles in H-Anim, and the beginning of
>>  radiosity lighting engines in X3D. About the same time frame for
>>  64-bit Itanium to become desktop reality--maybe a little more. that
>>  makes it 18-24 months or so.
>>
>>  I sure hope it is still <?xml version="1.0"> and not 1.1 or 1.2. But
>>  then, I didn't think Blueberry was worth all the fuss, so that shows
>>  what I know.
>>
>>  Ciao,
>>  Rex
>>
>>  At 1:58 AM +0200 8/23/01, Niclas Olofsson wrote:
>>  ><?xml version="1.0" ?>
>>  >
>>  >Rex Brooks wrote:
>>  >>  Nic, the time is shortly to arrive for just that exploration. There
>>  >>  is a thread with NormBadler at UPenn's Human Simulation Group with
>>  >>  Matt Beitler et al,  that I will eventually get into some kind of
>>  >>  presentable form for both HumanML and H-Anim--using Laban Movement
>>  >>  Analysis and Badler's EMOTE engine that fills the bill as far as I
>>  >>  can see right now.
>>  >
>>  >Very interesting, but nope, that alone will not do it. A very good
>>  >starting point though. Looking at EMOTE it appears to me as yet another
>>  >level of abstraction that perhaps would make things easier. It can
>>  >perhaps provide a level of abstraction above FAP's and provide H-Anim
>>  >(or whatever human animation format) with a somewhat more dynamic
>>  >presentation. In the same time it provides authors and computers with a
>>  >more fuzzy means of communications.
>>  >
>>  >But (a big BUT), in regular software design terms, most of this stuff
>>  >belongs in the outermost presentation layer. EMOTE gets close to filling
>>  >in as the presentation logic (backed up by h-anim representation). I'm
>>  >looking for the layer beneath it, the business logic of human
>>  >communication. Does it make any sense? Probably not. But I do collective
>>  >design. The system we are building right now took me since january to
>>  >design, but we build the core in only 3 weeks. I think this will work
>>  >pretty much the same, only it will take a couple of years instead. If
>>  >this where ready for prime time I'd be the first to start a task force
>>  >around it. But it isn't. It will take years. And I'll be there then.
>>  >Waiting. After all, this is what MY life is all about. I'm 30 today. I
>>  >have time :-)
>>  >
>>  >Cheers,
>>  >/Niclas
>>  >
>>  >
>>  >To unsubscribe send an email to:
>>  >humanmarkup-unsubscribe@yahoogroups.com
>>  >
>>  >
>>  >
>>  >Your use of Yahoo! Groups is subject to
http://docs.yahoo.com/info/terms/
>>
>>
>>  --
>>  Rex Brooks
>>  GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
>>  W3Address: http://www.starbourne.com
>>  Email: rexb@starbourne.com
>>  Tel: 510-849-2309
>>  Fax: By Request
>>
>>
>>  To unsubscribe send an email to:
>>  humanmarkup-unsubscribe@yahoogroups.com
>>
>>
>>
>>  Your use of Yahoo! Groups is subject to
http://docs.yahoo.com/info/terms/
>>
>>
>
>
>
>To unsubscribe send an email to:
>humanmarkup-unsubscribe@yahoogroups.com
>
>
>
>Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/


--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request


________________________________________________________________________
________________________________________________________________________

Message: 18
   Date: Thu, 23 Aug 2001 00:28:31 -0700
   From: clayton <drfrog@smartt.com>
Subject: Slashdot: Human Markup Language


Slashdot article on the efforts!



http://slashdot.org/article.pl?sid=01/08/22/1936200&mode=thread



________________________________________________________________________
________________________________________________________________________








