humanmarkup-comment message


Subject: RE: HM.interactions: initial questions



Hi Niclas, Mark,

> -----Original Message-----
> From: Niclas Olofsson [mailto:gurun@acc.umu.se]


> Mark Brownell wrote:

> > So the bottom line for this idea is to provide the information to a
> > commonly used rendering machine

I am not sure I understand what you mean by "rendering machine". If I
am getting this right, though:

The Semantic Web, and HumanML in particular, can help manage data and
form useful information out of it, and I am sure that we will be
introduced to more complex, rich GUIs, possibly very dependent on the
markup models themselves.

Examples may include queries in natural language (both written and
spoken), intelligent navigation, complex agent interfaces, and so on.

But one simply cannot predict those.


> > Has anyone provided a reasonable selection process that determines
> > what will end up being rendered on the user's machine?

IMHO, that should be handled by the application and has nothing to do
with HumanML.

 
> What do you suggest constitutes this selection process? We have had
> discussions about approaches where HumanML could provide input to
> lower-level technologies and middleware (EMOTE, HAnim, etc.). Myself,
> I have a very implementational approach to HumanML. HumanML, no matter
> how you describe it, must present a use case where the primary focus
> is to render a noticeable result back to the actors. Whether this will
> be a simple "DOH" or a complex "HMM", I really couldn't say.

HumanML will simply be a data/knowledge stream. I used to be trapped in
something like:

Human-A is a class of person; he currently holds a property
"GeneralMoodState" with a literal (Sean will start yelling now) value of
"90%", and I will probably do something like

<pseudo type="ECMA">
Human-A.faceAppearance.showEmotion(respondTo(ToBoolean(GeneralMoodState)));
</pseudo>

and display a smiley or something. But we could do a lot more and try
to derive more logical functionality for practical apps. Mirroring
human nature in applications is one thing (and that doesn't attract me
much); building upon human aspects as a form of data/knowledge is
another.
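
Just to make the trap concrete, here is roughly what I mean as runnable
ECMA script. Everything in it is made up for illustration (HumanA,
faceAppearance, showEmotion, the 0.5 threshold); none of these names
come from HumanML itself:

<pseudo type="ECMA">
// Illustrative only: a toy "rendering" of a HumanML mood property.
var HumanA = {
  generalMoodState: 0.9,   // the literal "90%" parsed as a number
  faceAppearance: {
    showEmotion: function (happy) {
      // Collapse the mood into a binary display decision.
      return happy ? ":-)" : ":-(";
    }
  }
};

// Coercing a percentage into a boolean is exactly the kind of
// information loss I am complaining about above.
var smiley = HumanA.faceAppearance.showEmotion(HumanA.generalMoodState > 0.5);
// smiley is now ":-)"
</pseudo>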

So in my future house, I will have an intelligent fridge connected to my
house server. The server knows I have a dog as:

<daml:Class rdf:ID="pluto">
  <rdfs:subClassOf rdf:resource="#pet"/>
  <rdfs:subClassOf rdf:resource="#dog"/>
  <!-- the user was stupid enough to provide the system with the dog's
       preferences -->
  <humlDesc:likesToEat rdf:resource="/supermarket/goods/forPets#catFood"/>
  <!-- but the system knows what Pluto needs because it is a subclass of
       #dog -->
  <humlDesc:likesToEat rdf:resource="/supermarket/goods/forPets#dogFood"/>
  <myRules:budgetPriority rdf:resource="/myRules/budget#highest"/>
</daml:Class>

So the system should just order the best kind of dog food available
(probably based on the latest medical data about Pluto!) without taking
cost into account. If you love dogs, this may be practical!
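
As a sketch of the kind of decision the house server might make (the
facts mirror the RDF above, but the matching logic is my own invention,
not DAML semantics):

<pseudo type="ECMA">
// Hypothetical: how the server might pick an order for Pluto.
var facts = {
  id: "pluto",
  subClassOf: ["#pet", "#dog"],
  likesToEat: ["/supermarket/goods/forPets#catFood",
               "/supermarket/goods/forPets#dogFood"],
  budgetPriority: "/myRules/budget#highest"
};

function pickFood(f) {
  // Prefer food consistent with what the system infers from #dog
  // over the user-supplied (and wrong) catFood preference.
  for (var i = 0; i < f.likesToEat.length; i++) {
    if (f.likesToEat[i].indexOf("dogFood") >= 0) {
      return f.likesToEat[i];
    }
  }
  return f.likesToEat[0]; // fall back to the stated preference
}

// budgetPriority "highest" means: ignore cost when ordering.
var order = {
  item: pickFood(facts),
  considerCost: facts.budgetPriority !== "/myRules/budget#highest"
};
// order.item is ".../forPets#dogFood", order.considerCost is false
</pseudo>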


My point is, let's try to get away from rendering and towards a
knowledge-centric view (with the human as the context, of course).

[anonymous resource ;-)] 
http://www.daml.org/2001/03/daml+oil-walkthru.html

Kindest regards,

Manos

