Rex,
Oops. I probably should have prefaced
this. You're right on all of your suppositions. I'm trying to get
HumanML into the discussion at the KT2002 conference, and this seemed a good
point to integrate both that and some of the Dynamic GUI work I've been thinking
about. This thread below was basically in reference to a paper submission, and I
thought it would be worth CCing the rest of you on it because of the HumanML
references. I apologize for any confusion.
The math is an attempt to create a formalized
description of an agent architecture, though I'm still trying to figure out the
best notation to use with it. I need to review the current notation used in
cellular automata for much of this, since my formal math training is in
continuous rather than discrete mathematics. The idea at the heart of it is
that the bots may actually all have different schemata, and this adds some
complexity that I think would be better served in a declarative rather than
a procedural environment ... I'm trying to create a distributed,
XML-oriented environment, in which procedural code acts primarily as a
facilitator for the creation of XSLT transformations as defined actions.
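As a sketch of that facilitator role, assuming (purely for illustration) that a "defined action" is a declarative rule compiled into XSLT text; the rule shape and the function name are invented here:

```python
def rule_to_xslt(match, attr, value):
    """Compile one declarative action rule into an XSLT template string
    that overrides the identity transform for the matched element."""
    return (
        f'<xsl:template match="{match}" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        '<xsl:copy><xsl:copy-of select="@*"/>'
        f'<xsl:attribute name="{attr}">{value}</xsl:attribute>'
        '<xsl:apply-templates/></xsl:copy></xsl:template>'
    )

print(rule_to_xslt("actor[@mood='calm']", "mood", "alert"))
```

The procedural layer here does nothing but emit declarative XSLT; the actual transformation work stays in the XSLT processor.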
-- Kurt
----- Original Message -----
Sent: Thursday, October 25, 2001 3:53 PM
Subject: [humanmarkup-comment] Re: Adaptive Metaphoric Interface Design
Hi Kurt, et al,
It is difficult dropping into a thread in the middle somewhere. I get the
idea that this Adaptive Metaphoric Interface Design discussion began on
another list and got copied here. This is what I glean, being on several
lists that catch this either tangentially or by an entirely different
route, i.e., Topic Maps, Einstein Institute. I've had a couple of
conversations with Kurt on the fringe of ideas contained in this
discussion-dialogue. And I seem to recall that he's speaking, or is
scheduled to, at KT2002 in Seattle in March, and I think this may be in
reference to that, or it may be in the context of a different speaking
engagement altogether, since it refers to HumanMarkup, and I was unaware of
HumanMarkup being part of the Knowledge Technologies/Management effort,
although there is no reason it wouldn't be.
Of course, as someone whose primary interest is in X3D, H-Anim real-time
(or nearly, as makes damn little difference) MPEG-4-adapted interactive
animated uses for HumanML, I tend to look at Knowledge Management as it is
applied to Topic Maps (which I see as the main access route to and from
HumanML repositories for various kinds of personal and agent/bot data) as a
tool rather than an end-product. I did have an idea several years ago that
a Java3D-based interface design for a Linux OS would be one way-cool way to
organize the interface metaphor. However, nobody else seemed to think it
was coherent enough to consider.
So, unless I missed the start of this thread, and Eudora, when conducting
a find on the Subject, says I didn't, could you guys backtrack a bit and
catch this list up to speed so I can get a better picture of what this is
actually referring to? I don't mind the math notation, though I suspect
this list isn't especially interested. I just happen to be reviewing chaos
and game theory at the moment for AI purposes, or rather, for using
elements of AI in humanoid agent/bot programs capable of being used in
building scenarios that can run on their own or interact with humans in
nearly real time.
Ciao,
Rex
At 3:05 PM -0700 10/25/01, Kurt Cagle wrote:
> I think it would be useful to have a presentation on HumanML. At the same
> time it should not be a "sales pitch" type thing. I would like to see the
> ideas of HumanML presented, but I don't think it would be best if the
> entire presentation was a promotion for HumanML. Maybe up to half of the
> presentation being a description of what HumanML is and is about/for. The
> other half (if not more) would be about the Adaptive Metaphoric Interface
> (perhaps a high-level description of the components or sections), and how
> HumanML may be used to modulate/control/influence the interface. I'm not
> sure talking about a HumanML avatar, such as MS's Clippy (annoying paper
> clip), would be a good thing in that panel.
I'm inclined to
agree with you. Moreover, if I can get out of having to write more than one
paper on this whole thing I'd be eminently happy.
Ah, good old
Clippy. How such a brilliant idea could have been done so badly I'm still
trying to figure out. Of course, I've also thought that the biggest problem
with the focus that MS has on agents is that they tend to see agents as
looking out from the computer screen -- i.e., they represent the computer to
the user. Personally, I think this is backwards; the avatar should be the
computer representation of the person as presented to the computer, or to
the people at the other end of the pipe. Avatars to me should represent the
persona that the user imposes upon his or her view of the computer
universe.
> I would be especially interested myself in seeing how things like HumanML
> can be used to capture/represent/repository things like intent/intention,
> context/situation.
I really think that
this is one of the more exciting aspects of HumanML itself. HumanML seems to
me a realistic step in recognizing that users are not passive entities but
have motivations that move beyond much of the old-school concepts of
computer interface design. One part of my own background is in game design
and programming, and the assumptions that seem so obvious to many game
designers very seldom make their way into the vast majority of programs that
most people use every day.
> Aside: in my previous postings at KMCI I have posted code for SVG
> animation of a de Bono diagram depicting a (SVG-reified) spatial metaphor
> representing group activity towards a goal and how a disturbance/disturber
> can deflect/prevent goal fruition. In the XML metadata book I wrote, I
> used a topic map to provide semantics for a circuit diagram. I will
> eventually find the time to post in KMCI a topic map example which
> provides semantics for the de Bono spatial metaphor I just mentioned.
> What is interesting to me now, I just looked at some of the DAML code for
> HumanML, is that I can now include HumanML-based schema as topic map
> subjectIdentifiers to anchor the meaning of such things as "negative
> behaviour" or "negative affect" (intention) and have that part of the
> topic map "semantically explaining" the de Bono diagram. For example, I
> can use HumanML-DAML to express that the intention behind the
> disturbance/disturber action in the diagram is "bummer", "bad guy", and
> so on.

I'm working my way through the metadata book now (in between
writing books and articles and trying to take care of an eight-year-old and
a toddler, so I'm taking the whole thing a few pages at a time. My research
time of late seems to be between 7:30 and 7:45 while sitting next to the
bathtub while my 20-month-old daughter sees how much water she can splash
on daddy's book.)
One aspect of both
the presentation and my interest in HumanML is very much tied into exactly
what you described. I envision RDF triples that essentially treat behaviors
as actions, following from specific intent, that map state models to other
state models within interfaces. I don't really define "good guy" and "bad
guy", per se, but instead see actors as bundles of motivations, goals, and
actions that can be exemplified within an XML context, though I'm still
wrestling with the specific implementations (and the math behind it). As
near as I can tell, any given simulation can be represented as an
environment E consisting of state vectors e_1, e_2, ..., e_n, where each
vector is represented by an XML object (the reason I'm using the
appellation "vector" here is that any XML structure can be unrolled into a
k-element linear vector, and it is useful to use this nomenclature for
discussion). The actors are a specialized set of environmental state
vectors as well (perhaps exemplified via HumanML), which I'll designate as
A = {a_1, a_2, ..., a_m}. Thus the full state of the system could be
represented at any point as S = E ∪ A. S is also a cellular automaton,
since interactions occur as a result of transformations S_{i+1} = T_i S_i,
where T_i is the set of transformations {t_{1,i}, t_{2,i}, ..., t_{m,i},
t_{m+1,i}, ..., t_{m+n,i}}, the details of which I'll get into momentarily.
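A rough Python sketch of this update, writing the schema-type operator as sigma() and using flattened (path, text) pairs as stand-ins for real XML objects; all names and the two toy schemas are illustrative only:

```python
import xml.etree.ElementTree as ET

def flatten(elem, path=""):
    """Unroll an XML tree into a flat list of (path, text) pairs:
    the 'k-element linear vector' reading of an XML object."""
    path = f"{path}/{elem.tag}"
    items = [(path, (elem.text or "").strip())]
    for child in elem:
        items.extend(flatten(child, path))
    return items

def sigma(vector):
    """Stand-in for the schema-type operator: here, just the root path."""
    return vector[0][0]

def step(state, transforms):
    """One update S_{i+1} = T_i S_i: each vector is rewritten by the
    transformation chosen for its schema type, with the whole state S
    passed in as context (the cellular-automaton reading)."""
    return [transforms[sigma(v)](v, state) for v in state]

env = flatten(ET.fromstring("<cell><heat>1</heat></cell>"))
actor = flatten(ET.fromstring("<actor><mood>calm</mood></actor>"))

transforms = {
    "/cell": lambda v, s: v,                      # environment is inert here
    "/actor": lambda v, s: [(p, "warm" if t == "calm" else t)
                            for p, t in v],       # actor reacts in context
}

s1 = step([env, actor], transforms)
print(s1[1])  # the actor vector after one step
```

In the real system the dictionary lookup would be the schema-sensitive selection of an XSLT transformation rather than a Python lambda.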
The state of the
environment is not necessarily the view of the environment, which is where
dynamic interfaces come in.
There is a secondary set of transformations V that define a view W_i. The
transformations are basically sensitive to the schemas of the state vectors
for handling the relevant transforms. If I define an operation σ() that
returns the schematic type (whether RDF or XSD isn't immediately germane)
of a vector, then W_i = V(σ(s_k)) S_i, k = 1, 2, ..., m, m+1, ..., m+n. It
is possible for V to also be a cellular automaton, but I'm not totally sure
that it actually adds anything to the discussion.
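A minimal sketch of that view mapping under the same stand-in assumptions (the schema operator, the state shape, and the renderers are all invented for illustration):

```python
def view(state, V, sigma):
    """W_i = V(sigma(s_k)) S_i: render every state vector through the
    view transform chosen for its schema type."""
    return [V[sigma(v)](v) for v in state]

sigma = lambda v: v["schema"]           # stand-in schema-type operator

state = [{"schema": "actor", "mood": "calm"},
         {"schema": "cell", "heat": 3}]

V = {"actor": lambda v: f"[{v['mood']} actor]",   # one renderer per schema
     "cell": lambda v: "#" * v["heat"]}

print(view(state, V, sigma))  # a textual rendering of the same state
```

The point of the decoupling is that the state S never changes here; only the rendering does.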
Given this framework, it's worth examining the nature of the
transformations. The transformations represent transition rules, and I'm
positing that the specific transformations are themselves schematically
sensitive, i.e., t_k = t_k(σ(S)). Like all automata, the transformation for
any given vector is both performed on the aggregate set of vectors and is
contextual: the transformed vector has a privileged position relative to
the transformation, while all other vectors have secondary effects upon the
transformation. In XML nomenclature: the XSLT transformation for a given
vector (chosen based upon schema) uses the vector as the primary XML source
and then pulls in a listing of all other vectors for subordinate
transformations that act on the first. The difficult part comes in the fact
that the subordinate transformations are also schema-dependent, and it's at
this point where I see RDF coming in and where I'm still trying to get a
good handle on the best mechanism for making this work. The set of
transformations would basically be generated declaratively prior to the CA
kicking in (for the closed-system case), but would be a part of the
automaton in the open-system case, where objects of various schemas enter
and leave the system. That's why I indicate the transformations as being
given as T_i rather than T.
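The privileged-position idea can be sketched as follows, assuming (for illustration only) that each subordinate, schema-dependent effect is a small function folded into the primary vector; every name here is invented:

```python
def transform(primary, others, subrules, sigma):
    """Apply the subordinate, schema-dependent effect of every other
    vector to the primary one, which keeps its privileged position."""
    out = dict(primary)
    for o in others:
        out = subrules[sigma(o)](out, o)  # secondary effect of o on primary
    return out

sigma = lambda v: v["schema"]             # stand-in schema-type operator

actor = {"schema": "actor", "warmth": 0}
cells = [{"schema": "cell", "heat": 2}, {"schema": "cell", "heat": 1}]

# One subordinate rule per schema of the *other* vectors.
subrules = {"cell": lambda p, o: {**p, "warmth": p["warmth"] + o["heat"]}}

print(transform(actor, cells, subrules, sigma))  # warmth accumulates to 3
```

In the XSLT reading, `primary` is the main source document and `subrules` is the set of subordinate, schema-chosen transformations pulled in over the other vectors.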
This characteristic is just as significant in the view case, W_i =
V(σ(s_k)) S_i, since the view of an object must be sensitive to the
existence of other objects; indeed, it may very well be worth representing
the view of the system as a cellular automaton that changes the view state
rather than one that changes the local state. This translates into the
requirement to develop a constraint system C on V that provides boundary
characteristics for objects. I'm still thinking about this one. Moreover,
it should be recognized that one facet of such a system would be the
existence of multiple view states V', V'', etc. that can be applied in the
same manner; they would utilize the same type vectors σ(s_k) but give very
different views. Ideally, the bundle V(σ(s_k)) can be interchanged with
V'(σ(s_k)), etc., since they are all dealing with the decoupled S_i system.
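A toy illustration of interchangeable view bundles: two bundles keyed on the same type vectors render the same decoupled state in very different ways (all names invented):

```python
sigma = lambda v: v["schema"]            # stand-in schema-type operator
state = [{"schema": "actor", "mood": "calm"}]

V  = {"actor": lambda v: f"text:{v['mood']}"}          # V: plain text view
V2 = {"actor": lambda v: f"<actor mood='{v['mood']}'/>"}  # V': markup view

def render(bundle):
    """Apply whichever bundle is plugged in; the state never changes."""
    return [bundle[sigma(v)](v) for v in state]

print(render(V), render(V2))  # same state, two interchangeable views
```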
Finally, I need to take into account the fact that at least a few of the
t_{k,i} transformations may include stochastic elements -- this is a user
interface, after all, and state changes in an actor a_k may be due to the
person whose agent the actor is; in other words, the t_{k,i} transformation
could just as easily be wrapped around external stimuli.
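A sketch of such a stochastic t_{k,i}, with the external stimulus modeled as a random draw standing in for live user input (all names illustrative):

```python
import random

def stochastic_transform(actor, stimulus):
    """A t_{k,i} wrapped around an external stimulus: the state change
    comes from the human behind the actor, not from the rest of S."""
    mood = "alert" if stimulus == "click" else actor["mood"]
    return {**actor, "mood": mood}

rng = random.Random()
stim = rng.choice(["click", "idle"])   # stands in for live user input
print(stochastic_transform({"schema": "actor", "mood": "calm"}, stim))
```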
Thus, there are four primary problems that I am trying to solve here to
move this from concept to application:

1) Developing the schema-driven rule transformation mechanism.
2) Creating a generalized constraint argument for C.
3) Building an effective architecture for the manipulation of multiple
stacks. This would obviously be a fairly processor-intensive system if run
on a single machine, though it actually becomes simpler when distributed.
4) Integrating an interface mechanism for modifying the t_{k,i} through
user interaction.
Sorry for dropping into the math notation, but it just makes it simpler to
conceptualize the system in the abstract, and I suspect it will be a
significant part of the formalism for the final paper. Comments are
appreciated.
--
Kurt
--
Rex Brooks
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth
W3Address: http://www.starbourne.com
Email: rexb@starbourne.com
Tel: 510-849-2309
Fax: By Request