Subject: [humanmarkup-comment] Introduction for a Semiotic Application Proposal
From: Rex Brooks <rexb@starbourne.com>
To: humanmarkup-comment@lists.oasis-open.org
Date: Tue, 09 Jul 2002 10:19:25 -0700
Title: Introduction for a Semiotic Application Proposal
(Note: Except for Len's response to the introduction, I wrote this before I got the Wolfram book. I had a telecon with a Web Services subcommittee this morning, so I finished this up at the same time, before jumping back into devouring Wolfram.)
Hi Everyone,
I thought I would provide an introduction to a new activity that Len has suggested, and which I support to a large extent. He described it to me as an experiment when I asked whether he had reached a conclusion in his study of the conceptual framework of stratified complexity put forward by Dr. Paul Prueitt. The experiment is to create an application which, to me, sounds like a semiotic communication application. The main point is to create a small-footprint processor that uses strict definitions and interpretations of the semiotic concepts of sign, signal and symbol.
In Len's words:
I want to propose a simple application language for us to explore based purely on sign theory. This is just an experiment to see if an 80/20 sweet spot can be hit with a simple language that could then be augmented with namespaces. In other words, what would it mean to leave notions of inheritance behind and deal strictly with aggregation?

len
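
To make the aggregation idea concrete, here is a minimal, purely hypothetical sketch of what such markup might look like. Every element and namespace name below is my own invention for illustration, not anything Len or the committee has proposed:

    <!-- Hypothetical: a sign is an aggregate of parts, not a subtype. -->
    <sign xmlns="urn:example:signs"
          xmlns:gest="urn:example:gesture">
      <form>raised hand</form>
      <context>classroom</context>
      <!-- Namespace augmentation: gesture-specific detail is aggregated
           in, rather than derived through an inheritance hierarchy. -->
      <gest:handshape>open palm</gest:handshape>
      <gest:motion>static</gest:motion>
    </sign>

The question the experiment asks, as I read it, is whether this kind of flat composition covers the 80% case before any inheritance machinery is needed.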
I passed this introduction by Len, and he responded:

There's nothing wrong with that. I want to explore a sign markup design because it might have immediate utility. Deriving from abstractions has a way of making people think we are agreeing when we may not be. A sign markup focuses or clarifies the semiotic aspects but doesn't force us to enumerate codes. It should just be a very simple DTD or schema for classifying signs according to the traditional semiotic types. The utility of that would be the ability to take a system of interpreters and assemble them according to their sign capabilities rather than their sensory capabilities.
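
To give a rough sense of the scale Len is describing, here is one hypothetical shape such a DTD might take. Again, all the names are placeholders of my own, not a proposal:

    <!-- Hypothetical sketch of a minimal sign-classification DTD. -->
    <!ELEMENT signSet        (sign+)>
    <!ELEMENT sign           (form, interpretation*)>
    <!ATTLIST sign
        class  (signal | symbol)             #REQUIRED
        system (observer | observed | both)  #IMPLIED>
    <!ELEMENT form           (#PCDATA)>
    <!ELEMENT interpretation (#PCDATA)>

An interpreter could then advertise which values of class it handles, which is what would let us assemble interpreters by sign capability rather than by sensory capability.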
This evolved out of the most recent discussions of the elements in our strawman base schema, which began exploring processing of information as part of the Human Markup Language itself, as opposed to adhering to a language model consisting solely of a vocabulary.
However, while I support conducting this experiment, I do not believe
that we need to do this to the exclusion of continuing to work on our
Primary Base Schema. If the experiment proves out, and a simple
application language can fill our needs, so be it. However...
My own opinion is that a semiotic processor is needed, and it makes
sense for us to develop it, since it will improve the delivery of
Human-Centric information both in terms of fidelity and in terms of
clarity.
This is not related to my call for a subcommittee to study the need for a high-level ontological framework, but it could work with such a framework, since it seems to focus on the processing rather than the theory, unless I am reading this wrongly.
Yet, a word of caution is needed here at the beginning of this
effort.
The caution is that we must remain open to the distinct possibility that this effort may well spin off into its own technical committee. In fact, I want us to be ready to spin that effort off sooner rather than later, because it is rife with areas certain to engender conflicting opinions, and I don't want our effort, which has been characterized by the most unchaotic and unconflicted progress of any such group in my experience, to be fragmented as a result.
I am passing this by Len's eyes prior to posting because I want him to correct any misstatements I make as I express some further introductory statements concerning this.
There are several aspects to this experiment which I am interested in
refining.
One, since I have already expressed a preference for the DAML+OIL upper-level ontology as a foundation, I am hoping that our experiment shows that it is a good idea to have an upper-level ontology. However, it is the worst possible methodology to have a predefined observation or conclusion you seek to validate. Please refer to HM.frameworks for what I mean by ontology.
Two, a marriage of autopoiesis:
http://www.cs.ucl.ac.uk/staff/t.quick/autopoiesis.html#observe
and semiotics:
http://www.aber.ac.uk/media/Documents/S4B/sem01.html
will actually have a much more widespread applicability than HumanMarkup, so our work may prove seminal to another effort, or spin off into its own activity. It is particularly useful for organizing, accessing, and using data systems.
My thinking is more aligned with complex adaptive systems than with stratified complexity, and my personal viewpoint falls short of accepting knowledge theory as a system that exhibits the attributes of cellular automata, which is what autopoiesis posits in confirming the necessity of the "situatedness" of observer and observed. I really just want my biases understood in saying this.
If what I have said makes no sense to you, I suggest you have a lot of studying ahead of you. The terminology of this field sounds much more complex than the ideas themselves actually are. In essence, autopoiesis says that cognitive units (you, me, artificial agents) are themselves part of the processes, and made up of the same stuff as the processes, which they observe. (This is the basic significance of "situatedness" and boils down to a conscious acceptance of, and allowance for, uncertainty based on an observer's effect on the "observed." Think of the observer as the observer's own blind spot.)
Because cognitive "entities" are part of what they observe, they exhibit the attributes of cellular automata, and the entire field gets a bit more complex from that point. However, what it boils down to for us, in our attempt to develop a semiotic processor, is to make clear: what constitutes a sign, and to what system (observer, observed, or a combination thereof) it belongs; what constitutes a signal, and how we can agree on what a signal means, i.e., what "message" is being sent; and what constitutes a symbol, and what the symbol stands in place of, i.e., what level of abstraction the symbol characterizes. Lastly, can we construct a self-consistent system of clear signs, signals and symbols that operates to improve communication?
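
Using the hypothetical DTD sketched above, the kind of distinctions I mean might be recorded like this; the content is illustrative only:

    <!-- Illustrative only: one sign read as a signal, one as a symbol. -->
    <signSet>
      <sign class="signal" system="observed">
        <form>hand wave</form>
        <interpretation>greeting in progress</interpretation>
      </sign>
      <sign class="symbol" system="both">
        <form>national flag</form>
        <interpretation>the nation it stands in place of</interpretation>
      </sign>
    </signSet>

Whether a system of interpreters can agree on those class and interpretation values is, to my mind, exactly the self-consistency question above.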
Again, in Len's own words:

"(Reference to previous correspondence)... it isn't a bad model for understanding why multiple observers get/have a different story. See
http://www.csu.edu.au/ci/vol03/paper4/paper4.html
for a better explanation. This in many ways is just behavioral cybernetics from the perspective of biology. In effect, emotions are not discrete as much as they are a sum of various internal systems creating an effect across the system. That is pretty much what we said to begin with. I've been harping from time to time on the notion of 'observables' and these notions work with that. I want to focus on sign systems because if our domain is human communications, our task is to model the gestures of that, first, and then only by way of providing possible interpretations, does the rest of the human modeling become useful."

len
Hmmmn.
Ciao,
Rex