

Subject: [xtm-wg] an introduction to the BCNGroup beadgames



This is a long "entangled bead"

For an explanation, see:

http://www.ontologystream.com/area1/primarybeads/bead1.htm

**
Protege (out of Stanford) is still the best hope among all publicly known
knowledge technologies (in our opinion).


Topic maps have the quality of being an ISO standard, but I do not think the
community has been able to focus on the true issues of knowledge
representation.  The following is a dialog between Drs. Paul Prueitt and
Dick Ballard.  The dialog is complex... but it may reveal many or most of
the issues that presently limit the application of computer science to the
problems of knowledge representation and knowledge asset management.

****

Dr Ballard said:

"<..Paul> As you might have surmised from my ontology paper, I have
a single uniform coding system for all concepts. In any given knowledge
base, absolutely everything (Content and Tool components) is characterized
by a (Concept Sub-Code) and (Instance Sub-Code) pair."

<Dr Prueitt>

Richard, I do not have first-hand experience in testing your encoding
structure.  Setting up such a test is precisely what OntologyStream is now
doing for several clients.  (I mean, we are testing knowledge technology.)

I wonder if the business processes around you might consider enabling
OntologyStream and the US Einstein Institute (U. Conn) to make an evaluation
of the Mark 3?

In this way, we may be able to better communicate your innovations and a few
other innovations into the marketplace.
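
To make the coding idea concrete, here is a minimal sketch in Python (our
own illustration with invented codes, not the actual Mark 3 encoding) of a
knowledge base in which absolutely everything is characterized by a
(Concept Sub-Code, Instance Sub-Code) pair:

from dataclasses import dataclass

@dataclass(frozen=True)
class Code:
    concept: int    # Concept Sub-Code: which concept (model/type) this is
    instance: int   # Instance Sub-Code: which particular occurrence of it

# Every item in the knowledge base, Content or Tool component alike,
# is keyed by one uniform (concept, instance) pair.
kb = {
    Code(concept=101, instance=0): "aircraft (the concept itself)",
    Code(concept=101, instance=2): "a particular airframe (an instance)",
    Code(concept=205, instance=1): "top-speed estimator (a tool component)",
}

print(kb[Code(101, 2)])   # uniform retrieval by the code pair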

***

You also said:

"Mk 3 assumes layers where each layer may be constructed by different
knowledge base developers, so across these layers there may exist wide
differences in the identifiers attached to each given model and even to the
instance assignments to a given model."

<Dr Prueitt>

This architecture fits perfectly with the stratified notion that each level
of a multi-layer taxonomy (ontology) should have gaps where the measurement
process must occur.

In the intrusion architecture that Don Tobin and I are discussing, we have
four levels: { data compression dictionary, Intrusion Detection System
output, incidents, goals (of attacker) / policy (of defender) }.

I have a test collection from one of my clients that is drawn from the ACID
database (Army CERT Incident Database) - where a CERT is a regional computer
emergency response team.

I need to develop a middle layer to the Tobin/Prueitt anticipatory intrusion
detection architecture (AIDA).
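
A small sketch (assumed structure and invented identifiers, ours) of what
such layers with independent identifier namespaces, and the measurement gaps
between them, might look like:

# Hypothetical illustration: each layer names its own items
# independently, as different developers would.
layers = {
    "L1_compression_dictionary": {"tok-77": "scan-pattern token"},
    "L2_ids_output":             {"alert-9041": "port-scan alert"},
    "L3_incidents":              {"inc-12": "coordinated scan incident"},
    "L4_goals_policy":           {"goal-3": "attacker reconnaissance"},
}

# The gap between layers is where a measurement/aggregation process
# must decide which lower-layer items ground an upper-layer item.
grounding = {
    ("L2_ids_output", "alert-9041"): [("L1_compression_dictionary", "tok-77")],
    ("L3_incidents", "inc-12"):      [("L2_ids_output", "alert-9041")],
    ("L4_goals_policy", "goal-3"):   [("L3_incidents", "inc-12")],
}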

***

You also said (smile - I have never known you to talk so much):

"The integration of knowledge across different developers and
different copyright holders becomes a matter of relating the particular
(Layer, Concept, Instance) codes across all knowledge contributors, where
their concepts do in fact overlap."

<Dr Prueitt>

Now this is very exciting.  The BCNGroup Charter directs us to mine
scientific publications for emerging intellectual property, claim this
property for the innovators, and then allow the innovators a choice about
whether the property may be assigned to the BCNGroup for community purposes.
The harvest of IP from the science community might be done with the Mark 3
AND the (modified) AIDA.  The value proposition is huge, and Laramie is
working on capitalization of a process that will get the BCNGroup
membership started:


http://www.fourthwavegroup.com/bcn/universal_sandbox_project.htm

 The "universal sandBox" is the key as the sandBox allows beadgames to be
played --> protected small virtual group collaboration at the innovation
level (example: the Int_group is now working out the Tobin/Prueitt AIDA IP).
In these protected virtual collaborations the development of IP is occuring
very rapidly while at the same time both a branding language (for public
facing communications) and a IP disclosure (for patent applications) is
automatically produced.
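
For readers who want the mechanics of Dr Ballard's integration point, a
minimal sketch (ours, with invented codes) of relating (Layer, Concept,
Instance) codes across contributors where their concepts overlap:

# Two contributors code the same underlying concept differently.
contributor_A = {("design", 101, 1): "jet engine"}
contributor_B = {("propulsion", 7, 42): "turbojet engine"}

# Integration is a table of code-to-code equivalences, asserted only
# where the concepts genuinely overlap.
overlap = {
    ("A", ("design", 101, 1)): ("B", ("propulsion", 7, 42)),
}

# Given one contributor's code, look across to the other's.
print(overlap[("A", ("design", 101, 1))])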

***

and also Dr Ballard said:

"The task of integrating layers and finding these concept overlaps in ways
that are not obscured by word choice, obscure definitional distinctions,
etc. is of course the ultimate problem of semantics and our primary use of
"content ontologies" (Kipfer, et.al.)"

<Dr Prueitt>

Dr Fiona Citkin is (in my opinion) the strongest mind in regard to the
notions of terminology comparison science (a concept that she and four other
Russians developed in the 1980s).  She and her husband Dr Alex Citkin are
both BCNGroup Founders.  Their work started in the Soviet Union, but has now
been placed within the protection of the BCNGroup in the United States.  (I
say this simply because of the great regard we (the BCNGroup Founding
Committee) have for the Citkins' work.)

***

and also:

"Now I do not know whether our cross layer semantic alignments and
integration is an intrusion to you?

"Again with respect to "syntagmatic" units <a, r, b>. In what way are your
concepts (a, b) different from r, i.e. is "r" a concept too, are concepts
recursive to all orders of logic. As you may have seen in the ontology
paper, our Primitive Sub-Codes are the only intrinsic property that we use
to distinguish the fundamental conceptual differences enumerated by our 18
(Ballard -Sowa - Peirce) ontological primitives. Mark 3 allows for even
greater range, our work says that Peirce's "thirdness" (Sowa's "Mediating"
concepts) is not enough, our work with "paths" readily flows through and
integrates Mediating Concepts so we think that whatever your highest order
"logical structures", "Paths" will always be at least one greater."

<Dr Prueitt>

Yep...  *smile*.  The r are all drawn from a set of stuff that becomes
inference rules when aggregated into a situational logic.  They are
**completely** separated from the concept atoms.  They play the role of a
(Peircean?) semantic valence.  Pospelov told me (personal communication -
Moscow 1997) that there were 117 types of semantic valence and that this
was language independent.  Some of the thought that leads to this (perhaps
now lost Soviet research) is in Pospelov's translated but unpublished 1984
book "Situational Control".  The notions that Finn developed establish a
formal foundation for situational logics that are open to new axiom
reification - and this too has largely been lost (except:
http://www.bcngroup.org/area3/pprueitt/kmbook/Chapter9.htm)
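
A minimal sketch (ours, with invented valence names standing in for
Pospelov's reported 117 types) of keeping the r's in their own set,
completely separated from the concept atoms:

# Concept atoms and relation types live in strictly separate sets.
concepts = {"convoy", "bridge", "river"}

# A stand-in catalog of semantic valence types; the real inventory
# (reportedly 117 language-independent types) is not reproduced here.
valences = {"crosses", "spans", "blocks"}

def syntagm(a, r, b):
    """Build an <a, r, b> unit, enforcing the concept/relation split."""
    assert a in concepts and b in concepts and r in valences
    return (a, r, b)

# Aggregated into a situation, such units can act as inference rules.
units = [syntagm("convoy", "crosses", "bridge"),
         syntagm("bridge", "spans", "river")]
print(units)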

***

and also:

"Now by far the most interesting question about <a, r, b> is: are the bundle
of relationships you talk about based upon "r" ranging over a variety of
different concept  types (Concept Sub-Codes) or is your bundle a set of
relationship instances of the same type. That is the $64,000 question, as
they used to say.

"The question of relationship instancing is absolutely critical. Relational
databases define relationships by r type and by endpoint "identifier keys"
like "a" and "b". This is the absolutely fatal flaw in Codd's work and the
sure death ultimately of databases. Conceptual modeling formalisms like UML
that presume already the physical modeling choice and environment insist
upon the "endpoint" labeling of relationships and forbid relationship
instancing."

<Dr Prueitt>

Richard, there is no question in my mind that the set of r is a class of
types that can be organized into a periodic table.  (Why not?)  This
wonderful (and common-sense) concept seems to have arisen most clearly in
the classified work of the Soviet semioticians... but it was completely
missed by the Army (Tom Reader's group) and was not mentioned in the reports
on the same written by the Peircean scholar Robert Burch (Texas Tech).  But
the wonderful concept is all over Pospelov's "Situational Control", and in
his numerous presentations at the Army conferences of 1994 - 1997.  My
private conversations on this confirmed that this wonderful concept was THE
CORE of applied Russian semiotics.

The idiotic attachment to binary relationships and rule-based systems, and
the problems related to this artificial concept, are what keep us, as a
society, from developing knowledge technology...  The topic maps community
seems unable to move beyond this problem.  As for the death of Codd normal
form... <long live the new King, King XML/> we agree that Codd normal form
is an instance of the Rosen category error.  Were topic maps to become
useful (as personal knowledge management technology), they would, as a
first step, have to better accommodate the distinction between addressable
subjects and non-addressable subjects.
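
The instancing point is easy to show in a sketch (ours): keyed only by
(r type, a, b), repeat occurrences of the same relationship collapse,
whereas instanced relationships keep their own identity:

import itertools

# Endpoint-keyed style (the relational-database habit): a relationship
# is identified only by its type and endpoints, so a second occurrence
# of the same link is indistinguishable from the first.
endpoint_keyed = set()
endpoint_keyed.add(("employs", "acme", "smith"))
endpoint_keyed.add(("employs", "acme", "smith"))   # collapses silently
print(len(endpoint_keyed))                         # -> 1

# Instanced style: every relationship occurrence is its own object and
# can carry its own attributes (dates, context, provenance).
rel_id = itertools.count(1)
instanced = [
    {"id": next(rel_id), "r": "employs", "a": "acme", "b": "smith", "year": 1998},
    {"id": next(rel_id), "r": "employs", "a": "acme", "b": "smith", "year": 2001},
]
print(len(instanced))                              # -> 2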

***

and also:

" Once you accept the "instanced vector" nature of even the binary
representations, then the conceptual jump to the "instanced vector n-ary"
relationship is an easy step particularly for physicists who teach routinely
of abstract phase space dimensions approaching Avogadro's number 6.023E+26
(cgs). These are, for scientists and engineers, routine conceptual exercises
although for real world problems the dimensions for most considerations
shrinks to 1.0E+3 and ontologies toward 1.0E+6."

<Dr Prueitt>

This is the notion behind "structural holonomy", where the natural linkages
between tokens settle out (via link or n-gram analysis).  This notion has
been formally defined as a mathematical tensor object (placed in memory in
a Forth OS) by a colleague of mine, and a new notion of relational database
has been developed from it:

http://www.ontologystream.com/OS/PMIM.htm

I mapped his notion to the notions of Karl Pribram's holonomy theory of
brain function (Brain and Perception, 1991, Erlbaum).  The similarity of
these concepts needs to be traced using the M-CAM IP technology:

www.m-cam.com

(They do link analysis on the Patent and Trademark database, and have some
wonderful visualizations of the nearness of IP in IP spaces.)
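
A sketch (ours, with invented slots) of the "instanced vector n-ary"
representation Dr Ballard describes: one relationship instance holding many
concept instances at once, rather than a web of binary links:

# One n-ary relationship instance, carried as a single vector whose
# slots are concept instances.  An equivalent binary encoding would
# shatter this into many separate <a, r, b> links.
nary_instance = {
    "id": "design-0007",
    "r_type": "aircraft design-performance",
    "slots": {
        "airframe": "delta wing",
        "engine": "turbofan-X",
        "payload_kg": 9000,
        "top_speed_mach": 1.8,
        "range_km": 3200,
    },
}
print(len(nary_instance["slots"]), "conditions held in one instance")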

***

 and also (the next six paragraphs):

"If we try to use an indexed database, to store n-aries it takes N(N-1)
index
entries to quickly capture and retrieve any n-ary and that complexity, for
real world problems, explodes to "non-computability". Hence, OLAP and any
number of retrieval solutions including mine. So why am I not seeing heavy
use of n-ary models?

"The n-ary vector is the natural representation of anything "conditional".
The larger the N the greater the number of conditions required to be
satisfied. One could choose aircraft design for example. List all the
different models, all the different configurations of airframe and engine,
all the different payload requirements, fuel capacities, all the performance
variables, altitude, maximum speed, range, etc. Each aircraft would be a
single vector instance and all the concept (instances) on that vector would
represent the particular design choices they made and the resulting
performance that they achieved.

"Some instances (top speed), (engine type) might be the same. Those concept
instances would show up on both vectors, but other choices and performances
would make the n-ary instances different overall. It is not important that
the number or order of concepts match up. They will be naturally different,
two airplanes may use completely different systems or technologies to
produce the same operational function.

"In the end this set of (r type = aircraft design - performance) vectors
describes all known aircraft design performance possibilities. Now the
reason that a particular aircraft engine produces a given top speed is not
explained by this vector. That relationship is explained deeper in the
knowledge base at finer levels of conceptual granularity. What the n-ary
focuses us on is the possible decisions and their outcomes. To engineers,
this is the "trade space" where on the basis of existing designs and
technology, the constraints add up to require specific choices and
possibilities. They can see in moments the experimental consequence of
trading one choice off against another while focusing on the most demanding
performance requirements.

"Obviously designers may hope that a different design or technology might
let
them do something new. Knowledge of the trade space is one of the key
elements of expertise and knowledge superiority, the size of the base and
the speed in using it. In knowledge management, knowledge of trade space is
among the first to be lost (requirements is another)."

*

<Dr Prueitt>

Yep.  And this is the basis for knowledge asset management.  It is very
simple.
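
As a simple sketch of such a trade space (our own illustration, with
invented numbers, not real aircraft data):

# Each known aircraft is one (r type = design-performance) vector.
trade_space = [
    {"model": "A", "engine": "turbojet",  "payload_kg": 5000,
     "top_speed_mach": 2.0, "range_km": 1500},
    {"model": "B", "engine": "turbofan",  "payload_kg": 9000,
     "top_speed_mach": 0.9, "range_km": 6000},
    {"model": "C", "engine": "turboprop", "payload_kg": 4000,
     "top_speed_mach": 0.6, "range_km": 3000},
]

# Trading one choice off against another: which existing designs
# satisfy a demanding mix of requirements?
feasible = [v for v in trade_space
            if v["payload_kg"] >= 4500 and v["range_km"] >= 1200]
print([v["model"] for v in feasible])   # -> ['A', 'B']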

***

(and also: the next six paragraphs.)


"Now we come to the issue of procedure versus declarative knowledge. It
takes
years of engineering, and billions of dollars in labor and computing to
produce one such vector result. What if at night someone had explored that
design space by running all the other reasonable design choices through the
simulation or performance estimating software and saved all those other
vectors. We call this "sampling the design space" or "learning". It can
happen once long before we need it again.

"Years later the procedure and those who know enough to make it run may or
may not be around. Today it takes up to 14 years to produce a new airplane,
so the odds of exploring that space later is pretty remote.

"The basic problem with most design problems is that design is a one way
process, propose a design, then spend years testing and perfecting it.
Process and procedure is too often a non-reversible activity.

"By contrast, the customer does not want design. What they are looking for
is
"something", maybe anything, with a given performance. Stored as a
"declarative vector" the aircraft design - performance space has no
preferred direction. It is just as easy to start with performance and see
all the designs that come closest to that required and all the other,
incidental characteristics. Now what is the computing cost for examining the
stored declarative trade space. By knowledge theory at worst it should be
proportional to information content and virtually instantaneous.

"What would be the cost instead of browsing that space with the design
(programs) procedure? It should take less than 14 years, if we have anything
left to start with.

"The declarative knowledge representation is about learning once and never
forgetting. The procedural approach is about never learning and always
starting over. The procedural approach makes sense in an information rich
environment where things are never the same or where the cost of remembering
is the greater burden. The declarative approach makes sense in a meta rich
world were knowledge is stored in n-aries and put in places that never die."
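
The "no preferred direction" claim can be demonstrated on the same kind of
sketch (invented data again): starting from performance rather than design
is just a different filter over the same stored vectors:

# The same style of declarative vectors as in the earlier sketch.
trade_space = [
    {"model": "A", "engine": "turbojet",  "top_speed_mach": 2.0},
    {"model": "B", "engine": "turbofan",  "top_speed_mach": 0.9},
    {"model": "C", "engine": "turboprop", "top_speed_mach": 0.6},
]

# Forward direction: given a design choice, read off its performance.
print([v["top_speed_mach"] for v in trade_space
       if v["engine"] == "turbofan"])                 # -> [0.9]

# Reverse direction: given a required performance, read off designs.
print([v["model"] for v in trade_space
       if v["top_speed_mach"] >= 1.5])                # -> ['A']

# Either query is a linear scan over stored vectors; no 14-year
# re-run of the design procedure is needed.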

*******

<Bead master/>

Thank you, Drs. Prueitt and Ballard, for this presentation of concepts.
These have been forwarded into the beadgames (protected sandboxes),
beadgame and Int_group, for further discussion.  Anyone wishing to join
these two games may apply by sending a message to me at

beadmaster@ontologystream.com

With respect and apologies to everyone.








