Subject: RE: [xtm-wg] Re: Inquiry Into Inquiry
- From: "Paul Stephen Prueitt" <bcngroup@erols.com>
- To: <xtm-wg@yahoogroups.com>
- Date: Thu, 26 Jul 2001 13:50:21 -0700
John Sowa states the precepts of strong AI very well. My questions, to him and to the participants, are questions of relevance and adequacy.
First question. Is the separation of information into just two specific (non-overlapping) categories, declarative and procedural, adequate to the development of a computational paradigm that supports everyday, individual human knowledge management?

I think that the answer is no.
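For concreteness, the declarative/procedural split at issue can be sketched as follows. This is a hypothetical illustration, not something from the post; the names (`facts`, `lookup`, `boiling_point_c`) are invented for the example.

```python
# Hypothetical illustration of the declarative/procedural dichotomy:
# the same piece of knowledge, captured two ways.

# Declarative: the knowledge is stored as data and read by a
# generic, knowledge-free query procedure.
facts = {("boiling_point_c", "water", "sea_level"): 100}

def lookup(prop, subject, context):
    """Generic query over stored facts; knows nothing about water."""
    return facts.get((prop, subject, context))

# Procedural: the same knowledge is embedded in the computation itself.
def boiling_point_c(substance, context):
    if substance == "water" and context == "sea_level":
        return 100
    raise ValueError("unknown substance or context")

# Both encodings yield the same answer.
assert lookup("boiling_point_c", "water", "sea_level") == 100
assert boiling_point_c("water", "sea_level") == 100
```

The question above asks whether every kind of knowledge a person manages day to day fits cleanly into one of these two bins.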
My answer does not reflect JUST one person's opinion, but is something that can be grounded in empirical evidence and structured (human) reasoning. But this discussion is **not allowed** (read: **not funded**), simply because of the dominance of the strong-AI academic and scholarly disciplines. This dominance is not based on results, I hold, but rather on denial of a clear and obvious truth.
I also hold that the results of AI do not accrue to the understanding of how to build knowledge technologies for non-computer scientists. AI has become irrelevant because it is not adequate to the biological and social processes involved in knowledge sharing. Computers for data storage and communication are of value; that is not the claim in question. The claim is that AI is a false paradigm that long ago lost its legitimacy.
John Sowa's note reads, in my mind, as: "Well, yes, there is no empirical grounding of AI in any real science. We in the AI community decided long ago that empirical biological science means nothing to us."

Is this really the position being taken, or am I mistaken?
The **Value Proposition** is that a real knowledge technology is possible, but only after the strong-AI paradigm is buried and a wooden stake put through its heart.
A second **Value Proposition** exists for AI. This value proposition is: IF AI is properly understood as part of the sciences of the artificial (see Herbert Simon's book, The Sciences of the Artificial), then its value to society will be enhanced. So in computer security systems AI should have a high value, higher than is recognized today.
AI science is defined by AI scholars as an inquiry into "What are the logical foundations of learning and reasoning?" This definition highlights the issue exactly.
Second question. Where does the AI community obtain the right to ground its use of the terms "learning" and "reasoning"? The terminology is used AS IF what is meant were of the nature of human learning processes and human reasoning processes. The meaning of the term "learn" is thereby altered.
The grounding is in a scientific literature that the scholars of its own discipline (cognitive neuroscience) have since walked away from, and in a logic literature that logicians should have walked away from since the time of Gödel and Church. The grounding is in the history of Western philosophy, and this history has long ago become mostly a mental exercise in determining how many angels dance on the head of a pin.
We MUST move beyond. The old foundation is gone. A new foundation is possible. (Why not?)

Will you help us?
Paul Stephen Prueitt wrote:
> I ask John Sowa to make a comment about the fact that Tulving has declared
> his former views, regarding the distinction between semantic and episodic
> memory, as being a distinction that has been found to be lacking. Are we
> simply to ignore this history?
I never regarded it as a crucial distinction upon which everything else stands or falls. In AI, the distinction between the definitional networks and the assertional networks has been a useful way of dividing up the task and organizing the various pieces of the puzzle. And in fact, the approach that I have been developing in recent years can be interpreted in different ways, some of which could be viewed as supporting either position.
> If science cannot walk away from an established paradigm, when evidence sets
> it aside, then why have a notion of falsification at all?
There were never any claims that could be or have been falsified. The AI systems use both definitions and assertions. Any particular proposition that follows from the conjunction of both would still be derived whether the two kinds of information were stored separately or together.
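Sowa's storage-independence point can be sketched with a toy forward-chaining deriver. This is a hypothetical illustration, not code from the thread; the `closure` function and the cat/mammal rules are invented for the example.

```python
# Hypothetical sketch: a proposition entailed by definitions plus
# assertions is derived either way, whether the two stores are merged
# up front or consulted separately and pooled at inference time.

def closure(facts, rules):
    """Forward-chain rules (premises -> conclusion) to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# "Definitional" store: rules that hold by definition.
definitions = [
    (("cat(Tom)",), "mammal(Tom)"),
    (("mammal(Tom)",), "animal(Tom)"),
]
# "Assertional" store: contingent facts.
assertions = {"cat(Tom)"}

# One merged knowledge base, derived in a single pass...
merged = closure(assertions, definitions)
# ...versus separate stores: take the assertional store alone,
# then consult the definitional store at query time.
staged = closure(closure(assertions, []), definitions)

assert merged == staged
assert "animal(Tom)" in merged
```

Because the closure is monotone, partitioning the knowledge base does not change which propositions are ultimately derivable, which is the sense in which the separate/together question carries no logical weight.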
There are several distinct fields:

1. Neuropsychology: How do human and animal brains work?
2. AI science: What are the logical foundations of learning and reasoning?
3. AI engineering: How does one build intelligent machines that learn and reason effectively?
Each of these fields has had some influence on each of the others, but no particular result from any one of them necessarily contradicts any result of any of the others. For example, you may have logical theories that are inefficient in any neural or silicon implementation. Or you could find others that are very good candidates for one, but not the other.
John Sowa
To Post a message, send it to: xtm-wg@yahooGroups.com
To Unsubscribe, send a blank message to: xtm-wg-unsubscribe@yahooGroups.com
Your use of Yahoo! Groups is subject to the Yahoo! Terms of Service.