Could XDI be
decidable?
Looking at it one way, XDI can't be decidable without resolving the issue of XRI being usable as both a role and a concept.
This is true if we say XDI has n-ary predicates where n > 1, i.e. predicates that take two or more arguments.
For example, the $p in $s/$p/$o is a binary predicate, i.e. $p($s, $o).
But...
.. Every XDI $ predicate is a form of $is$a expression, or a negation of such an expression
.. Every $is$a negation is an $is$a of the complement of the object
.. Every $is$a statement can be viewed as a monadic predicate testing set membership in the object; essentially, the object is the predicate
.. Every XDI non-$ predicate of the form $S/$P/$O is a monadic predicate expression of the form $S$P($O)
.. Every one of the monadic predicates above is a test for set membership, which matches the XDI model, because we've said XDI is based on set membership
.. Essentially this turns XDI into a monadic predicate calculus
.. As a result, I propose we can say XDI is decidable
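The rewriting step at the heart of this argument can be sketched in plain Python. This is a minimal sketch under assumptions: the identifiers (=alice, +friend, +person) are made-up stand-ins for real XRIs, and the dictionary-of-sets store is my illustration, not actual XDI syntax or any XDI API.

```python
# Toy triple store: XDI statements $S/$P/$O held as binary predicates P(S, O).
# Identifiers here are made up for illustration, not real XRIs.
triples = {
    ("=alice", "+friend", "=bob"),
    ("=alice", "$is$a", "+person"),
}

# Rewrite each binary predicate into a monadic one: the pair (S, P)
# names a set, and the statement becomes a membership test $S$P(O).
monadic = {}
for s, p, o in triples:
    monadic.setdefault((s, p), set()).add(o)

def holds(s, p, o):
    """Monadic form $S$P(O): is o a member of the set named by (S, P)?"""
    return o in monadic.get((s, p), set())

print(holds("=alice", "+friend", "=bob"))    # True
print(holds("=alice", "+friend", "=carol"))  # False
```

The point of the sketch is that after the rewrite, every question asked of the store is a one-place membership test, which is the shape a monadic predicate calculus requires.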
This needs a proof from me and Giovanni; I need to get with Giovanni and discuss how difficult this proof would be. There are existing proofs that the monadic predicate calculus is decidable, so I think we would only need to prove that XDI is a monadic predicate calculus, and decidability would follow.
Why is decidability important/useful? Because it's necessary for reasoning that is guaranteed to complete without hanging, and to complete in a usable time frame.
Why is semantic
reasoning useful?
In
short:
.. You can mine your
data for new knowledge much more efficiently and garner knowledge you couldn't
without reasoning
.. You can automate
data mediation from one data structure to another (e.g. PDX Person to FOAF
Person)
.. You can easily
add metadata to your data, or extend your data, without breaking or
rewriting existing data processing
.. Your search,
visualization, and reporting tools now can present you with the needles of
knowledge that you need, rather than haystacks of raw data you need to sift
through - saving you time, money, and manpower.
"I'll believe it
when I see it, who's actually using this semantic stuff now?", I've heard that
in many forms many times over the last year, but...
.. Netflix
.. Google
.. Twitter
.. US Government
.. Best Buy
From the third article above, here are the ways semantic reasoning can be used:
Consistency - determine if the model is consistent. For example, the article presents an OWL model containing the facts: (a) cows are vegetarian, (b) sheep are animals, and (c) a ‘mad cow’ is one that has eaten sheep brain. From these facts a computational reasoning engine can infer that the ‘mad cow’ class is inconsistent, since any cow eating sheep brain violates (a). The article's (incomplete, but informative) OWL snippets illustrate some salient issues: mad_cow is an intersection class, defined as any cow that has a property ‘eats’ whose range (i.e. what it eats) is a part_of a sheep and that part is a ‘brain’; ‘sheep’ is defined as a subclass of ‘animal’, while ‘cow’ is a subclass of ‘vegetarian’.
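Since the OWL snippets themselves aren't reproduced here, the gist of the consistency check can be sketched with plain Python; the class and property names below are my own stand-ins, not the article's OWL identifiers, and "Vegetarian" is read as "eats no animal parts".

```python
# Toy model of the mad-cow example: axioms as simple Python facts.
# Names (Cow, Vegetarian, SheepBrain, ...) are illustrative assumptions.
subclass_of = {"Cow": "Vegetarian", "Sheep": "Animal"}  # axioms (a), (b)

# Axiom (c): a MadCow is a Cow that eats sheep brain; sheep brain is a
# part of an animal (a Sheep).
mad_cow_eats = {"SheepBrain"}
animal_parts = {"SheepBrain"}

def mad_cow_satisfiable():
    """A MadCow is a Cow, hence a Vegetarian, yet it eats an animal
    part; no individual can satisfy both, so the class must be empty."""
    is_vegetarian = subclass_of["Cow"] == "Vegetarian"
    eats_animal_part = bool(mad_cow_eats & animal_parts)
    return not (is_vegetarian and eats_animal_part)

print(mad_cow_satisfiable())  # False: the 'mad cow' class is inconsistent
```

A real OWL reasoner derives the same contradiction from the class axioms themselves; the sketch just shows the shape of the argument.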
Subsumption – infer knowledge structure, mostly hierarchy; the notion of one artifact being more general than another. For example, the article presents a model incorporating the notions (a) ‘drivers drive vehicles’, (b) ‘bus drivers drive buses’, and (c) ‘a bus is a vehicle’, and subsumption reasoning allows the inference that ‘bus drivers are drivers’ (since ‘vehicle’ is more general than ‘bus’).
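The subsumption inference can be sketched the same way, treating classes as sets of instances; the individuals (d1, bus1, etc.) are made up for illustration.

```python
# Toy model: "drivers drive vehicles", "bus drivers drive buses",
# "a bus is a vehicle". Individuals are invented for illustration.
drives = {("d1", "bus1"), ("d2", "car1")}
vehicles = {"bus1", "car1"}
buses = {"bus1"}
assert buses <= vehicles  # a bus is a vehicle

# Drivers are those who drive some vehicle; bus drivers, some bus.
drivers = {x for (x, y) in drives if y in vehicles}
bus_drivers = {x for (x, y) in drives if y in buses}

# Because buses is a subset of vehicles, anyone driving a bus drives
# a vehicle, so every bus driver is a driver.
print(bus_drivers <= drivers)  # True
```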
Equivalence - determine if classes in the model
denote the same set of instances
Instantiation - determine if an individual is an instance of a given Class. This is also known as ‘classification’ – that is, determining which Classes a given individual instantiates.
Retrieval - determine the set of individuals that
instantiate a given Class
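Under the same classes-as-sets reading, these last three reasoning tasks reduce to one-line set operations. The class and individual names below are illustrative assumptions, not from the article.

```python
# Classes modelled as sets of individuals (illustrative names).
animal = {"sheep1", "cow1"}
sheep = {"sheep1"}
ovis = {"sheep1"}  # a second name denoting the same set of instances

# Equivalence: do two classes denote the same set of instances?
print(sheep == ovis)                       # True

# Instantiation: is an individual an instance of a given Class?
print("cow1" in animal)                    # True

# Retrieval: the set of individuals that instantiate a given Class.
print({x for x in animal if x in sheep})   # {'sheep1'}
```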