

Subject: Long and Not Much to the Point: Re: HM.applications-Profiling-Level of Details/Abstraction


> Public safety systems already have all of that
> information.  If it is abused, that is already a
> problem.   But it is much worse if that is not in a
> standard form because then when it can be used for
> the right thing, it is much harder.

This is one of those domains where the ethics of the situation becomes very
unclear, something that seems to happen with increasing frequency the more
interconnected we become. At what point does surveillance for the public
good become surveillance for public control?

One concept that I've found more and more relevant is the notion of the
transparency of information interfaces. In a typical electrical circuit,
the more you increase the amperage (the current) in a circuit, the more heat
you generate, because of resistance. A good electrical engineer knows
how to use that resistance, or its magnetic analog impedance,
constructively. However, one consequence of either resistance or impedance
is that it also tends to damp circuits and keep them from "leaking" into
other circuits. If you create the same circuit but use superconducting
materials you get more efficient systems, but you also get more transparent
ones -- one circuit's magnetic fields will interact far more strongly with
others, and the relatively linear effects that you see go non-linear.

XML is in many respects a very powerful superconductor. Its first order
effect (as manifest in HTML, which for purposes of description can be
thought of as a first-pass XML system) was to make application content
transparent -- the web works because large swathes of people can write HTML
easily, because even with variations in browsers the HTML that I write you
can read regardless of operating system or platform, and because it contains
the core features that describe many applications out there. The second
order effects are really beginning to come into play now -- the transparency
of application development, the transparency of user interfaces, the
transparency of data transport standards. These transparencies are making it
possible to turn devices with distinct semantics and syntaxes into abstract
devices with common semantics and a single syntax. These second order
effects solve another very real problem: moving the descriptions of objects
(invoices, auto-parts, people) from a fairly fruitless search for
universal descriptors for all objects to recognizing that an object's
definition -- its ontology -- is ultimately local. By localizing
variations, you make data interchange far more transparent, which was
ultimately the goal of such initiatives as B2B.
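
As a purely hypothetical illustration (the element names and namespaces
below are invented; they are not drawn from any actual B2B vocabulary),
two trading partners might describe the same invoice in their own local
vocabularies, distinguished only by namespace, while sharing a single
syntax:

    <!-- Partner A's local vocabulary (hypothetical namespace) -->
    <inv:invoice xmlns:inv="http://example.com/partnerA/invoicing">
      <inv:number>A-10023</inv:number>
      <inv:total currency="USD">1250.00</inv:total>
    </inv:invoice>

    <!-- Partner B's local vocabulary for the same document -->
    <bill:rechnung xmlns:bill="http://example.com/partnerB/billing">
      <bill:nummer>A-10023</bill:nummer>
      <bill:betrag waehrung="USD">1250.00</bill:betrag>
    </bill:rechnung>

Neither party needs a universal invoice schema; each keeps its local
definition, and a mapping between the two namespaces is all the
interchange requires.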

Each time you turn an opaque system transparent, you induce a phase shift -
the system doesn't just become a more extreme version of what it was
previously, it tends to (sometimes catastrophically) rearrange the very
nature of the system. We're still going through the second order phase
shifts, but the third order phase shift is going to go pretty quickly. It's when
you start moving the transparency further up the pipe, so that it begins to
impact societal structures. HumanML is a third order phase shift of
XML. We are in essence attempting to codify the human/machine interface
here, either from the standpoint of helping a process (the application)
provide some kind of graphical interface/avatar to represent itself to the
person or from the standpoint of modelling the human interactions for use by
the processor.

I'm raising this point not to rant, but rather to point out that because it
is a third order phase shift, HumanML will have a very real impact upon
societal structures - laws, markets, education, work, entertainment. The
transparencies introduced here (and by most such technologies) mean that the
normal restraining effects that are induced by opacity in the system are
lost. I'm not by and large that concerned about governmental intrusion
(though I have to admit being very alarmed by what I'm seeing done lately)
but I do worry that we need to balance the needs of transparency against the
needs of privacy.

*To be human means having the ability to manipulate symbols, but to be
human also means that we are in turn very much manipulated by symbols.*

> A second problem is knowing if the driver
> is the owner.  And so it goes until enough
> facts are established to get a probable certainty
> which really means "legal certainty".  Consider that
> your city may already be using photographs to catch
> you running a red light.  Because we are "accuser
> must prove" society, your lawyer gets you out of that
> by inferring (doesn't have to prove) you weren't
> the driver.  Comes down to the judge.

For now. Circumstantial evidence is a funny thing. There is a very common
misconception in our society that a case which has only circumstantial
evidence is one that can be thrown out. In fact, most cases ultimately rely
on circumstantial evidence rather than on anecdotal -- eyewitness --
evidence, because it is far easier to fool people than not. A judge today
may rule that photographic evidence in this case is not admissible, because
there are several factors that can make such identifications questionable.
However, the veracity of such identifications is becoming easier to establish, and
the arguments against it often have more to do with situations that have
very little relevance to the crime. A judge a few days ago dismissed a
red-light-running case with more than 200 defendants, not because of questions about
the veracity of the evidence, but because the company that made the camera
systems that caught them received a kick-back for every case brought. In
other words, it came down to the ethical sense of the judge rather than the
circumstantial evidence; a different judge may very well have ruled
differently.

I guess what I'm saying here is that technology is making the criminal
enforcement system very efficient, but not necessarily any more fair,
because it is rendering the concept of "reasonable doubt" moot. The English
judicial system that ours is based upon hinges upon that concept, yet the
danger here is that transparency does not ensure truthfulness, only
accuracy. It is perfectly possible to prove beyond a shadow of a doubt that
the wrong person committed a crime.

Okay, I'm way off topic here.

> That is the right thing.  The wrong thing is to live in a
> state where if an owner does not respond quick enough, the tow and impound
> agency has the right to sell the car to recover costs.  Since
> the car has higher value than the tow/impound fee, they use
> all means possible to delay sending a notification.  So would
> you rather the tow/impound or the police to be responsible for
> the notification?  They have identical information.  (That is
> not made up.  It is a real condition in a real state.)

> The fact is, data is being collected about you by every
> means possible and by multiple agencies.  They don't pool
> it today.  In the very near future, they will and are.
> That is the dilemma of the web.  Shall we disconnect?
> Or shall we try to make sure we can at least follow
> the trail as best as we can?   The SW is a nightmare
> and that is why I wrote the Golem paper, to point
> out that the SW implementors have responsibilities.
> It is one thing to have the ontologies; it is quite
> another to assert that they are true.  The best we
> can do is constrain their authority.

I agree with you on that point, something that has bothered me both about
TBL's Semantic Web in the first place and Web Services, which combine some
aspects of SW with a centralized programming model. Individuals really have
no use for the SW, and in many respects it is inimical to them.
Correlation of data transparently means that it is possible to build
associations that may not have existed in isolation. Transparency of data
access means that the number of databases that can communicate with one
another grows exponentially. I see this with UDDI, I see it with Hailstorm
and Passport. I recognize full well that most of us already have extensive
public dossiers, but the one saving grace in all of this is that there are
currently semi-opaque semantic/ontological barriers that can
effectively only be crossed by human intervention. I worry that as we create
HumanML standards, we tear down those barriers and make the larger system
extremely transparent to those parties that definitely do not have our best
interests at heart.
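
To make that worry concrete (both records below are invented for
illustration), consider two records that are unremarkable in isolation:

    <!-- Hypothetical vehicle-registration record -->
    <registration plate="ABC123">
      <owner>J. Smith</owner>
      <address>42 Elm St.</address>
    </registration>

    <!-- Hypothetical camera log entry from a separate system -->
    <sighting plate="ABC123" time="2001-10-09T17:42:00"
              location="5th and Main"/>

Neither record says much on its own; but once both systems share a syntax
and the plate is recognized as the same field in each, the correlation
(where a particular person's car was at a particular time) falls out of a
trivial query.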

Unfortunately, I'm not sure I see a solution here. We need to be cognizant
of the issues, which is a big part of the point I'm circling (very
circuitously) around. We need to recognize that what may be good for a
business or government may not be good for the society as a whole. Call me
the civil libertarian of the group, but I fear sometimes that if we do not
recognize the social consequences of our actions as designers, we may
end up bringing about a future that none of us would want.

> The idea of the scenario real or imagined is to explore the
> domains and see which of our current applications could be
> applied.  A simpler scenario with multiple interfaces to
> other systems is demonstrative.  We can caveat it out of
> relevance but that won't help us.
>
> Because of stereotyping, HumanML is about more than identity.
> Bugs Bunny can have multiple instances.  Kurt Cagle can't.
> There can be many persons named Kurt Cagle and that is why
> identification is a process, not a name.  In PS systems,
> we always assume the person giving the name is lying because
> they often are.   A bank does too but not as much.  A grocery
> check out counter does too but not as much.  The question
> of how much we are willing to endure to achieve security is
> perennial in open societies.  I don't have an answer for that.
> I also don't have any credit cards. (True.)

Wise man.

I am not disputing it: identity and authentication are two very different
things. This is a natural consequence of the fact that we are creating
virtual or semantic models to represent real world objects. Real world
objects have uniqueness as a central characteristic. Virtual models do not
have uniqueness, but can only simulate uniqueness to some arbitrary level.
This is ultimately why no encryption mechanism will ever be even
theoretically perfect (I've not yet bought into the notion of quantum
computer encryption mechanisms, because to me no real-world system is ever
fully decoupled, which is one of the central tenets of quantum computing).


> Oddly enough, because he is trademarked, Bugs is always Bugs.
> For some of us, the fun application is enabling artificial
> personalities.  I mix that into the scenario because it is
> the one some of us like and is reasonably straightforward
> to apply without invoking all the paranoias HumanML is
> certain to invoke, but really, over conditions that
> already exist.  Big Brother was already in place by 1945.
> I am more afraid of Big Blabber.  So, Bugs it is.

Bugs Bunny is unique only as a legal entity, which I think raises a critical
issue. There is very much a distinction between a "real" person and his or
her legal entity. A legal entity is a model, albeit one that in theory
should have a key that defines it as being "unique". However, in practice,
that uniqueness is fairly arbitrary, and typically is not in fact due to
obvious representational characteristics. If I draw a picture of a long
limbed, gray and white anthropomorphic rabbit with long ears chewing on a
carrot as if it were a cigar, have I created Bugs Bunny? Is the legality due
to the fact that the person who does render this character is an employee or
a contractor to Warner Brothers? I guess what I'm saying here is that such
authority to define a legal entity is a function of the state, just as
issuing a driver's license or money is a function of the state. The state
issues a driver's license in theory as a measure of (minimal) competence, but
in fact its purpose is at least in part to create a legal entity called a
driver that can be mapped to a physical person.

Thus one of the key distinctions that needs to be enunciated in any HumanML
document is that a legal entity has an authority granting it "uniqueness",
whereas a non-legal entity does not. To get back to the driver model here
for just a second, if I create a HumanML avatar to represent the driver of
a specific car, should the model be such that the avatar is perforce a legal
entity as well? What is more important to the car -- that the authorized
people are allowed to drive the car, or that the car can configure itself to
a virtual dummy, a non-legal avatar, that retains characteristics but not
legal identity? I think this is where I was going with the scenarios
earlier, though I'm still articulating the concept even to myself.

To put it another way -- should HumanML include, either explicitly in its core
or implicitly via extension, a mechanism for creating legal identity?
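
Purely as a sketch (the element names below are invented for illustration
and do not come from any actual HumanML draft), the distinction might look
something like this:

    <human id="driver-profile-01">
      <!-- Non-legal avatar: characteristics without legal identity -->
      <avatar>
        <height unit="cm">183</height>
        <seatPosition>rearmost</seatPosition>
      </avatar>
      <!-- Legal identity: unique only because an authority asserts it -->
      <legalIdentity authority="StateDMV" assertedBy="driversLicense">
        <licenseNumber>D1234567</licenseNumber>
      </legalIdentity>
    </human>

The avatar is useful to the car whether or not it is bound to a legal
person; the legalIdentity element means something only because an external
authority stands behind it.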

Okay, this was way overlong (and I've spent far too much time writing this
when I should have been doing more productive work) but I think that it is
an issue. Comments?

-- Kurt Cagle




