Subject: RE: [huml-comment] RE: Apartment Clerk Informal Scenario
From: Dennis E. Hamilton [mailto:dennis.hamilton@acm.org]

> My first thoughts are to layer things, as you put out.
>
> 1. Take this example (even though it seems far-fetched in the present) into
> a future "likely story." Assume that there is some sort of appliance (maybe
> even an ear piece) used by the clerk. Simplify the idea of the technology
> and consider that an electronic advisor is actually used.

One reason to annotate information is so that it can be reused in multiple contexts, including machines and systems we have yet to create or envision. It is precisely this lifecycle aspect that is the prime reason to use markup; the notation is what makes that reuse possible. For example, an electronic advisor delivered through an ear piece might use VoiceXML or even MPEG files, but the metadata that enables a selector to choose the right file to present would be HumanML.

> 2. Look at the layers between this likely-story situation and the nature of
> the application and where the database or knowledge base that is drawn on
> comes into it.

HumanML-derived languages are likely, in this scenario, to be annotation languages: they describe the instance at hand in terms of that metadata. Consider the woodland troll. We know it is a troll; what do we need to know to work out, first, that it is a woodland troll, and then what information is stored that is generic to all woodland trolls as opposed to, say, mountain trolls? Such information may include artifacts of dress, habits or gestural types, different haptic values used for intimate vs. business communication, and so on. If a woodland troll meets a mountain troll and wants to woo her, it can be awfully useful to find out about the food allergies of mountain trolls and their family relationships (e.g., are mountain trolls paternal or maternal? Figure out which sex inherits wealth and which parent to ask for her green hand in marriage).

> 3. Look for what is essential about HML being used down there, and how is
> it used and by whom.
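To make the selector idea concrete, here is a minimal sketch. Everything in it is hypothetical: the metadata keys (`channel`, `register`), the file names, and the `select_rendering` function are invented for illustration, since HumanML itself only supplies the descriptive categories, not this vocabulary.

```python
# Hypothetical sketch: a selector that uses HumanML-style metadata to pick
# the right rendering of the same content for a given device and situation.
# The keys and values below are invented for illustration only.

RENDERINGS = [
    {"file": "greeting.vxml", "format": "voicexml",
     "channel": "audio", "register": "formal"},
    {"file": "greeting.mpg", "format": "mpeg",
     "channel": "audio-video", "register": "formal"},
    {"file": "greeting-casual.vxml", "format": "voicexml",
     "channel": "audio", "register": "casual"},
]

def select_rendering(renderings, channel, register):
    """Return the first rendering whose annotations match the situation."""
    for r in renderings:
        if r["channel"] == channel and r["register"] == register:
            return r["file"]
    return None

# An ear piece is audio-only; a business transaction calls for a formal
# register, so the VoiceXML rendering is chosen over the MPEG one.
print(select_rendering(RENDERINGS, channel="audio", register="formal"))
```

The formats (VoiceXML, MPEG) do the presenting; the HumanML-style annotations only describe, which is what lets one selector serve devices that do not exist yet.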
HumanML, in the primary schema, is just a set of categories of information types that influence human communication. We can apply these to humans directly and, by analogy, to trolls. So it is just metadata; how it is used, and by whom, can vary greatly. But let's say anthropologists keep a database on trolls the same way the US CIA keeps online factbases about current nations. If the data has HumanML annotations, a program could conceivably do such things as create a prototypical troll, then a woodland troll, or even a woodland troll from Hyborgea in the 8th Age of Zork, without having to do all that research for itself, simply by querying the public database. I'm making this up, but the point is that the HumanML categories provide contexts for specializing the human. Again, it works only as well as the category values collected are correct. The HumanML categories themselves, what is in the primary, are a "theory". We are pounding on them to see if we have consensus that the "theory" itself is what is needed. They should accord well with semiotics theories in the broad sense.

Let's take another example. I am a marketeer for a European company that wants to create a marketing campaign for the American South. Where can I go to get information that will help me select the right texts, images, dress styles, and even current-event controversies that will project a positive image in terms of the target market's culture? Further, can I tune that even more specifically to a smaller region? Can I avoid stereotypes when I do this? Can I usefully apply stereotypes without being offensive? Tricky stuff, but having annotated samples can help one choose. Consider the problem of the western observer and the Indian males who hold hands. Is that a gesture I want to use in Bombay, or is it only a habit of, say, smaller villages but not practiced in the larger cities, or does it vary between northern and southern India? A HumanML database that has geotemporal attributes helps.
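The "prototypical troll, then a woodland troll" step can be sketched as layered category defaults. This is a toy, not HumanML: the database contents, field names, and the `profile` helper are all invented here, purely to show how annotated categories let a program specialize a general profile without redoing the underlying research.

```python
# Hypothetical sketch: specializing a profile by merging HumanML-style
# category annotations, most specific category last.  All data below is
# invented for illustration.

TROLL_DB = {
    "troll": {"diet": "omnivore", "greeting_gesture": "bow",
              "inheritance": "unknown"},
    "woodland troll": {"diet": "herbivore", "dress": "bark and moss"},
    "mountain troll": {"diet": "carnivore", "inheritance": "maternal"},
}

def profile(db, *categories):
    """Merge category annotations into one profile; later entries win."""
    merged = {}
    for cat in categories:
        merged.update(db.get(cat, {}))
    return merged

# The wooing woodland troll queries the database about mountain trolls:
suitor_view = profile(TROLL_DB, "troll", "mountain troll")
print(suitor_view["inheritance"])  # maternal: ask her mother, not her father
```

A geotemporal refinement would just be another, more specific layer in the same merge (e.g. a hypothetical "mountain troll, Hyborgea, 8th Age" entry overriding the generic values), which is the same move the marketing example needs for northern vs. southern India.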
Think of all the movies you've seen where an alien lands on our planet dressed in the clothing and speaking in the styles of the 1950s, because their sources, our TV broadcasts, arrive with speed-of-light time delays. If you go to Nebraska, a Canadian loonie is not legal tender, but if you are in, say, Maine, a vendor may accept it because you are close to the border. (Boundary conditions are especially interesting.)

> That is, we are navigating some layers of abstraction to get to how HML at
> (3) [what is in some sense the likely direct use, as a plausible story] is
> "worked" to contribute to the situation at (1) and exactly what is the gain.
> One key question I then have is what is the point and purpose and value of
> interoperability at that level?

It is as valuable as the theory is descriptive of what is actually observed. In other words, ontologies are theories about things. One commits to the ontology (agrees to its semantics) and applies it. If it proves too expensive, too fuzzy, or just plain wrong, it's time to get a new theory. But without HumanML as currently conceived, we do not have a stake in the ground, a hypothesis to test. This is true of XML Schemas in general: they are theories about documents.

len