Subject: [humanmarkup] Notes on Requirements
- From: Rex Brooks <firstname.lastname@example.org>
- To: email@example.com, firstname.lastname@example.org
- Date: Sun, 10 Mar 2002 15:30:49 -0800
Notes on Requirements
In the course of working with the WSIA on their Requirements Gathering
Process, and comparing it to the work products of other TCs in OASIS,
not all of which include Requirements or Glossaries in their work, I
have come to several conclusions. Some of these conclusions I will
pass along to OASIS. I doubt my conclusions will meet with universal
acceptance or agreement, but I thought I would share them here.
I will try to order these conclusions by importance.
1. HumanML and TC-Specific Element/Attribute Names and
Glossaries: One way to ensure that our distinct vocabularies do
not conflict with vocabularies that use the same terms with different
specific meanings is to take advantage of namespaces by adding the TC
and/or Human Markup Language Namespace acronym as a hyphenated prefix
to the terms we use for Elements/Attributes.
Thus for HumanMarkup, our term for the emotion of sorrow would be
huml-sorrow. It would be included in our schemata in a consistent way,
and also in the documents in our namespace. Specifically, it would
appear in our glossary both as huml-sorrow and as plain sorrow, the
latter citing a public resource namespace for a standard definition.
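As a sketch of how the prefixed naming might look in a schema, the
following fragment declares huml-sorrow in a HumanMarkup target
namespace. The namespace URI and type here are illustrative
assumptions, not settled TC decisions:

```xml
<!-- Illustrative sketch only: the namespace URI and the element's
     type are assumptions, not agreed HumanMarkup definitions. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:huml="http://www.oasis-open.org/humanmarkup"
           targetNamespace="http://www.oasis-open.org/humanmarkup"
           elementFormDefault="qualified">
  <!-- The hyphenated prefix is carried in the local name itself,
       in addition to the XML namespace the schema declares. -->
  <xs:element name="huml-sorrow" type="xs:string"/>
</xs:schema>
```

An instance document in our namespace would then carry the term as,
for example, <huml:huml-sorrow>, so the hyphenated prefix survives
even where the namespace prefix is stripped or rewritten.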
(I intended to cite the OED, but a quick search revealed that a yearly
license for this service is $795, so I decided against it. I believe
they may have a problem maintaining their position as the definitive
resource for the English language under those terms, but that is
another matter.)
This should make application development easier and less messy, and
make APIs easier to create in ways that will ensure interoperability
with other HumanML apps.
This is the cheap and easy way to ensure that our XML vocabularies do
not conflict with those of other languages, and to ensure that our
profiling information compilations are clearly demarcated for official
purposes. It means that we can establish explicit accountability in
identification, authorization, and certification usages that employ
our schemata, as accessed through our namespace and through any
repositories claiming HumanMarkup compliance, in a way that NIST can
easily verify.
2. For HumanMarkup we need to modularize our Requirements
Documents practices as well as our language schemata. This means
that we need separate requirements, use-case based where possible or
feasible, for each of our projected TC-developed schemata. When I say
Requirements documents practices, what I mean is how we collect,
document (verb), organize, winnow or narrow-down our focus, and write
Requirements Documents as blueprints for developing schemata.
2.1. We can streamline the process of arriving at useful
Requirements by setting a Requirements Procedure for our
Requirements Documents practices. I know this sounds redundant, but
there is a difference between the two.
Having a procedure to follow, like a checklist, guarantees that
certain important elements are included in all of our Requirements
Documents by default when examining the target territories, such as
Human Physical Characteristics Descriptions for the Human Physical
Characteristics Description Markup Language.
However, having a procedure does not prevent inclusion of lesser-known
aspects of a target territory that do not meet the first-level
procedural criteria for consideration. One such aspect might be
Non-Biological Virtual Human Physical Characteristics, which could be
needed to distinguish clearly which apparently Human digital
representations in a given digital environment are actually agents,
rather than currently interacting Human users, if that were deemed a
significant requirement.
(This is an actual consideration, which is why I mention it. I don't
have an answer because it hasn't been investigated yet, and it may not
need to be when we look at the HPCDML Requirements later. This is only
an example.)
2.2 In addition to a Requirements Procedure checklist, we might
want to consider certain specific requirements that we may want or
need to specify concerning how HumanMarkup Data can be
accessed, by whom, and when, in terms of Transport Protocols
and Event Mechanisms. It is uncertain at present whether this
will become an issue. However, if we do it at all, we should do it for
all of our specifications.
My personal feeling is that ducking this issue, as almost all other
TCs do, along with the IPR issues around who owns the data (and when
ownership is transferred) that will be looked up through UDDI or RDF
or some Convention/Architectural Style and then transmitted through
SOAP-RPC or some Transport Protocol Mechanism that interoperably
enables consistent Web Services, is the only way to make progress in
getting practical work done. That will probably result in an
accumulation of best practices that eventually get codified...somehow.
I don't like it, but it looks inevitable, and it has the great virtue
of allowing us to feel some sense of achievement in the meanwhile.
However, since we are something of an anomaly in OASIS as is, we
really have nothing to lose by playing the foil to this collective
head-burying behavior. Just thought I would mention it. Feel free to
disagree.
3. We should seek to declare the minimum number of Elements and
Attributes necessary. We should probably not get extensive in our
Requirements for the Basic Human Markup Language Schema, but we should
concentrate on making sure that it includes the necessary building
blocks to use in making the downstream schemata for the application
areas we have been reviewing.
This means that the important Elements of simpleType and complexType
for our application areas need to be included, and only those should
be included in the Basic Schema. Therefore we need to justify each
Element and Attribute from the application areas rather than from the
basic set represented in the Schema Toolkit (which may need to be
reduced to terms more appropriate to subsequent schemata).
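To illustrate the simpleType/complexType distinction as building
blocks, here is a hedged sketch. The names intensityType and
emotionType and the 0.0-1.0 range are my illustrative assumptions,
not proposed HumanMarkup definitions:

```xml
<!-- Illustrative sketch only: names and value ranges are assumptions. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- A simpleType building block: a constrained value that
       downstream schemata can reuse rather than redefine. -->
  <xs:simpleType name="intensityType">
    <xs:restriction base="xs:decimal">
      <xs:minInclusive value="0.0"/>
      <xs:maxInclusive value="1.0"/>
    </xs:restriction>
  </xs:simpleType>
  <!-- A complexType building block: text content carrying an
       attribute typed by the simpleType above. -->
  <xs:complexType name="emotionType">
    <xs:simpleContent>
      <xs:extension base="xs:string">
        <xs:attribute name="intensity" type="intensityType"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
</xs:schema>
```

The point of the sketch is that the Basic Schema would publish only
such named types, and each application-area schema would derive its
own Elements from them.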
In particular I am thinking that the physical descriptors we have
should be reduced such that any which can be derived as a subset can
be set aside for now. That overall area, except for those areas which
are congruent with the top level of existing systems such as VRML,
CAESAR, Basic Law Enforcement, HR-XML, and Basic Medical Description,
ought to be left for the Human Physical Characteristics Description
Markup Language. I will attempt to clarify that in a few days at most,
as I intend to work up a first draft, straw man edit of our existing
HM.Requirements document with further explanations.
3.1. Our Basic Human Markup Language Schema should be thought
of as solely those Elements and Attributes which are required for
minimum interoperability with existing systems and which can be used
to build our modular schemata.
So, one of our tasks in this first round of requirements is to define
what the next most important application area schemata will be, and
offer grouping terms for these application areas. For example,
Conflict Resolution Applications Schema, Diplomatic Negotiations
Applications Schema, Artificial Intelligence Agents Schema, etc. This
is the only set of Elements and Attributes that I think we need to add
to a basic set.
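A modular application-area schema could then import the Basic Schema
rather than redefining its terms. In this sketch, the URIs, the
schemaLocation, and the huml-mediationSession and huml-sorrow names
are all illustrative assumptions:

```xml
<!-- Illustrative sketch only: URIs, file names, and element names
     are assumptions, not proposed HumanMarkup definitions. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:huml="http://www.oasis-open.org/humanmarkup"
           targetNamespace="http://www.oasis-open.org/humanmarkup/conflict-resolution"
           elementFormDefault="qualified">
  <!-- Reuse the Basic Schema's building blocks instead of redefining them. -->
  <xs:import namespace="http://www.oasis-open.org/humanmarkup"
             schemaLocation="humanmarkup-basic.xsd"/>
  <xs:element name="huml-mediationSession">
    <xs:complexType>
      <xs:sequence>
        <!-- Assumes huml-sorrow is declared in the Basic Schema. -->
        <xs:element ref="huml:huml-sorrow" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Each application-area schema would get its own target namespace under
the HumanMarkup namespace, keeping the modules clearly demarcated.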
Those are the main issues that struck me as I have gone along
observing this process in more than one TC.
I will try to get the straw-man Requirements Document Update out by
Wednesday. And a short reminder: however far along you may be, we need
these lists of ours submitted, along with any application-area
scenarios that could be developed into formal use-cases to support our
first formally derived Requirements Document, by this coming Friday,
March 15, 2002.
GeoAddress: 1361-A Addison, Berkeley, CA, 94702 USA, Earth