[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Comments on submitted requirements presentations.
Hi SET TC,

1. A public chat room that can be started easily and allows note taking for TCs is available at http://webconf.soaphub.org/conf/room/set [the room is dynamically created from the value in the path following “room”]. It is a convenient way to share with others in the group URLs for web-accessible information relevant to a discussion.

2. Is there a web browser page that allows access to the presentations being made? I worked from home on the last call, but NAT still seemed to prevent access. I tinkered with some ports but had no success. Tomorrow I must be at work during the call, where there is a nasty HTTP proxy and few, if any, ways to open up TCP or UDP ports.

If time does not permit, perhaps responses to the questions posed in the comments could be posted to this mailing list.

Thanks,
Dale Moberg

ENEA: experience in message-based interoperability; tools
for semi-automatic mapping and support for its documentation.
X-Lab has set up a tool to automatically generate a UBL 2.0 ontology that semantically describes processes, activities, and UBL documents in terms of structural components and their relationships.
Open issue: a well-accepted language to define mappings and related rules.

<Comment> The stated issue is unclear to me: are the mappings of interest between
two variants of UBL documents, such as a document A1 in UBL-SBS and A2 in
UBL-NES? Or between UBL documents and, say, OAGIS BODs or GS1 XML? I am aware of limited standards applicable to the topic of business
document translation. One OMG initiative is QVT; see http://en.wikipedia.org/wiki/QVT
which begins: Model transformation is the process of
converting a model Ma conforming to metamodel MMa into a model Mb conforming to
metamodel MMb. If MMa=MMb, then the transformation is endogenous, otherwise it is an exogenous transformation. Model
transformation is a critical component of model-driven architectures (MDA).
Recognizing this, a Request for proposal (RFP) has been issued by OMG on MOF
Query/View/Transformation to seek a standard compatible with the MDA
recommendation suite (UML, MOF, OCL, etc.). But should a business document be thought of as an M1-level model? Would the M2 level be an XSD schema, for example? OCL might be compatible with “description logic” semantics such as
those expressed in flavors of OWL. But CCTS and CCMA contain various UML/MOF-style M2 models of documents
also. I know the convener has expressed a preference for technologies, but
some philosophical preliminaries might help some of us contribute. I can see
the value of formulating constraints that translations from one BIE to another
should satisfy. I am not certain that those constraints would result in a
constructive procedure producing a translation, which is probably part of what
the “map” requirement involves. I think the constructive procedure
sense of map is what would have the most value to ETL and similar tooling used
currently in business collaboration software environments. In general, both OMG and W3C (and probably others) are producing
standards relevant to SET. I think I need to see how our deliverables differ from, enhance, or otherwise relate to what is currently available. (The Wiki has a
list of mainly Eclipse-style tools implementing portions of QVT, for example.) </Comment>

A Proposal for SET TC Requirements
● The semantics to be defined should serve the intended purpose – not any semantics the document component may have, but the minimum amount of semantics needed to facilitate discovery, reuse, and translation
● The extracted semantics should be expressible in any ontology language
● But the ontology language should be OWL, since there are many tools that can be used

<Comment> I like the idea of relating the knowledge
(semantics) to be represented in terms of the basic computational tasks that we
are intending to support. Discovery is constrained, in part, by the
search query (contrast Google keyword search versus UDDI tModel search) as
well as the problem situation. The dominant way for collaboration communities
to find out what they need is by asking others what they use or can support or
want to work on in an industry standardization effort! I think we should put discovery at a
lower priority in our work. Reuse: what are the things that are to be
reused? What business or collaboration usages or tasks are relevant? I need
a little more expansion here to understand how reuse is useful as a constraint
on the alternative approaches to semantics. Translation: Are we trying to create a
library of reusable maps? Or of generators of maps? Or map
checkers/testers/validators? Etc. Again, the translation task needs analytical
refinements to more specialized subtasks to provide useful constraints, IMO. </Comment>

How Semantic Interoperability using OASIS SET TC Will Improve Collaboration of eGovernment Applications
Why (semantic) interoperability is an issue:
● G2C, G2I, G2B (and vice versa) and G2G on different federal levels
● Different federal levels: country, community, other countries, other hierarchy levels
● Lots of different systems, lots of interfaces
● Lots of entities which are “similar” but not the same

<Comment> Is interoperability as a problem here an
“optimization” problem or a “satisficing” one (“good enough for government work,” as we say)? </Comment>

HUT SoBerIT Requirements Proposals for OASIS
SET TC
● Different wellbeing service providers and systems: how to achieve semantic interoperability between wellbeing service providers?
● Collaboration between wellbeing service providers and citizens: how to map concepts and semantics between professionals and citizens?
● How to use citizen-created vocabularies (“folksonomies”) for citizens and professionals?

<Comment> I think folksonomies are an interesting theoretical area. I
am wondering whether that sort of cognitive / natural-language interface area is
the place to start, however. Even with the somewhat “technical” and circumscribed language of
business transaction request and response used in EDI and automated business
interactions, we are still at the edge of what “might be” ready for
standardization! I really think we need to remember that a standards effort
aiming at some engineering utility needs to be well out of the research
academic environment and ready for the lower-level textbooks. </Comment>

Providing Semantic Support for CCTS Context Domains
● What standard to use for the OHW LifeManager document repository?
● How to support the dual model?
● How to manage different ontologies?
Providing Semantic Support for Customization of Core Components and Business Document Schemas
● Document schemas for LifeManager documents, business documents for citizens and professionals
Providing Semantic Support for Document Translation
● Translations: B2C, C2B, B2B, B2C2B

<Comment> Interesting, but I hope we can try to get a solid technical basis going
first and then go for these more elusive areas. </Comment>

SET TC Initial Presentation
● Providing Semantic Support for CCTS Context Domains
● Providing Semantic Support for Customization of Core Components and Business Document Schemas
● Providing Semantic Support for Document Translation

<Comment> Is Reuse, as mentioned earlier, connected in your view
with Customization? Context Domains: I see that there might be a combinatorial issue: 300 mutually exclusive codes for each of the eight categories gives 300^8 = 65,610,000,000,000,000,000 combinations (sixty-five quintillion, six hundred ten quadrillion in the US short scale; sixty-five trillion, six hundred ten billiard in the long scale). However, I hope that we are not posing the problem in such a way that
we have to iterate over each possible permutation. In terms of problem solving
tasks, what problem(s) involving CCTS context domains needs semantic support,
and how will that representation of semantic knowledge help solve the
problem(s)? The last support issue also needs explication. </Comment>

A Requirements Proposal
There are also variations in context scheme (CCTS, UCM, etc.):
D1. Namespaces (may vary or may be the same)
D2. Models
D3. Core Components
D4. Core Components Harmonization Group (private, TBG17, organization, etc.)
D5. Underlying syntax (XML, ASN.1, EDI, etc.)
D6. Variations in basic datatypes (and code lists)
D7. Naming and Design Rules (UBL, ATG2, etc.)
D8. Context / Purpose (D8.1, D8.2, etc.)
D9. Context Scheme
Concern with OWL supporting knowledge base maintenance (dolphin fish gill example)

<Comment> I like this inventory of the sources of variability,
and it helps me get closer to thinking in terms of encoders and decoders,
automagically produced. However, could you explain what a harmonization group’s goals
and results look like a bit more? How does it impact producing Models, for
example? How does it relate to variations in D7? or D6? Isn’t harmonization
just some approach to relating variations along the other dimensions? If not (and I suspect it is not the same, though I have not been on such a group), can you
clarify? I think your points about OWL and versioning and
inconsistencies are fair, but I cannot imagine any formalism that could avoid such
problems unless it disallowed negation in any form... I am less worried about
the formalism than about the kind of knowledge that is to be put into it. For example, are semantic constraints ones that ensure
the values of data exchanged are understood by computational processes of
either party in the “correct way” to ensure proper business interaction (so
that a “container” of a product is not understood to be a bottle on one side,
and an ocean-going steel shipping box on the other)? Or is our semantic model an
ontology of the “document,” understood as a bunch of aggregated BCCs and ASBIEs
etc? How do we decide which kind of semantics is needed for translational
fidelity? Or do we have reasons why following the constraints in terms of composition
out of BCCs and so forth must also promote correct business interaction? </Comment>
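The “container” ambiguity above can be made concrete. The following sketch (all field names and code lists are invented for illustration; nothing here is from the presentations) shows how a purely structural field-to-field map can succeed while the value-level semantics diverge:

```python
# Hypothetical illustration: a structural translation can be well-formed
# while value-level semantics are lost. All code lists and field names
# below are invented for this sketch.

# Source side: a packaging code list where "CT" denotes a carton.
SOURCE_PACKAGING = {"CT": "carton", "BO": "bottle"}

# Target side: a transport-equipment code list where "CN" denotes an
# ocean-going shipping container.
TARGET_EQUIPMENT = {"CN": "shipping container", "TR": "trailer"}

# A purely structural map: field name to field name, values copied verbatim.
STRUCTURAL_MAP = {"PackagingCode": "EquipmentCode"}

def translate(doc: dict) -> dict:
    """Rename fields according to STRUCTURAL_MAP; values pass through unchanged."""
    return {STRUCTURAL_MAP.get(key, key): value for key, value in doc.items()}

def value_semantics_preserved(src: dict, dst: dict) -> bool:
    """Check that the meaning of the value survived, not just the field name."""
    src_meaning = SOURCE_PACKAGING.get(src.get("PackagingCode"))
    dst_meaning = TARGET_EQUIPMENT.get(dst.get("EquipmentCode"))
    return src_meaning is not None and src_meaning == dst_meaning

source_doc = {"PackagingCode": "CT"}          # "container" meaning a carton
target_doc = translate(source_doc)            # structurally valid translation
print(value_semantics_preserved(source_doc, target_doc))  # prints False
```

A constraint of this value-level kind is what “translational fidelity” in the first sense would require; a constraint stated only over the composition of BIEs would accept this translation.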