

Subject: Re: [set] The topic of the next meeting...


Dear Stephan,

What Yalin does is the following:

1. Generate OWL ontologies from currently existing classifications such as
UNSPSC, SITC, ISIC, etc.
2. Align these ontologies by specifying "equivalent class", "subclass" and
"union of" relationships among them.
3. Then compute the inferred ontology through a reasoner. This is done only
*once* as long as the ontologies remain the same. If they change, redo the
alignment for the changed classes and recompute the inferred ontology.
The inferred ontology remains valid until the involved ontologies change.
The question is how often UNSPSC changes, and if it changes, can you
ignore such changes?
4. Annotate CCs and BIEs with the nodes of this inferred ontology to give
them context. The inferred ontology will reveal to you all the other
explicit or implicit relationships this CC or BIE has with other context
nodes.
5. Use this extra information for the discovery of CCs and BIEs for
customization and reuse.
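Steps 2 and 3 above can be sketched as follows. A real system would emit OWL axioms and run a description-logic reasoner; here a hand-rolled transitive closure over subclass and equivalence edges stands in for the reasoner, and the class names are illustrative rather than actual UNSPSC/ISIC codes.

```python
# Sketch of steps 2-3: align class hierarchies, then compute the inferred
# hierarchy. A simple fixed-point closure stands in for a DL reasoner.

def inferred_hierarchy(subclass_of, equivalent_to):
    """Return the full superclass set of every class.

    subclass_of:   dict mapping class -> set of direct superclasses
    equivalent_to: set of frozensets of mutually equivalent classes
    """
    # Treat equivalence as subclassing in both directions.
    edges = {c: set(supers) for c, supers in subclass_of.items()}
    for group in equivalent_to:
        for a in group:
            for b in group:
                if a != b:
                    edges.setdefault(a, set()).add(b)
    # Fixed point: propagate superclasses until nothing changes.
    closure = {c: set(s) for c, s in edges.items()}
    changed = True
    while changed:
        changed = False
        for c, supers in closure.items():
            new = set()
            for s in supers:
                new |= closure.get(s, set())
            new -= {c}  # a class is not its own strict superclass
            if not new <= supers:
                supers |= new
                changed = True
    return closure

# Two toy classification ontologies aligned by one equivalence:
sub = {"unspsc:Seeds": {"unspsc:Agriculture"},
       "isic:CropGrowing": {"isic:A_Agriculture"}}
eq = {frozenset({"unspsc:Agriculture", "isic:A_Agriculture"})}

inferred = inferred_hierarchy(sub, eq)
# The alignment lets inference cross ontology boundaries:
print("isic:A_Agriculture" in inferred["unspsc:Seeds"])  # True
```

This is exactly why the inferred ontology only needs recomputing when the input ontologies or alignments change: the closure is a pure function of them.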

I hope it is clear now.

Best regards,

Asuman

Stephen Green wrote:
>
> Maybe I should have written this up better - perhaps
> as a position paper or something. Apologies.
>
> I've been reading through Yarimagan's thesis:
> "Semantic Enrichment for the Automated ..." 03/2008
>
> http://www.oasis-open.org/apps/org/workgroup/set/download.php/28845/20080408Yalin_Yarimagan_tez.pdf
>
> and it is compelling reading, but it is focusing my
> thoughts on the requirement I tried to describe
> previously, which I now realize is that of 'Governance' or,
> in other words, 'Change Control'.
>
> This is especially exemplified as a requirement in
> pages 43-45 where starting in p43 paragraph below
> Fig 3-9 it reads
> "Once different ontologies are aligned by specifying
> all such correspondences, a description logic
> reasoner computes the inferred class hierarchy for the
> context domain" and then later in page 45, second
> paragraph it reads similarly
> "Once the inferred ontology representing the
> Industrial Classification domain is computed, classes
> from that ontology are used for annotating customized
> UBL components to express their context...".
> This prompts some comments to exemplify the need
> for change control and other such governance:
> 1. Is it ever the case that "all such correspondences"
> are specified? OWL, as far as I know, doesn't allow
> us to say "no more specification for this ontology", so
> the ontology can evolve forever. This may be fine for
> a semantic web but presents a huge problem for B2B
> interoperability, in that interoperability requires us to
> say: this *is* the ontology to be used in this trading
> scenario. Therefore it cannot be allowed to change
> without strict control.
> 2. The example taken on page 42 and following is a
> key one to further clarify my point - NAICS code 11
> Agriculture, Fishing and Hunting which corresponds,
> the example reads, to ISIC codes A - Agriculture and
> Forestry and B - Fishing. The ontologies allow this
> correspondence to be specified. Now what happens
> if the NAICS codes ever change? The correspondence
> may have to change. Here there is a requirement
> which OWL as it stands isn't yet designed to meet:
> that of removing an old, obsolete or erroneous
> specification of some of the logic, like a correspondence.
> Even if we aren't yet at the stage of evaluating the
> tools for the work, there are requirements to capture
> here:
> - to be able to remove or replace an old or incorrect
>   ontology
> - to be able to say that a given ontology is fixed as
>   a particular agreed version
> - to provide other version and change control as
>   typically required for legally defined B2B exchanges
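The requirements just listed can be sketched as data-level change control around the ontologies. This is a minimal illustration, not part of the thesis: the `AlignmentRegistry` API and version labels are hypothetical, and it only shows that removal and fixed versions are straightforward in tooling *around* OWL, even though OWL itself has no such operations.

```python
class AlignmentRegistry:
    """Hypothetical version-controlled store of ontology correspondences."""

    def __init__(self):
        self._versions = {}  # version label -> correspondence mapping
        self._frozen = set()

    def publish(self, version, correspondences):
        # A published version becomes read-only: under change control a
        # revision is issued under a new label, never edited in place.
        if version in self._frozen:
            raise ValueError(f"version {version!r} is fixed")
        self._versions[version] = dict(correspondences)
        self._frozen.add(version)

    def retire(self, version):
        # Unlike OWL itself, the tooling can remove an obsolete alignment.
        self._versions.pop(version, None)
        self._frozen.discard(version)

    def lookup(self, version, code):
        return self._versions[version].get(code)


reg = AlignmentRegistry()
# The thesis example: NAICS 11 corresponds to the union of ISIC A and B.
reg.publish("2008-07", {"naics:11": ("unionOf", ["isic:A", "isic:B"])})
print(reg.lookup("2008-07", "naics:11"))  # ('unionOf', ['isic:A', 'isic:B'])
```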
>
> Maybe this just needs some time waiting for future
> OWL work to include such capabilities or maybe it just
> needs some intermediate use of another ontology
> language. Maybe it just needs there to be a cut-off
> point in specifying correspondences, etc and at that
> point the generation of an artefact which is more
> readily change-managed and versioned. Say artefacts
> could be generated at this point and these become the
> official fixed version - say genericode, CAM, etc. The
> ontologies then carry on evolving but under strict
> control and with a way to take out anything like an
> old, obsolete ontology and replace it in a way OWL
> doesn't support but tools which use OWL might support.
> Then at another fixed point the artefacts are again
> generated and these are published, but the ontology
> itself remains read-only to end users other than
> maintainers. There has to be a way to fix it in time
> and space and control updates, etc. I think so.
>
> Best regards
>
> Stephen Green
>
>
> 2008/7/15 Stephen Green <stephen.green@documentengineeringservices.com>:
>   
>> These are key functional requirements.
>>
>> Another area of functional requirements which may
>> be orthogonal to these is the matter of context and
>> context-driven interoperability. I think we have plenty
>> of existing coverage of these requirements such as
>> your excellent thesis work. I would just add that there
>> needs to be an attempt to fit the context-driven
>> architecture (I guess a specialisation of model-driven
>> architecture) into the 'dimensions of variability' use
>> case scenarios I mentioned such as subsets, levels,
>> NDRs, etc. How do these dimensions relate to the
>> context dimensions?
>>
>> Then I think we also need to consider what might be
>> described as non-functional requirements such as:
>>
>> 1. the final architecture needs to work alongside or
>> within the realm of the technology used to specify
>> the original and customised documents - namely -
>>  a) specifications/standards
>>  b) conformance clauses and conformance testing
>>  c) associated testing and assertions artefacts such
>>     as formal models (abstract and concrete), XML
>>     schemas (and EDI and ASN.1 equivalents), codelists
>>     such as genericode, test assertions and schematron
>>     assertions, rules, etc.
>>  d) test suites, monitoring technologies and general
>>     software and frameworks such as ebXML and
>>     proprietary products
>>  e) repositories and databases with their registries and
>>     indexes which store and retrieve the above artefacts
>>     and any artefacts associated with SET deliverables
>>     or architectures (e.g. if ontologies are used it might
>> be that there are use cases where they have to be
>>     of a kind which can be stored in and retrieved from
>>     databases which already link to SOA repositories)
>>
>> 2. There is clearly a link to semantic web concepts
>> within the charter of SET and this leads to a further
>> non-functional requirement which follows on from
>> 1. above: The architecture for interoperability has to
>> support determinism. It has to be possible for two parties
>> in a collaboration to make the same identical
>> conclusion about what mappings should be made. This
>> puts a constraint on the technology and algorithms
>> used because it must be possible to derive the same
>> assertions from all conformant ontologies. Add to this
>> that it must be possible to extend and deprecate
>> existing ontologies: A versioning strategy must be
>> achievable such that changes in version still maintain
>> the rule above that either party can determine the
>> same results when using conformant algorithms acting
>> on conformant ontologies and sets of ontologies.
>> (Does this lead to a requirement to have a closed
>> list of ontological artefacts? I think it does, such that
>> deprecated ontologies can be removed and new ones
>> added or existing ones changed such that the outcome
>> of applying any reasoning is always deterministic and
>> always the same for any conformant implementation.
>> As such I would want to question how appropriate
>> web ontologies are as yet, where OWL etc. seem to
>> support a high degree of openness - any ontology is
>> valid as long as it doesn't cause contradictions and
>> no ontology can be removed. This seems to conflict
>> with this non-functional use case.)
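The determinism requirement described above can be illustrated with a minimal sketch: if the set of ontology artefacts is closed and versioned, each party can compute a digest of a canonically ordered serialisation and verify byte-for-byte agreement before relying on the mappings. The axiom strings below are illustrative shorthand, not an actual OWL serialisation.

```python
import hashlib
import json

def canonical_digest(axioms):
    """Digest of a canonically ordered serialisation of a set of axioms."""
    canonical = json.dumps(sorted(axioms), separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

party_a = ["subClassOf(isic:CropGrowing, isic:A)",
           "equivalentClass(naics:11, unionOf(isic:A, isic:B))"]
party_b = list(reversed(party_a))  # same closed set, received in another order

# Both parties derive the same result from the same conformant set:
print(canonical_digest(party_a) == canonical_digest(party_b))  # True
```

An open ontology set breaks this guarantee: if either party can add or lose an axiom, the digests diverge, which is the point of requiring a closed, versioned artefact list.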
>>
>> Best regards
>>
>> --
>> Stephen D. Green
>>
>> Partner
>> SystML, http://www.systml.co.uk
>> Tel: +44 (0) 117 9541606
>> Associate Director
>> Document Engineering Services
>> http://www.documentengineeringservices.com
>>
>> http://www.biblegateway.com/passage/?search=matthew+22:37 .. and voice
>>
>>
>> 2008/7/15 Stephen Green <stephen.green@documentengineeringservices.com>:
>>     
>>> An obvious requirement is for interoperability
>>> between subsets.
>>>
>>> Main use cases:
>>>
>>> 1. One or more subsets have a difference of namespace to the other subsets
>>>
>>>  a. there is a model to which all the subsets conform
>>>
>>>   i. there is a set of core components for all of the model
>>>
>>>   ii. there are core components for some of the model
>>>
>>>   iii. there are no core components for this model but
>>> there are core components for another model (perhaps
>>> an earlier version of the model) which is closely related
>>> to the subsets' shared model
>>>
>>>  b. at least one subset is based on a similar but not
>>> identical model to the model on which the other
>>> subsets are based
>>>
>>>   i. all models have core components
>>>
>>>   ii. some but not all models have core components
>>>
>>>   iii. no models have core components
>>>
>>>
>>> 2. All subsets have the same namespace and there is
>>> a single set of schemas to which each document
>>> conforms
>>>
>>> Same a, b i,ii,iii as above - in other words these use
>>> cases form a table of three dimensions. Each
>>> dimension could map to a 'dimension of variability'.
>>> (I won't attempt to create and populate such a table
>>> because the dimensions need to grow beyond just
>>> three. Dimensions of variability is a concept I'm
>>> familiar with from the work of the OASIS Test
>>> Assertions Guidelines TC and prior referenced works.)
>>>
>>> Three of the dimensions for subsets are then
>>>
>>> D1. Namespace variation (loose subsets where
>>> variation of namespace is allowed)
>>> D2. Model variation
>>> D3. Core Component variation
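The dimensions D1-D3 just listed span the table of use cases mentioned earlier; a cross-product sketch makes the table's size explicit. The value labels are shorthand for the cases in this email (1/2 for namespace, a/b for model, i/ii/iii for core components), not a proposed vocabulary.

```python
# Each tuple in the cross-product is one interoperability use case.
from itertools import product

D1_namespace = ["same-namespace", "different-namespace"]
D2_model = ["shared-model", "similar-model"]
D3_core_components = ["all-CCs", "some-CCs", "no-CCs"]

use_cases = list(product(D1_namespace, D2_model, D3_core_components))
print(len(use_cases))  # 12 distinct use cases
```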
>>>
>>> Are there other similar dimensions of variability for
>>> subsets? E.g.
>>>
>>> D4. Core Component harmonisation (e.g. TBG17)
>>>
>>> The next set of use cases to consider are extensions
>>> of the same base language. Because they share the
>>> same base language, they may all share a common
>>> subset of the base language plus non-common parts
>>> of the base language plus perhaps some common
>>> extensions plus some individual, non-common
>>> extensions. Each may have its own set of dimensions
>>> of variability. The subsets may have D1, D2, D3, D4 at
>>> least. There may be slightly different dimensions of
>>> variability for the extensions:
>>>
>>> D1. Namespaces - inevitable that extensions will
>>> have different namespaces but also might be more
>>> likely that all elements of the extended syntax will
>>> have a namespace different to that of the base (e.g.
>>> Swedish SFTI extended version of UBL 1.0 Invoice)
>>>
>>> D2. Models - each model most likely to be very different
>>> for extensions
>>>
>>> D3. Core Components - possibly that this is only thing
>>> extensions have in common
>>>
>>> D4. Not all extensions will have been harmonised as
>>> harmonisation takes some time, requires standards
>>> body or industry backing and favours initial usage to
>>> prove the requirement of new core components (e.g.
>>> Hong Kong University's approach to CC projects). D4
>>> may also include variability of which core components
>>> are used (US Gov, I understand, have their own and
>>> others may use CEFACT's) and levels of progress
>>> through levels in a harmonisation process which may
>>> have different top levels such as CEFACT TBG17 or
>>> some other harmonisation top level.
>>>
>>> A third set of use cases will be based on interoperability
>>> between languages which are different but all seek to
>>> conform (in presence or absence of conformance
>>> clauses) to CCTS (Core Component Technical
>>> Specification). Here D3 and D4 apply of course but
>>> there will be other dimensions such as:
>>>
>>> D5. Usage of XML or EDI, etc
>>> D6. Variations in datatypes
>>> D7. NDRs
>>> etc.
>>>
>>> Best regards
>>>
>>> Stephen Green
>>>
>>> 2008/7/15  <asuman@srdc.metu.edu.tr>:
>>>       
>>>> Dear Colleagues,
>>>>
>>>> During our next meeting we will address the requirements of electronic business document interoperability.
>>>>
>>>> Although we are all familiar with these problems, before we proceed any further it will be good to gather the specific requirements and use cases that you might have and wish to be discussed. So I look forward to receiving your input briefly describing the requirements or use cases by August 4, 2008, Monday.
>>>>
>>>> Thank you,
>>>>
>>>> Asuman
>>>>
>>>>
>>>>         
>>>
>>> --
>>> Stephen D. Green
>>>
>>> Partner
>>> SystML, http://www.systml.co.uk
>>> Tel: +44 (0) 117 9541606
>>> Associate Director
>>> Document Engineering Services
>>> http://www.documentengineeringservices.com
>>>
>>> http://www.biblegateway.com/passage/?search=matthew+22:37 .. and voice
>>>
>>>       
>
>
>
>   


-- 
____________________________________________________________________________
Professor Asuman Dogac             email: asuman@srdc.metu.edu.tr
WWW: http://www.srdc.metu.edu.tr/~asuman/
Director                           Phone: +90 (312) 210 5598, or
Software R&D Center                       +90 (312) 210 2076
Department of Computer Eng.        Fax: +90 (312) 210 5572                      
Middle East Technical University        +90 (312) 210 1259
06531 Ankara Turkey                      skype: adogac




