
Subject: Minutes of UBL Atlantic Call 13 October 2004

Dear all,
Please find below an account of today's Atlantic call.


1. Roll Call and welcome by Mavis the moderator.

Mavis Cournane
Jon Bosak
Marion Royal
Jessica Glace
Anne Hendry
Tony Coates
Sylvia Webb
Paul Thorpe
Marty Burns


From a single data model the tool would create spreadsheets and schemas. 
These could be compared against other spreadsheets that would be 
created for QA purposes.

The spreadsheets would be imported into EDIFIX, which would use the 
current UBL spreadsheet template for this purpose.
The TBG17 template is also supported.

There would need to be an alignment between the data in the 
spreadsheets and what is needed to create consistent output.

This alignment process would need to be discussed at the next 
face-to-face meeting.

AH: We want to keep the spreadsheets as our modeling format.

JB: The burden of compliance would be on the submitter. They would need 
to submit in the agreed UBL template format.

MB: In accepting submissions it is important to compare the input with 
the EDIFIX output.

JB: We will need to open up the EDIFIX black box to understand how it 
works, in order to get an understanding of what alignment will be 
required.

For now we should view this just in the context of 1.1, as we are 
constrained to input spreadsheets; there is already a mechanism set up 
for digesting them.

There are gating items:
1. We will have some work to make sure that the spreadsheet output 
aligns with our spreadsheet input.
2. With regard to code list spec, there are some aspects of this that 
need to be nailed down so that the tool handles code lists correctly.

MB: Going to a model from XML is not a major issue for the tool. If you 
kept your data model in XML and rendered the schemas based on the data 
model, you could write an XSLT script to do the schemas.
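
Marty's suggestion (keep the data model in XML and render the schemas 
from it with a transform) can be sketched as follows. This is a toy 
illustration in Python rather than XSLT, and every element, attribute, 
and type name in it is a hypothetical stand-in, not the actual UBL 
data model.

```python
# Sketch: keep the data model in XML, render a skeletal schema from it.
# All names below (bie, property, AddressType, ...) are illustrative only.
import xml.etree.ElementTree as ET

# A toy XML data model: one business information entity with two properties.
model_xml = """
<model>
  <bie name="Address">
    <property name="StreetName" type="xsd:string"/>
    <property name="CityName" type="xsd:string"/>
  </bie>
</model>
"""

def render_schema(model: str) -> str:
    """Render a skeletal XSD from the toy model (the XSLT-style step)."""
    root = ET.fromstring(model)
    lines = ['<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">']
    for bie in root.iter("bie"):
        lines.append(f'  <xsd:complexType name="{bie.get("name")}Type">')
        lines.append("    <xsd:sequence>")
        for prop in bie.iter("property"):
            lines.append(f'      <xsd:element name="{prop.get("name")}" '
                         f'type="{prop.get("type")}"/>')
        lines.append("    </xsd:sequence>")
        lines.append("  </xsd:complexType>")
    lines.append("</xsd:schema>")
    return "\n".join(lines)

print(render_schema(model_xml))
```

In a real pipeline the same single-pass transform would be an XSLT 
stylesheet over the authoritative model, so that schemas are always 
regenerated rather than hand-edited.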

JB: Moving to the registry information model is something we would want 
to look at beyond 1.1.
Given that the schemas we put out are representations of our data 
model, I am not sure about the distinction. Are these linearised, and 
is GEFEG more complex? The model that GEFEG is maintaining is 
something like the data model we have before we create specific 
documents.

AH: I think the GEFEG model has more in it than we can capture in the 
spreadsheets.

MB: Tony is proposing using XML for code lists, and it makes little 
sense to propose something different for this. One approach would be a 
parallel one: the GEFEG tool is migrated along with the rules, the 
code lists are progressed according to Tony's vision, and we evaluate 
in the near term whether Tony's method works.
If you made it possible for submissions to be in XML, and you could 
process them via XSLT or GEFEG, you would have a forward-looking 
approach.

GEFEG could give us an idea of what the data model looks like when they 
import it into their tool. Tony could build on that for the specific 
ideas that he has for code lists.

JB: That assumes someone has done the work on the XML realization of 
the data model.

MB: If that is something that could be provided in some format that 
Tony could look at that would be great.

ACTION ITEM: Sylvia will investigate if this could be made available to 
Tony in some format.

JB: Another big question is where does the resource come from to do the 
work regardless of the tool.
If we accept this proposal, we will have the work that generates the 
schemas done for us by the tool.

Is it understood that if we adopt this tool, the data model is still 
the product of the UBL TC?

SW: That is correct.
With respect to EDIFIX being used for 1.1 only, has any thought been 
given to what would be required to use it beyond 1.1?

JB: The biggest impediment is that the internal data format is 
proprietary and opaque. If that were exposed and open, everyone would 
feel more comfortable.
Assuming that the ebXML registry information model is perfectly 
enabled, we would have the standard XML representation that Marty was 
talking about, and we would be more comfortable with EDIFIX. We would 
not then be single-sourced on the supplier.

SW: Issues for us would be resources and cost.

JB: I am really not making any assumptions about what lies beyond 1.1 
in terms of this relationship.

The critical issue is arriving at a 1.1 version of the code list spec; 
until that happens we can't begin to create final versions of the code 
list schemas.

SW: We need to agree to what that spec is. The tool can populate the 
schema with a code list and sub code lists or portions of a code list.
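
Sylvia's point (the tool can populate a schema with a whole code list, 
a sub code list, or just a portion of a list) can be sketched in a few 
lines. The ISO 4217 currency codes below are real values, but the 
function and type names are illustrative assumptions, not EDIFIX 
output.

```python
# Sketch: populate a schema type from a full code list or a chosen subset.
# Function and type names are hypothetical; the codes are ISO 4217 values.

def code_list_type(name, codes, subset=None):
    """Render an xsd:simpleType restricting xsd:string to the given codes.

    If `subset` is given, only those codes are enumerated -- a "portion
    of a code list" in the sense discussed on the call.
    """
    chosen = [c for c in codes if subset is None or c in subset]
    lines = [f'<xsd:simpleType name="{name}">',
             '  <xsd:restriction base="xsd:string">']
    lines += [f'    <xsd:enumeration value="{c}"/>' for c in chosen]
    lines += ['  </xsd:restriction>', '</xsd:simpleType>']
    return "\n".join(lines)

iso4217 = ["EUR", "GBP", "JPY", "USD"]

# Full code list:
print(code_list_type("CurrencyCodeType", iso4217))
# Portion of the list, e.g. only the codes a trading community uses:
print(code_list_type("CurrencyCodeSubsetType", iso4217,
                     subset={"EUR", "USD"}))
```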

JB: The way those code lists are used is controlled by wired-in rules. 
That part needs to be nailed down before any final schemas can be 
produced.
We will have a 1.1 code list spec doc; at that point, even assuming we 
have an EDIFIX guru to maintain the data model, that person won't be 
able to make the changes needed to realise the code list model.

SW: We won't reduce the technical support to maintain the tool; it is 
just the data input resource that we are looking to reduce.

JB: When the code list spec is final, GEFEG will change the tool 
accordingly?

SW: We will fully maintain, enhance and update the tool.

TC: To what degree do we need to use GEFEG to do the code lists?

JB: My impression is that we saved a lot of work around that by using 
the fact that GEFEG has already built in the ordinary code lists. If we 
provide these separately, we incur the job of maintaining them 
ourselves.

I think you are suggesting that we can provide code lists by a separate 
mechanism.

We could move the completion of 1.1 code list spec out of the critical 
path if we did late binding of the code list spec.

The best strategy is to get early completion of a revised code list 
spec and use EDIFIX. As a fallback plan we can provide these code list 
schemas by alternative means.

If Tony and Marty look at this and conclude that there is a more 
congenial way for them to work, then I would not have a problem with 
that.

SW: We would just like the final specs as quickly as possible.

TC: In the long term I would like to see XML used. I don't have an 
issue with the EDIFIX approach in the short term.

Agreed: Tony and Marty will continue to work on the code list spec. 
They will continue to work on the data model in XML format and will 
liaise with GEFEG on how this could be implemented.

JB: Unless something comes up in the Pacific Call I am inclined to 
accept the proposal and I will probably put this to the TC in the 
coming days. For 1.1 we have had a very good relationship. GEFEG have 
been very accommodating and helpful, and we don't have many concerns.

AH: It would be good to have Tim talk to item 2 on the GEFEG proposal 
regarding the CCTS spreadsheets.

Agreed to change "CCTS data model" to "CCTS/UBL data model" in item 2 
of the GEFEG proposal.


    Event reports

Reviewed, nothing new added.

    Liaison reports

    HISC report
Deferred. Jon did add that this SC is looking for members to provide 
    [SSC report: covered as part of EDIFIX discussion]

    Code List Team report
TC: Input from SDML and MDDL has been received by Tony Coates.
Tony is awaiting UBL input.
Jon Bosak will check with Marty Burns about Tony taking the team lead 
and Marty continuing as editor.

    TBG17 Liaison Team report


    UBL FAQ: Status report (Anne Hendry)

It was agreed that this would be worked on by the SSC for now.

    UBL TC meeting in Santa Clara

So far the following people have indicated they will be coming:
Kim, Bill, Mavis, Anne, Jon, Eduardo, Sylvia, Peter Yim, Marion, Mike 

       Possible F2F agenda items:


    Addition of new items to the work list

Alignment will be added.

We will not have a call the week before we meet in Santa Clara, i.e. 27 
October.
