Subject: RE: RE: [ubl-dev] Datatype Methodology RE: [ubl-dev] SBS and Restricted Data Types


On Wed, 10 May 2006, Fulton Wilcox wrote:

>>All:
>>
>>In the context of boundary management between what are UBL opportunities and
>>obligations versus what is trading partner internal particularity, I put
>>together the following comments on "filtering." I suggest that generally
>>UBL-level filtering is at a high risk of being: 1) irrelevant,
>>2) duplicative of other, more authoritative filters, or 3) the
>>opposite of "future-proofing", inadvertently ruling out new uses
>>of UBL or complicating version upgrades.
>>
>>To add some specifics:
>>
>>... long but easy-to-read in-depth analysis of 
>>    supporting arguments removed ...
>>
>>Therefore, I suggest that minimal filtering be done as part of UBL - only
>>that which has predictable, net value-added results. For very large subsets
>>(e.g., all "small businesses") much the same holds.
>>
>>
>>					Fulton Wilcox
>>					Colts Neck Solutions LLC
>>

Thanks for sharing this very nicely summarised article on why UBL
should have minimal "filtering".

As you laid out in the bullet points, the "filtering" doesn't go
away, as it depends very much on the individual usage context, which
in turn imposes local customisations and requirements on what would
otherwise be a standard representation (the UBL format).  Where the
"filtering" isn't catered for in the standard (describing how, when
and in what form to customise/filter), it gets deferred to
higher-layer processing, and therefore either requires the
application to perform such filters and checks, or else bombs the
application, causing undesirable "deep-in" error processing and
recovery, much of which you've also mentioned.

I think a lot of the debate is a result of how much people want to
"push down" the rejection trigger towards the schema validation
layer.  Many realise the benefits of early rejection of incorrectly
formatted data, and so that gets translated into, 

     "Can the schema describe only those data format I want 
      and so all others, even if they conform to UBL standard, 
      get automatically rejected?"

If I may (yet again) abuse the word, there's a certain level of
"overloading" of the schema to process application/business-level
requirements and logic.  For example, let's look at 
Joseph Chiusano's string-length example again.  An application
may require a particular data element (e.g. Name) to be 35 characters
max, while another totally different app may need a fixed 60-char
space-filled name.

The standard (currently) allows unlimited length.  The question
becomes whether one should write separate software modules to check
the lengths of Name fields for 35 and 60 characters respectively.
Schema has the ability to describe limits of 35 or 60 characters,
but not the ability to transform incoming non-space-filled names
into space-filled ones for the latter application.
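
As an illustration, here is a minimal sketch of the 35-char case
(it assumes Python with the lxml library and a hypothetical
standalone Name element; this is not the actual UBL schema, where
names are namespaced):

    from lxml import etree

    # Hypothetical XSD restricting Name to 35 characters.
    APP_XSD = """
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:element name="Name">
        <xsd:simpleType>
          <xsd:restriction base="xsd:string">
            <xsd:maxLength value="35"/>
          </xsd:restriction>
        </xsd:simpleType>
      </xsd:element>
    </xsd:schema>
    """

    schema = etree.XMLSchema(etree.XML(APP_XSD))

    ok       = etree.XML("<Name>ACME Pte Ltd</Name>")
    too_long = etree.XML("<Name>" + "X" * 36 + "</Name>")

    print(schema.validate(ok))        # True:  within 35 chars
    print(schema.validate(too_long))  # False: rejected at schema layer

The maxLength facet is exactly the kind of limit a schema can
describe, so the rejection happens early, before any application
code runs.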

So for the former, one could perhaps get by more easily by requiring
the schema to reject names longer than 35 characters.  But for the
60-fixed-char app, the need for space-filling means one needs to
write software (albeit short) anyway, which could well do the check
for the 60-char maximum requirement too.
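
To make that concrete, a hedged sketch (plain Python, with a
hypothetical function name) of the short module such a
60-fixed-char application would need anyway, since no schema facet
can perform the space-filling transformation:

    def to_fixed_60(name: str) -> str:
        # The length check the schema could also have expressed...
        if len(name) > 60:
            raise ValueError("Name exceeds 60 characters: %d" % len(name))
        # ...and the transformation the schema cannot express:
        # space-fill to exactly 60 characters.
        return name.ljust(60)

Since the module must exist regardless, the length check rides
along for free, which is the point above.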

Which part is business/application logic and requirement, and which
part is standard-conformance requirement, seems a bit grey from the
user's perspective.  But I agree with you that if the standard is
made very clear that it filters nothing that is particular and
permits a maximum space for carrying data, then it would be clear to
the end-user that any form of "further processing" is the
application's responsibility.

In support of your arguments, I'd also like to point to the 2-Phase
view of processing I mentioned earlier (a rough code sketch follows
the outline):

    Phase 1::   Process & validate incoming data instance
                against standard schema NORM-SCHEMA 
                (normative, uncustomised)

    Phase 2::   (a) Process & validate incoming data instance
                    against application-oriented schema APP-SCHEMA
                    (customised, localised schema with either
                    same namespace as UBL [but not published and
                    only used in-transit within the app] or just
                    namespace-less),

                (b) Use software module to inspect and perform
                    active error checks (active means triggering
                    suitable actions/responses on error)

                (c) a combination of (a), (b) and other techniques
                    available and local to the end-user.
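
A rough sketch of that pipeline (assuming Python with lxml, and
hypothetical schema file names; Phase 2(c) is left out since it is
local to the end-user):

    from lxml import etree

    # Phase 1: normative, uncustomised UBL schema (hypothetical file).
    NORM_SCHEMA = etree.XMLSchema(etree.parse("norm-schema.xsd"))
    # Phase 2(a): customised, localised application schema.
    APP_SCHEMA  = etree.XMLSchema(etree.parse("app-schema.xsd"))

    def process(instance_path):
        doc = etree.parse(instance_path)

        # Phase 1: standard conformance first.
        NORM_SCHEMA.assertValid(doc)

        # Phase 2(a): application-oriented validation.
        APP_SCHEMA.assertValid(doc)

        # Phase 2(b): active checks that trigger suitable
        # actions/responses on error (bare tag for brevity;
        # real UBL element names are namespaced).
        for name in doc.iter("Name"):
            if len(name.text or "") > 60:
                raise ValueError("Name too long for fixed-60 app")

        return doc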

While Phase 2 is always needed for real-life applications, Phase 1's
focus would be, as you mentioned, to permit a broader-based approach
to accommodate various data, thereby achieving interoperability at
the broadest level without unnecessarily imposing limits on
particular groups of users.



Best Regards,
Chin Chee-Kai
SoftML
Tel: +65-6820-2979
Fax: +65-6820-2979
Email: cheekai@SoftML.Net
http://SoftML.Net/





