Subject: RE: RE: [ubl-dev] SBS and Restricted Data Types
Subject: RE: RE: [ubl-dev] SBS and Restricted Data Types

(cc to named emails removed, as I believe both Joseph & Stephen are on the ubl-dev list)

Sorry if I may jump in here a bit, both to clarify and to learn from you guys if you convince me the other way around.

I don't think the notions of "customisation" and "subsetting" should be put together and talked about as one. Their relationship, from what I understand, is quite orthogonal. In other words, one could talk about how to customise UBL schemas in general without worrying about the notion of subsetting, which is what historically happened with the creation of the UBL Customisation Methodology document well before the idea of SBS subsetting even came about. Conversely, a subsetting methodology like that of the SBS could proceed without linking to customisation, that is, without saying in what manner types may be extended or restricted.

Subsetting literally picks out already-existing elements by virtue of some criteria (yet another axis, orthogonal to both customisation and subsetting), such as small-business common requirements, most frequently used elements, etc. Whether the subset is described in XPath text, XPath XML, English, or even an XSD schema (one that does not touch the extension/restriction areas), the idea remains the same: it is a subset, not a customised (ie, type-modified) form of the UBL schemas.

If there were an implementable UBL Customisation Methodology (coz' I don't think the current document, in its UBL 1.0 form, is implementable; maybe in UBL 2.0?), it would affect all forms of UBL schemas, derived or normative, and any subsetting method such as that of the SBS would then apply to those customised schemas as well.

Granted, one could always argue that a type-modified schema is a schema-subset of UBL. But that group of schema-subsets is a subset of the schema space that happens to match the output of the subset methodology used in the SBS (ie, selecting elements on a verbatim basis).
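To make the "subsetting picks out already-existing elements verbatim" point concrete, here is a minimal sketch. The element paths are hypothetical stand-ins, not the real SBS subset, and a real implementation would work from the published SBS XPath lists; the point is only that a subset is a membership test over existing schema elements, not a type modification:

```python
# A subset, in the SBS sense, is just a set of absolute element paths
# picked verbatim from the normative UBL schema space. Checking an
# instance against the subset is pure set membership; no types are
# extended or restricted anywhere.
import xml.etree.ElementTree as ET

# Hypothetical paths for illustration only (not the actual SBS subset).
SBS_SUBSET = {
    "/Invoice",
    "/Invoice/ID",
    "/Invoice/IssueDate",
    "/Invoice/InvoiceLine",
    "/Invoice/InvoiceLine/LineExtensionAmount",
}

def element_paths(xml_text):
    """Yield the absolute path of every element in an instance."""
    root = ET.fromstring(xml_text)
    def walk(elem, prefix):
        path = prefix + "/" + elem.tag
        yield path
        for child in elem:
            yield from walk(child, path)
    yield from walk(root, "")

def within_subset(xml_text, subset):
    """True iff every element of the instance was picked by the subset."""
    return all(path in subset for path in element_paths(xml_text))

instance = "<Invoice><ID>INV-1</ID><IssueDate>2006-05-08</IssueDate></Invoice>"
print(within_subset(instance, SBS_SUBSET))                           # True
print(within_subset("<Invoice><Surprise/></Invoice>", SBS_SUBSET))   # False
```

Any instance fragment built only from subset elements is, by construction, also acceptable to the original schema elements, which is the coincidence-by-design discussed below.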
Type modification can also mean adding elements or types, extending enumerations, or restricting string lengths, just to name a few. These outcomes would be different from those of subsetting as used in the SBS. Subsetting as used in the SBS is more an instance-space subset: since elements are picked from the schema space on a verbatim basis (that's why using XPath has an advantage here, though it is not to be construed that XPath is the only way to do it), the resulting instance data element subsets automatically match exactly those instances accepted by the original schema elements. This is coincidence-by-design rather than proof that a subset is a customisation.

They (schema-subsets & instance-subsets) are fundamentally different spaces of entities altogether, even though we often speak of them as one and the same. Set-entities within a schema-subset would be schemas or fragments of schemas (eg. element, complexType, etc), while set-entities within an instance-subset would be XML instances or fragments of XML instances (eg. <Address>137 El Camino Real</Address>).

>>have been saying that in order to conform to UBL, an implementation of
>>UBL must not reject an instance that the normative schemas would not
>>reject. However, if an implementation restricts - without changing the

This is where UBL conformance needs to be worded more clearly, isn't it? In the case you mentioned, a 100-char string being rejected against a 30-char application requirement, thus implying the application is non-conformant, I don't see a contradiction:

Phase 1 ===== Standard Validation & Verification
Step 1:: INPUT: 100-char string
Step 2:: PARSE: relevant NORM-SCHEMA (UBL-normative) selected and parsed as valid
Step 3:: VALIDATE: INPUT pitched against NORM-SCHEMA for validation
Step 4:: OUTPUT: OK

Phase 2 ===== Application Validation & Verification
Step 1:: INPUT: 100-char string
Step 2:: PARSE: relevant APP-SCHEMA (a derived UBL schema with the 30-char restriction; for optimisation it could also be implemented as a delta schema, but such tricks are just optimisation and don't change the conceptual notion of a derived UBL schema) parsed as valid
Step 3:: VALIDATE: INPUT pitched against APP-SCHEMA for validation
Step 4:: OUTPUT: Rejected due to excessive length

This two-phase, step-by-step illustration shows that there is no contradiction in implementing UBL with restrictions, or for that matter customisations. "An implementation of UBL", as per your wording, would in real life most likely need finitely parameterised end-points for types like strings, integers (depending on their use), decimals, etc. So a practical real-life implementation would most likely run an optimised, faster form of the above Phase 1 + Phase 2. It shows neither contradiction nor non-conformance as far as I can see, unless I'm missing something really obvious in your example.

Thanks.

Best Regards,
Chin Chee-Kai
SoftML
Tel: +65-6820-2979
Fax: +65-6820-2979
Email: cheekai@SoftML.Net
http://SoftML.Net/

On Mon, 8 May 2006, Chiusano Joseph wrote:

>>The challenge would be to determine which constraining facets should be
>>supported for which data types. One could support all constraining
>>facets for all W3C Schema data types, but - as you say - this would
>>require a fair bit of work. Also, unless one repeats the data type that
>>is in the UBL schema that is being subsetted, one would still need to
>>match the constraining facet (through an additional pass, perhaps -
>>depending on implementation) with the data type in the UBL schema to
>>ensure that that constraining facet is valid given the data type (e.g.
>>that there is not an "xsd:maxExclusive" facet specified for an
>>xsd:string data type).
>>
>>One could support a selected set of constraining facets across-the-board
>>for all data types, and then document this as part of SBS - with the
>>notion that implementing a selected set means less costly
>>implementations, which is good for small business.
>>
>>Another aspect to consider is this: All along during this thread, we
>>have been saying that in order to conform to UBL, an implementation of
>>UBL must not reject an instance that the normative schemas would not
>>reject. However, if an implementation restricts - without changing the
>>normative schemas - a string value to, for example, 30 characters, and
>>an instance is received that is 100 characters - which would be
>>considered valid according to the normative schemas - then we are saying
>>that the instance with 100 characters must not be rejected, or the
>>implementation is UBL non-conformant. This definition of conformance may
>>- I believe - therefore need to be relaxed, as how can one have it both
>>ways? (data type restrictions *and* UBL conformance).
>>
>>Just some things to think about...
>>
>>Joe
>>
>>Joseph Chiusano
>>Associate
>>Booz Allen Hamilton
>>
>>700 13th St. NW, Suite 1100
>>Washington, DC 20005
>>O: 202-508-6514
>>C: 202-251-0731
>>Visit us online@ http://www.boozallen.com
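The two-phase illustration earlier in this message can be sketched in code. This is a minimal sketch only: the 100- and 30-character limits follow the example in the thread, the function names are hypothetical, and a real implementation would validate against the actual NORM-SCHEMA and derived APP-SCHEMA with an XSD validator rather than hand-rolled length checks:

```python
# Minimal sketch of the two-phase validation described in the thread.
# NORM_MAX_LEN stands in for what the normative UBL schema accepts in
# the example; APP_MAX_LEN is the application's 30-char restriction.
NORM_MAX_LEN = 100   # accepted by the normative schema in the example
APP_MAX_LEN = 30     # the derived (restricted) application schema

def phase1_normative(value: str) -> bool:
    """Phase 1: validate against NORM-SCHEMA (UBL-normative)."""
    return len(value) <= NORM_MAX_LEN

def phase2_application(value: str) -> bool:
    """Phase 2: validate against APP-SCHEMA (30-char restriction)."""
    return len(value) <= APP_MAX_LEN

def validate(value: str) -> str:
    # A rejection in Phase 2 says nothing about UBL conformance:
    # Phase 1 already established the instance is valid UBL.
    if not phase1_normative(value):
        return "rejected: not valid against normative UBL schema"
    if not phase2_application(value):
        return "rejected due to excessive length (application restriction)"
    return "accepted"

print(validate("x" * 100))  # rejected by Phase 2, though Phase 1 passed
print(validate("x" * 20))   # accepted
```

The design point is that the two rejections are kept distinct: only a Phase 1 failure bears on UBL conformance, while a Phase 2 failure is purely an application-level restriction, which is the resolution of the apparent contradiction raised in the quoted message.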