Subject: Re: Halfway to genericizing the test case catalog


At 01/06/29 20:31 -0400, David_Marston@lotus.com wrote:
>Below is the generic version of the design for catalogs of test cases.

Thank you for all the work on this, David.

>I'm also considering
>making Date an attribute hanging directly off the test-case element, as the
>category does--would this make automated insertion any harder?

Since you document the test-case Date information item as being for the 
submitter's purposes (I don't see a use for it by our committee), I don't 
think the answer matters much either way.

>Ken's prior feedback also included this statement: "My personal convention
>regarding the attribute/element religion is that #PCDATA is *reserved* for
>human-readable text and attributes are *reserved* for machine-readable
>text, and I coerce my models to meet these very strict constraints." My
>religion says that with XML, especially the way we use it, it's hard to say
>when some information is really exclusively for human use.

Then let me rephrase that as "language use".

>However, the
>reason I have retained most data in elements is more pragmatic: the
>multiple instances of compound data, such as multiple spec-citations, can
>be more easily handled as elements. I think the machine has greater needs
>for structured compound data than does the human. Readers are invited to
>submit their own "religious" statements, but what we really want are
>rational reasons.

I thought I had successfully defended the rational decision.  Using the 
word "language" in place of "human", I feel strongly that #PCDATA (whose 
presence defines the term "mixed content") should be reserved for language 
text because of the opportunity to embed elements in the mixed content.  
Any content that isn't language text belongs in attributes (even if 
cardinality forces the use of multiple sub-elements, each carrying an 
attribute, to overcome the fact that an attribute cannot be repeated on a 
single element).

To quote chapter and verse of the XML Rec, sections 3.2.1 and 3.2.2 define, 
respectively, element content and mixed content.  By definition, only mixed 
content can "contain character data, optionally interspersed with child 
elements".  I feel strongly that the opportunity for content to be 
interspersed with child elements should only be offered to language text 
and never to machine-oriented (another word, anyone?) text.  Non-language 
content can (and should?) be confined to element content, thus requiring 
values to be maintained in attributes.
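
To make the distinction concrete, here is a minimal sketch using names from 
the design below (the EMPTY/ref= form of <place> is only my illustration):

   <!-- language text: mixed content, so embedded elements may later be
        interspersed with the prose -->
   <!ELEMENT purpose ( #PCDATA ) >

   <!-- machine-oriented values: element content only, with the values
        themselves held in attributes -->
   <!ELEMENT spec-citation ( place+ ) >
   <!ELEMENT place EMPTY >
   <!ATTLIST place ref CDATA #REQUIRED >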

Again, I won't ever force my ideas on the committee and will accept the 
model as you have supplied it, but I didn't want my ideas to be 
misrepresented as merely religious belief.

>Dublin Core can be read either way, since they use
>"attribute" and "element" interchangeably when referring to their data
>items.

Correct ... they don't have an impact on information design, just on 
component definition.

>I didn't have time to investigate the metallic properties of germanium; I
>just chose it as the closest word to "generic" among the elements. If any
>reader is a closet Materials Scientist, please advise about appropriate
>substances for the analogy.

:{)}

>To produce a design for a particular instance of processor testing, one
>must adapt this document as follows:
>A. Produce a union list of all the normative documents that affect
>conformance testing of this type of processor, and give them short names.
>(Check whether earlier OASIS committee work has already set some short
>names. Re-use is good!)
>B. List all the scenarios for running test cases and comparing output
>against the "correct" output. Assign names for "operation" and "compare"
>aspects of the scenarios.
>C. Decide whether there is a default assumption about the input files. For
>example, in testing an XSLT processor, we assume that the inputs are
><Source>.xsl (the stylesheet) and <Source>.xml (the data) unless the
>catalog data dictates otherwise. If no default rule exists, then the
>input-file element of the scenario becomes required.
>D. Decide whether there is a default assumption about the outputs. For
>example, in testing an XSLT processor, we assume that the output is a
>single file, of varying type according to the scenario, unless the catalog
>data dictates otherwise. If no default rule exists, then the output-file
>element of the scenario becomes required.
>E. OASIS chooses a set of categories, if desired. The "category" attribute
>may be removed if categories won't be used.
>F. Identify all areas in which the specs grant discretionary choices to
>processor developers. (There is some pressure for W3C Working Groups to do
>this as part of producing their Recommendations.) Catalog the available
>choices in each area.
>G. Identify all gray areas in the specs as best you can. Catalog the
>available choices in each area.
>
>This document describes information that should be associated with a test
>case for (1) identification, (2) description and mapping to spec
>provisions, (3) filtering (choosing whether or not to execute with a given
>processor) and discretionary choices, and finally (4) some operational
>parameters. Each test case is represented by input files and the
>operational parameters to set up all inputs for the particular test case.
>The data described below can be accumulated into a catalog of test cases,
>in XML of course, with one <test-case> element for each case. However, good
>code management practices would probably dictate that the creators of these
>cases retain the definitive data in the primary input file. (For XSLT, the
>primary input is the stylesheet.) A catalog file can be generated from the
>primary inputs. The catalog file would be the definitive version as far as
>the OASIS package is concerned. That is, we expect the submitter to provide
>a catalog and a file tree of test cases (including allegedly-correct
>results), and to coordinate with OASIS on a "Title" for the submission.
>
>Within the catalog, each test is represented as a <test-case> element with
>numerous sub-elements. Most parameters would be interpreted as strings.
>Values that refer to versions, dates, and the like can be interpreted
>numerically, specifically in inequality relations. Excerpts of a potential
>DTD are shown.
>
>(1) IDENTIFICATION
>The outermost element of a submitted catalog is <test-catalog> with a
>"Title" attribute to identify it. This design allows various parties to
>contribute test cases and catalogs thereof into an OASIS committee. The
>globally-unique "Title" string should also be valid as a directory name in
>all prominent operating systems. The title can be suggested by the
>submitter, but must be approved by OASIS. Thus, Lotus would submit a test
>suite called "Lotus" and the OASIS procedures would load it into a "Lotus"
>directory (assuming that the name "Lotus" is acceptable to the OASIS
>committee).
>
><!ELEMENT test-catalog ( test-case* ) >
><!ATTLIST test-catalog Title CDATA #REQUIRED >

I disagree ... I think the Title is assigned by the committee *at merge 
time* through the script that brings the submissions together.  That gives 
us the opportunity to change it.  Also, I'm suggesting that the merge 
control DTD declare the attribute as NMTOKEN (I would have preferred NAME, 
but that SGML feature is not in XML) to prevent any spaces from being 
included.
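
A sketch of what I have in mind (only the attribute declaration changes):

   <!ATTLIST test-catalog Title NMTOKEN #REQUIRED >

Declaring it NMTOKEN lets a validating parser reject any value containing 
spaces, though it cannot enforce the "letter first" file-name constraint.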

>A submitted suite can have arbitrary directory structure under its
>top-level directory, captured in the "Identifier" element for each case,
>with forward slashes as the directory delimiters. The actual name of the
>particular file (and test case) would be in the "Source" element,

Could this be changed to "Sources" with a set of <source name="filename"/> 
sub-elements, one for *every* required input to the test (data files, 
stylesheet, stylesheet fragments, etc.), so the submission can be validated 
as being complete?
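
Something along these lines (a sketch only; the element and attribute names 
are open to discussion, and the file names are invented for illustration):

   <!ELEMENT Sources ( source+ ) >
   <!ELEMENT source EMPTY >
   <!ATTLIST source name CDATA #REQUIRED >

   <Sources>
     <source name="copy01.xsl"/>   <!-- the stylesheet -->
     <source name="copy01.xml"/>   <!-- the data -->
     <source name="copy-lib.xsl"/> <!-- an imported fragment -->
   </Sources>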

>which
>should be a valid file name in all prominent operating systems. The
>Identifier contains the Source string at the end,

How would this accommodate multiple inputs?

As before, I am having difficulty seeing how the scripts are going to work 
with all of the files without having to pull apart a long file path 
specification into components.  Our last meeting voted out the concept of 
using nested elements reflecting the directory hierarchy, so we can't use 
that.  If the "Identifier" includes the entire file spec (path and file 
name), and there is more than one input to the test (every test will have a 
source file and a stylesheet file), will the committee scripts need to pull 
apart the identifier in order to compose the file names of the remaining files?

>but not the Title at the
>beginning.

Yes, I agree.

>Note that the test suite may contain directories that have no
>test cases, only utility or subsidiary files.
>
><!ELEMENT test-case ( Title? , Source , Identifier , Creator* , Date? ,
>   purpose , elaboration? , spec-citation+ , discretionary? , gray-area? ,
>   scenario ) >
><!-- Dublin Core ("DC") used for convenience/standardization where possible
>for meta-data level of this DTD, here we replace FilePath with Identifier,
>per http://purl.org/DC/documents/rec-dces-19990702.htm, "example formal
>identification systems include the Uniform Resource Identifier (URI)
>(including the Uniform Resource Locator (URL))."  Hereafter, quotes within
>comments are from the URI above. -->
>
><!-- DC Title used in place of SuiteName, per "name by which the resource
>is
>   formally known". This must also meet filename constraints: letter first,
>   no spaces, "reasonable" length -->
><!ELEMENT Title ( #PCDATA ) >

Why SuiteName?  I think this is the purview of the committee, not the 
submitter, and need not be present in the contributed file.

Don't we need a unique title for the test case itself?

Perhaps:   <!ATTLIST test-case id ID #REQUIRED>

><!-- DC Source, per "best practice is to reference the resource by means of
>a
>   string or number conforming to a formal identification system," but must
>   meet filename constraints and have no internal periods. This names a
>   single test case. -->
><!ELEMENT Source ( #PCDATA ) >
><!-- Identifier uses forward slashes as separators, begins with the name of
>a
>   directory that is directly within the top directory named per Title, and
>   ends with the name-part in Source. -->
><!ELEMENT Identifier ( #PCDATA ) >

If we remove Title, as I think we should, then the comment above would change.

>OASIS may bless a particular hierarchical organization of test cases. If
>so, then an attribute called "category" should be used to track where the
>test fits in OASIS' scheme of categories. That way, OASIS categories will
>not dictate the directory structure nor the case names. The goal is that no
>case should be marked as belonging to more than one category. A category
>named "Mixed" is needed when there isn't a clean partitioning.

I think "mixed" is up to the committee to provide for in their own set of 
categories.  They may not wish a "back door" to be implicitly available by 
design.  Moreover, it probably should be required to ensure it is 
specified.  If we want the back door for "when there isn't a clean 
partitioning", then we could just leave it as #IMPLIED and have a value not 
specified.

><!ATTLIST test-case
>   category ( !*!YOUR CATEGORY NAMES!*! | Mixed ) #IMPLIED >

<!ATTLIST test-case category NMTOKEN #REQUIRED>

>Submitters should be encouraged to use the "Creator" element(s) to name
>contributors at the individual-person level. They may also wish to use an
>element called "Date" to record, as yyyy-mm-dd, the date stamp on the test
>case. That will allow the submitter to match cases with their own source
>code management systems, and will likely aid in future updates, either due
>to submitter enhancements or W3C changes. OASIS reserves the right to
>insert this element, containing the date received, if no value was supplied
>by the submitter.
>
><!-- Dublin Core Creator instead of Author -->
><!ELEMENT Creator ( #PCDATA ) >
><!-- DC/ISO-8601 Date for the date of submission (from creator's POV) -->
><!ELEMENT Date ( #PCDATA ) >

I think these should be sacrosanct and untouched by the merge process.

It is a good idea to provide components the submitter can use to identify 
the individual tests.

Extending this to the entire suite, I've added Creator and Date children 
to <test-catalog> to record the information regarding the entire collection 
(again from the submitter's POV).
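
Roughly (a first cut at the revised content model; the Title attribute 
would still be assigned at merge time, as argued above):

   <!ELEMENT test-catalog ( Creator* , Date? , test-case* ) >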

>(2) DESCRIPTION AND MAPPING TO SPEC PROVISIONS
>Each test must have a "purpose" element whose value describes the point of
>the test. This string should be limited in length so that the document
>generated by the OASIS tools doesn't ramble too extensively. There would
>also be an optional "elaboration" element whose length is unlimited and
>which may contain some HTML tags.

If we allow this, we won't be able to do validation because we won't know 
which element types will be allowed.

>Nothing in this document should be
>construed as discouraging the use of comments elsewhere in the inputs for
>clarification.
>
><!ELEMENT purpose ( #PCDATA ) ><!-- Max 255 characters, no new-lines -->
><!ELEMENT elaboration ANY >

I am uncomfortable using ANY.  While it allows the "elaboration" element 
itself to be declared, if there is an element from the HTML vocabulary 
inside it then a validating parser will not have a content model to check 
it against.

The validating process I planned was only going to be checking the content, 
not the structure.  If we decide that we still need a validating XML 
processor to validate the structure of our submitted catalogues, then I 
strongly suggest we select a handful of HTML element types and declare them.

Perhaps we could use John Cowan's IBTWSH DTD 
http://home.ccil.org/~cowan/XML/ibtwsh6.dtd for those definitions, though I 
suspect some might question the use of a DTD fragment not coming from the W3C.
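
If we roll our own handful instead, it need not be much more than this 
(purely an illustration; the particular element types chosen would be up to 
the committee):

   <!ELEMENT elaboration ( #PCDATA | p | em | code | ul )* >
   <!ELEMENT p    ( #PCDATA | em | code )* >
   <!ELEMENT em   ( #PCDATA ) >
   <!ELEMENT code ( #PCDATA ) >
   <!ELEMENT ul   ( li+ ) >
   <!ELEMENT li   ( #PCDATA | em | code )* >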

>There must be one or more "spec-citation" elements to point at provisions
>of the spec that are being tested. Expect that even simple cases will need
>several citation elements. The pointing mechanism is the subject of a
>separate design. The more exact it is, the less need there is for an
>"elaboration" string, and also the better inversion from the spec to the
>test cases. The spec-citation element contains a "Rec" attribute to say
>which recommendation (XSLT, XPath, etc.), a "Version" sub-element to say
>which version thereof, and some form of text pointer. To encourage
>submissions before the pointer scheme is final, the Committee may need to
>accept alternative sub-elements of different names: <section> for a plain
>section number, <doc-frag> for use of fragment identifiers that are already
>available in the spec, and <OASISptr1> for the first OASIS pointer scheme,
>as seen in the early work of OASIS' XSLT/XPath Conformance TC. OASIS
>pointers of types 2 and up may be necessary in the future, hence the
>extendable design.
>
>There must always be at least one spec-citation element for the spec that
>is the primary subject of the test suite, and optionally other
>spec-citation elements can be added as appropriate -->
><!ELEMENT spec-citation ( place , Version , version-drop? , errata-add? ,
>errata-drop? ) >
><!ATTLIST spec-citation
>   Rec ( !*!YOUR LIST OF NORMATIVE DOCUMENTS!*! ) #REQUIRED >

<!ATTLIST spec-citation Rec NMTOKEN #REQUIRED>

><!ELEMENT place ( #PCDATA ) ><!-- syntax of content depends on Type -->
><!-- Type is a Dublin Core keyword -->
><!ATTLIST place Type ( section | doc-frag | OASISptr1 ) #REQUIRED ><!--
>More pointer types to come? -->

Fine, though for <place> I would have preferred EMPTY content and a ref= 
attribute.

A committee may wish to constrain or expand the list of pointer types; I've 
suggested:

<!ATTLIST place Type NMTOKEN #REQUIRED>

... where the value is validated against the test suite configuration.

><!ELEMENT discretionary ( discretionary-choice )* >

I'm changing "*" to "+" since the <discretionary> element is already 
optional in the parent; when present, it should contain at least one choice.

><!ELEMENT discretionary-choice EMPTY >
><!ATTLIST discretionary-choice name CDATA #REQUIRED behavior CDATA
>#REQUIRED>
>!*! Where do we validate the set of names? !*!

In the pass against the test suite configuration.

>!*! How do we limit the behaviors allowed on each individual choice? !*!

Through the use of name tokens, validated against the test suite 
configuration.

<!ATTLIST discretionary-choice
           name NMTOKEN #REQUIRED
           behavior NMTOKENS #REQUIRED>

Okay, now during my prototyping I don't see why a test file would specify 
behaviour ... the discretionary document describes possible behaviours, not 
the individual test.  The testing agent can then check the behaviour of the 
processor against the choices as described in the discretionary document.
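
So in the catalog a test case would only name the discretionary item it 
depends on, something like (the choice name here is invented for 
illustration):

   <discretionary>
     <discretionary-choice name="some-documented-choice"/>
   </discretionary>

... and the pairing of names with allowable behaviours would live in the 
discretionary document, against which the testing agent checks the 
processor's declared choices.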

>Vague areas in the spec are handled in the same manner as the discretionary
>items above, with <gray-area> substituting for the <discretionary> and the
>abbreviated names assigned by the Committee. This is where the errata level
>is likely to come in to play, since errata should clear up some vague
>areas. Once again, the tester has to ask the developer to answer questions
>about their design decisions, and the answers should be encoded using
>keywords which can then be matched to the <gray-area> elements. One test
>case could serve as both a gray-area for one choice and as the lone case
>for errata-add, when that gray-area choice is the one that the errata later
>chose.
>
><!ELEMENT gray-area ( gray-choice )* >
><!ELEMENT gray-choice EMPTY >
><!ATTLIST gray-choice name CDATA #REQUIRED behavior CDATA #REQUIRED>
>!*! Where do we validate the set of names? !*!
>!*! How do we limit the behaviors allowed on each individual choice? !*!

Given that a gray area may, over time, turn into a discretionary area, does 
it make sense to just call them all discretionary and let the verbiage 
associated with each acknowledge its status?

This would require us to ensure the discretionary document includes all 
agreed-upon gray areas.

>(4) OPERATIONAL PARAMETERS
>At Lotus, we have thought a lot about how comments in the test file can
>describe the scenario under which the test is run

These ideas look good.

>, though we have not yet
>implemented most of the ideas. These parameters describe inputs and
>outputs, and a <scenario> element could describe the whole situation
>through its "operation" and "compare" attributes. The "operation" value
>describes how to run the test, while "compare" describes how to evaluate
>the outcome. In the "standard" Operation scenarios, we construct the names
>of the inputs from the <Source> element, and output is expected in one file
>that could then be suitably compared to the "correct output" file.
>"Compare" options include "XML", "HTML", and "Text", corresponding to the
>types of output and the possible methods of comparison. One or more
><input-file> and <output-file> elements could be used to specify other
>files needed or created, and the values of these elements should permit
>relative paths. A single input-file element could be used to specify that
>one of the heavily-used standard input files should be retrieved instead of
>a test-specific input file. (Lotus has hundreds of tests where the XML
>input is just a document-node-only trigger, and we would benefit from
>keeping one such file in a Utility directory.) The implication of the
>latter rule is that if there exists even one input-file element, no inputs
>are assumed and all must be specified.
>
><!ELEMENT scenario ( input-file* , output-file* , param-set? , console ) >
><!ATTLIST scenario
>   operation ( !*!YOUR LIST OF WAYS TO OPERATE!*! ) #REQUIRED
>   compare ( !*!YOUR LIST OF OUTPUT TYPES!*! | manual ) #REQUIRED >

Again, I don't think we should provide a back door in the event a committee 
doesn't want a back door.

Regarding "operation", this would force a submitter to constrain themselves 
to what the committee expects to be allowed ... I guess that is okay ... I 
have to think about this some more.  The committee will decide the 
different ways a test could work, then it will be up to the testing party 
to implement each way required.

Also, given the old:

   compare ( XML | HTML | Text | message | message-XML | message-HTML |
             message-Text |

Would it make sense to split this off as follows?

   compare values:  (listed in the configuration file)
   message ( message ) #IMPLIED

... with validation to confirm that if compare is skipped then message 
cannot also be skipped.

I also added "error ( error ) #IMPLIED" for when the test is supposed to 
throw an error instead of producing an output for comparison.  This makes 
the compare attribute optional, with validation ensuring they aren't all 
omitted.
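
Put together, the scenario attribute list I'm prototyping looks roughly 
like this (a sketch; the operation and compare value lists come from the 
configuration file, so here they appear only as name tokens):

   <!ATTLIST scenario
              operation NMTOKEN #REQUIRED
              compare   NMTOKEN #IMPLIED
              message   ( message ) #IMPLIED
              error     ( error )   #IMPLIED >

The DTD alone cannot express "at least one of compare, message or error 
must be present"; that check belongs in the validation pass against the 
configuration instance.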

><!ELEMENT input-file ( #PCDATA ) >
><!ELEMENT output-file ( #PCDATA ) >

I prefer ref=, and I assume the file name is relative to the subdirectory 
specified within the Identifier ... which brings me back to how either we 
or the user of the test suite is going to parse the subdirectory out of the 
Identifier.
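
For instance (just a sketch of the ref= style):

   <!ELEMENT input-file  EMPTY >
   <!ATTLIST input-file  ref CDATA #REQUIRED >
   <!ELEMENT output-file EMPTY >
   <!ATTLIST output-file ref CDATA #REQUIRED >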

>An operation keyword could imply that more or fewer inputs are needed than
>in the "standard" operation.
>
>An operation keyword could imply that extra invocation options or
>environment settings are needed. The Committee could push responsibility to
>the processor developer to provide a script/batch mechanism to take values
>from standardized data and map them to the specific syntax of their
>processor.

Can we not leave this to the testing organization?  I think it is outside 
the scope of our committee's work.

>The part below shows the connection to the data that the
>script/batch mechanism would apply. This is essentially a special-purpose
>input file. The most likely formats are:
>(1) (type) name=value [new-line delimits?]
>(2) a simple XML element with name and type attributes
>There should be allowance for simple options, such as a one-word option
>that can be present on the command line.

I prefer the name/type/value tuple approach.
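
In XML that could be as simple as (a sketch; the element and attribute 
names are placeholders):

   <!ELEMENT param-set ( param* ) >
   <!ELEMENT param EMPTY >
   <!ATTLIST param
              name  NMTOKEN #REQUIRED
              type  NMTOKEN #IMPLIED
              value CDATA   #REQUIRED >

e.g. <param name="doc-param" type="string" value="run-time-value"/> for a 
top-level stylesheet parameter passed in on the command line.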

><!-- This needs further design. Assume it designates an input file. -->
><!-- This value is only relevant when the operation keyword is of certain
>values -->
><!ELEMENT param-set ANY >
>
>We also want to be able to test that a message was issued (as in
>xsl:message) and that an error was issued. The "console" element will be
>used to designate strings that must be present in either the standard
>output or standard error stream. (The test lab would be responsible for
>setting up capture of the console output.) The compare keyword "message"
>can designate that, when running this test, capture the standard/error
>output into a file, and ignore the file output one would normally check.
>The Committee may need compare keywords like "message-HTML" to say that
>both the console output and an HTML file must be compared. For console
>output, the test of correctness is to grep for the designated string in the
>captured output stream. If a tester wished, they could get actual error
>message strings from the processor developer and refine the test harness to
>search for those exact messages in error output. In that case, the string
>in the console element is used as an indirect reference to the actual
>string.

I've suggested an error attribute on scenario.

><!-- should contain actual error report output string,
>   or could be pointer to another file containing such strings.
>   Less desirable: description of the problem. -->
><!ELEMENT console ( #PCDATA ) >

I disagree that it should contain expected strings; I think it can only 
contain a description.  We cannot ask vendors to supply their specific 
message strings.

Why is it mandatory in <scenario>?  I've made it optional for now.

>A compare value of "manual" would be used sparingly, for output whose
>format must meet constraints but whose actual data is only known on the
>fly. (Examples: fetch current time, generate random numbers.) Additional
>"scenario" keywords can be devised as necessary, but OASIS should control
>the naming.

The configuration instance can control that.

>The Committee might want to allow names beginning with a
>specific letter to be local to particular test labs. For example, we would
>reserve all names beginning with "O-" and instruct the test labs that they
>should put their name as the next field, then another hyphen, then their
>local scenario keywords (e. g., O-NIST-whatever) that allow them to set up
>local conditions (e.g., use of APIs) as needed.

All within the instance describing the features of the suite.

>HOW IT WORKS
>When rendering a specific instance of the test suite,

I'm still unclear about the file naming conventions, the relationship of 
Source and multiple inputs and outputs to the Identifier value, and how 
both our assembly process and the tester are going to synthesize the fully 
qualified file names.

A follow-on message will describe some of the prototyping I'm doing.

..................... Ken


--
G. Ken Holman                      mailto:gkholman@CraneSoftwrights.com
Crane Softwrights Ltd.               http://www.CraneSoftwrights.com/s/
Box 266, Kars, Ontario CANADA K0A-2E0     +1(613)489-0999   (Fax:-0995)
Web site:     XSL/XML/DSSSL/SGML/OmniMark services, training, products.
Book:  Practical Transformation Using XSLT and XPath ISBN 1-894049-06-3
Article: What is XSLT? http://www.xml.com/pub/2000/08/holman/index.html
Next public instructor-led training:      2001-08-12,08-13,09-19,10-01,
-                                               10-04,10-22,10-29,02-02

Training Blitz: 3-days XSLT/XPath, 2-days XSLFO in Ottawa 2001-10-01/05


