


xslt-conformance message

[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [Elist Home]

Subject: Stages of comparison processing (Was: Kirill's Questions on...Iron Man)

Splitting Ken's comments for emphasis:
1>My recollection is that the submitter would send the output file,
2>we would generate and publish the infoset instance of the output,
3>the user would run canonicalization on our supplied infoset instance
4>and the infoset instance of their generated output,
5>and then compare the two canonical outputs.
6>Since the canonicalization of both files is done by the user, there
7>should be no problems in line endings.
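Ken's steps 3-5, as run by the Test Lab, can be sketched as a small harness routine. This is a minimal sketch only: the InfoSetizer and canonicalizer are passed in as placeholder callables, since the committee has not settled on the actual tools.

```python
import filecmp

def compare_outputs(reference_infoset, actual_output, infosetize, canonicalize):
    """Steps 3-5 of the quoted process, as run by the Test Lab.

    `infosetize` and `canonicalize` are placeholders for whatever tools
    we eventually ship; each takes an input path and returns the path of
    the file it produces.
    """
    # Step 4: InfoSetize the actual output from the processor under test.
    actual_infoset = infosetize(actual_output)
    # Step 3 (and between 4 and 5): canonicalize both infoset instances.
    ref_canon = canonicalize(reference_infoset)
    act_canon = canonicalize(actual_infoset)
    # Step 5: compare the two canonical files byte for byte.
    return filecmp.cmp(ref_canon, act_canon, shallow=False)
```

Since both canonicalizations happen on the Test Lab's machine, line-ending differences wash out before the compare, which is the point of step 6 above.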

So the answer to Kirill's question is derived as follows:
We ship the InfoSetized version of the correct output (2 above),
not the version originally submitted by the submitter (1 above).
I thought that the same name(s) would be used for both the shipped
reference output file and the "actual" output generated on-the-fly as
the test is run. In Iron Man (and Germanium Man), the <output-file>
element(s) thus designate both. If we want to modify that design, now
is the time! Please be ready to comment at our 11 July meeting about
whether the catalog should:
(A) Specify the name shared by both actual and reference files,
(B) Use the same front part of the name for both and use filename
    suffixes (like .xml and .ref) to distinguish the two, or
(C) Have separate file names and expand <output-file> to hold the
    names of both actual and reference.
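For concreteness, option (C) might expand <output-file> along these lines. This is a hypothetical sketch only; the attribute names are illustrative and not part of Iron Man:

```xml
<scenario operation="standard">
  <!-- Hypothetical: separate names for actual and reference files -->
  <output-file actual="attrib01.out" reference="attrib01.ref"/>
  <!-- One case may emit several outputs, hence several elements -->
  <output-file actual="attrib01-msg.out" reference="attrib01-msg.ref"/>
</scenario>
```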
Remember that one test case may generate more than one output file,
and the catalog entry for a case may hold multiple <output-file>
elements. Incidentally, the InfoSetized outputs are always XML, so
one filetype suffix should handle all, with a possible exception for
console output.

We may wish to discuss structure of the directory tree of reference
outputs before deciding the above. Ken: please put this on the
agenda. I can generate a post-Iron design right away once we decide.

I can also arrange a guest appearance by our automation guy, if you want
to discuss how the test harness would step through all the results. We
may want to supply a stylesheet that creates a batch/script file from
the test catalog, with semi-generic command lines for the comparison of
each anticipated output.
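The generation step could be as simple as walking the catalog and emitting one comparison command per <output-file>. A sketch follows; the catalog element names, the ref/actual directory layout, and the "canoncmp" tool are assumptions for illustration only (and the real version would likely be the stylesheet mentioned above rather than Python):

```python
import xml.etree.ElementTree as ET

def emit_compare_script(catalog_path):
    """Emit one comparison command line per <output-file> in the catalog.

    Assumes option (B)-style naming: reference and actual outputs share
    the front part of the name and live in separate directories.
    """
    lines = []
    tree = ET.parse(catalog_path)
    for case in tree.getroot().iter("test-case"):
        for out in case.iter("output-file"):
            name = out.text.strip()
            # "canoncmp" is a placeholder for the shipped compare tool.
            lines.append(f"canoncmp ref/{name} actual/{name}")
    return "\n".join(lines)
```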

Another implication of Ken's supposition/recollection of how compare
happens is that "normalization" is split. We do the InfoSetizing of the
reference output files, but the Test Lab has to do canonicalization after
downloading the suite. In fact, they must canonicalize platform-specific
reference files (line 3 of the quoted text at top of this document) for
every OS or other variation in the test environment that affects file
content. They only need to do that once after downloading, whereas both
InfoSetizing (4 above) and canonicalization (between the lines of 4 and 5)
must be done on the actual outputs every time as part of running the test
suite. This has implications for the test tools and/or stylesheets we
ship. We also need to be clear about the requirements for file compare
(5 above).
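The once-versus-every-run split might look like this in the harness. Again a sketch only: the directory layout and tool callables are placeholders, not agreed names:

```python
from pathlib import Path

def canonicalize_references_once(ref_dir, canonicalize):
    """One-time step after download (repeat per platform variant that
    affects file content). `canonicalize` is a placeholder callable."""
    for ref in Path(ref_dir).glob("**/*.xml"):
        canonicalize(ref)

def per_run_steps(actual_output, infosetize, canonicalize):
    """Run on every actual output, every time the suite executes:
    InfoSetize first (step 4), then canonicalize (between 4 and 5)."""
    return canonicalize(infosetize(actual_output))
```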

Remember that even though we can InfoSetize the reference-output file,
thus making it XML every time, the Test Lab must InfoSetize the actual
output, which can be XML, HTML, text, or other types later. Therefore,
it is still necessary for the "compare" attribute of the scenario to
show all filetypes, so that the harness will know which InfoSetizer to
invoke.
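That dispatch could be a simple table keyed on the compare value. The values and tool names below are illustrative assumptions, not an agreed vocabulary:

```python
# Map each "compare" value in the catalog to the tool that InfoSetizes
# that output type. Tool names here are placeholders.
INFOSETIZERS = {
    "XML": "infoset-xml",
    "HTML": "infoset-html",
    "Text": "infoset-text",
}

def pick_infosetizer(compare_value):
    """Return the InfoSetizer for a scenario's compare value."""
    try:
        return INFOSETIZERS[compare_value]
    except KeyError:
        raise ValueError(f"no InfoSetizer known for compare={compare_value!r}")
```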
.................David Marston
