Subject: Re: draft proposal 0.3 - Deliverables
From: david_marston@us.ibm.com
To: oiic-formation-discuss@lists.oasis-open.org
Date: Wed, 30 Jul 2008 00:35:27 -0400
[I have been lurking since this list
was started. I think the discussion is reaching a point where my experience
with the OASIS Conformance TC and XSLT Conformance Testing TC may yield
some useful comments.]
The ODF Interoperability & Conformance
TC will create some new material to aid conformance testing, but they could
also gather material from outside sources into a collection. I would suggest
that Part 1b of the charter be modified from "produce materials"
to "gather and produce materials" to ensure coverage of all possibilities.
My experience with XSLT testing was that the TC gathered test cases (details
on "test cases" below) from contributors and originated documents
about testing methods. That may be a good division for the ODF I&C
TC to follow, but the charter should not be that precise.
The XSLT Conformance Testing TC planned
to deliver the following test materials:
- Test Case Metadata, as an XML file
- Test case inputs
- Reference outputs that show the correct behavior of a test case
- An outline for an Implementation Conformance Statement that would
accompany an implementation to be tested
(We didn't want to write Test Assertions,
but we experimented with post-processing the specs to isolate statements.
It wasn't as precise as we needed. A modern-day Conformance TC might want
to deliver a set of test assertions. See the activities of the OASIS Test
Assertion Guidelines TC [1] for the latest thinking on this subject.)
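To make the first deliverable above concrete, a single record in the test
case metadata file might look something like the following. This is purely
a hypothetical sketch; the element and attribute names are invented for
illustration and are not drawn from any actual OASIS schema.

```xml
<!-- Hypothetical metadata record; names are invented for illustration -->
<test-case id="table-borders-001" status="accepted" spec-version="1.1">
  <input href="inputs/table-borders.odt"/>
  <reference href="refs/table-borders-001.odt"/>
  <comparator>xml-infoset</comparator>
  <applies-to product-class="editor" feature="tables"/>
</test-case>
```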
In the above enumeration, the test case
inputs and the reference outputs are the constituents of test cases. Inputs
can be shared across test cases if the metadata specifies re-use. In effect,
the metadata is what identifies each test case, and so determines the number
of cases in the collection. Since XSLT is a processor, the test regime
is: provide the inputs specified for the case, invoke processing, capture
the output (or error message), and then compare the actual result against
the reference result (or error message specified in the metadata) using
the comparator specified for that case in the metadata. Comparing equal
indicates conformance; comparing unequal indicates non-conformance. Other
classes of product may have simpler or more complicated testing regimes.
All the test materials provided were platform-independent (XML or HTML).
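The regime described above can be sketched as a simple driver function.
Everything here is hypothetical: the shape of the `case` dictionary, the
comparator registry, and the invocation convention are stand-ins for what
the TC and test labs would actually define, not part of any real suite.

```python
import subprocess

# Hypothetical comparator registry: maps a comparator name found in the
# metadata to a function that decides whether two results are equivalent.
COMPARATORS = {
    "bytewise": lambda actual, reference: actual == reference,
}

def run_case(case, processor_cmd):
    """Run one test case and return 'pass' or 'fail'.

    `case` is a dict built from the metadata: the inputs for the case,
    the reference result (or an expected error), and a comparator name.
    """
    proc = subprocess.run(
        processor_cmd + case["inputs"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # The processor reported an error: that is conforming behavior
        # only if the metadata says an error message was expected.
        return "pass" if case.get("expected_error") else "fail"
    compare = COMPARATORS[case["comparator"]]
    return "pass" if compare(proc.stdout, case["reference"]) else "fail"
```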
Test case metadata can also provide
filtering information, to avoid running (or at least evaluating) irrelevant
cases. Test cases are annotated if they are not universal, which could
be due to a mismatch on any of the dimensions of variability from [2] or
also because they only apply to certain versions of the spec. To determine
relevance, the implementation being tested needs an Implementation Conformance
Statement, which will specify the choices made on the various dimensions
of variability. See Part 9 of [3] for a discussion of these statements.
In the ODF case, an implementation would probably specify something about
which features and/or profiles it implements, plus the spec version and
what class of product it is.
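Once both the case annotations and the Implementation Conformance
Statement are explicit, the relevance check is mechanical. A minimal
sketch, with all field names invented for illustration:

```python
def is_relevant(case, ics):
    """Decide whether a test case applies to an implementation.

    `case` carries the annotations described above; `ics` is the
    Implementation Conformance Statement reduced to a simple dict.
    Field names here are hypothetical, not from any real schema.
    """
    if case["spec_version"] not in ics["spec_versions"]:
        return False
    if case["product_class"] != ics["product_class"]:
        return False
    # Every feature the case exercises must be one the product implements.
    return set(case["features"]) <= set(ics["features"])

# Example ICS for a hypothetical editor implementing two features:
ics = {
    "spec_versions": ["1.1"],
    "product_class": "editor",
    "features": ["tables", "styles"],
}
```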
One more possible use of test case metadata
is to segregate tests that are in a preliminary state or that have been
questioned. An assessment run might exclude such cases, while developers
would want to run the preliminary ones to plan for future assessments.
Thus, the TC will need to define a set of status codes for test cases.
These codes are usually pretty similar from one conformance testing group
to the next, but vary depending on the policies about reviews and the like.
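As an illustration of how status codes separate the two kinds of runs,
consider the sketch below. The particular codes are invented; the TC
would define its own set according to its review policies.

```python
from enum import Enum

class Status(Enum):
    # Hypothetical status codes; the TC would define its own set.
    ACCEPTED = "accepted"        # reviewed, counts toward assessment
    PRELIMINARY = "preliminary"  # submitted, not yet reviewed
    QUESTIONED = "questioned"    # correctness currently under dispute

def assessment_cases(cases):
    """Only fully accepted cases count in a formal assessment run."""
    return [c for c in cases if c["status"] is Status.ACCEPTED]

def developer_cases(cases):
    """Developers also run preliminary cases to plan for future
    assessments, but still exclude questioned ones."""
    return [c for c in cases
            if c["status"] in (Status.ACCEPTED, Status.PRELIMINARY)]
```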
The other kind of deliverable is a guideline
document, such as one that tells a test lab how to finish the setup so
that the materials obtained from the TC are combined with local resources,
resulting in an executable test environment. Drafts of the charter for
this TC have mentioned "Conformance Test Requirements" or a "Conformance
Assessment Methodology Specification" as a deliverable. That document
would be the guideline addressed to the test lab. It needs to address executing
test cases and evaluating the result of each. (Terminology check: executing
the test gives a "result" such as a rendered document. After
comparing the actual result against the reference result, one then has
an "outcome" such as Pass, Fail, Inapplicable, etc.) The guideline
must be very clear about which materials from the TC must be used in order
for the lab to say it ran the OASIS ODF test suite. Likewise, it should
be very clear about what software the test lab will need to write or obtain
elsewhere. In particular, it should address the requirements for comparators.
A comparator is typically a software
module that compares two files, taking into account only those aspects
that are relevant. In other words, when the equivalence must be something
other than pixel- or bitwise-perfect, you use a comparator that has the
correct tolerance. For example, nearly all XML comparisons require equivalence
of the XML Infosets (see [4]) rather than character-by-character equivalence.
Comparators are closely intertwined with canonicalizers, which have been
mentioned earlier on this list. For some kinds of output, the comparator
must be a human being. Other situations may start as a human comparison and
become susceptible to automation in the future, so it is best to describe
the comparator by its function rather than simply say "manual"
or something equally simplistic. If this TC gains enough momentum, it may
stimulate the creation or enhancement of open-source comparators by others,
which would benefit conformance testing in general.
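The XML case shows why bytewise comparison is too strict. The sketch
below treats two documents as equivalent when their element trees match,
ignoring attribute order and insignificant surrounding whitespace. It is
a rough stand-in for Infoset equivalence, not a complete implementation
of it, and is offered only to illustrate the idea of a tolerance.

```python
import xml.etree.ElementTree as ET

def _normalize(elem):
    """Reduce an element to a comparable tuple: tag, sorted attributes,
    stripped text and tail, and recursively normalized children."""
    text = (elem.text or "").strip()
    tail = (elem.tail or "").strip()
    children = tuple(_normalize(c) for c in elem)
    return (elem.tag, tuple(sorted(elem.attrib.items())),
            text, tail, children)

def xml_equivalent(a, b):
    """Compare two XML strings, tolerating attribute reordering and
    whitespace differences that a bytewise comparison would flag."""
    return _normalize(ET.fromstring(a)) == _normalize(ET.fromstring(b))
```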
Other guideline documents could be addressed
to those who would submit test cases (see [5] for an example) or those
who want to interpret results reported by a test lab. This message is not
intended to be an exhaustive list of all reasonable deliverables for this
TC, just a refinement of the possibilities regarding test materials and
guidelines for their use.
.................David Marston
IBM Research
[1] http://www.oasis-open.org/committees/tag/charter.php
- no documents have moved beyond draft stage as of this writing
[2] http://www.w3.org/TR/2005/NOTE-spec-variability-20050831/
[3] http://www.oasis-open.org/committees/download.php/305/conformance_requirements-v1.pdf
[4] http://www.w3.org/TR/2004/REC-xml-infoset-20040204/
[5] http://www.w3.org/XML/Query/test-suite/Guidelines%20for%20Test%20Submission.html