xslt-conformance message



Subject: Revised Submission & Review Policy docs and draft Submission Manual


All,
The Submission Policy, Review Policy, and Submission Manual are included
in the body of this email.  The "Submission Manual" is an attempt, based
on Ken's suggestion, to separate true policy issues from logistics.  It
isn't really a manual yet, but it should be close to what a Submitter
needs to make a submission.  I welcome any additions and corrections you
may make.  The Submission and Review Policies incorporate all previous
discussions.  (With one exception: the issue David & Carmelo were
discussing about how to handle differences of opinion between the two
Reviewers; I welcome any further comments on that.  I would have
included the discussions to date, but I experienced a massive hard drive
failure yesterday which gobbled up 2 weeks of email.  Luckily I backed
up everything else!)  I have included some editorial questions in square
brackets; please respond to these as well as providing other feedback.
I'll need your comments by Thursday of this week or so.  Thanks!

Here are a few thoughts about how to represent & render these documents.
Worthy design goals:
0. Use open data standards (well...not Standards, but you know what I mean).
1. Separate the abstraction from the rendition.
2. Keep a single up-to-date version of the abstraction.
3. Make it easy to create new renditions based on a changed abstraction.
4. Make it easy for other people to change the document abstraction &
rendition.


I propose to create the .txt file abstractions below as simple xml files and
to build a single xslt stylesheet to render them into html.  The xml file
would look something like the example below.  Please give me your comments
on this approach.  Element names can be changed to conform to any document
dtd or schema that the group favors.

<document>
	<title/>
	<text>
		<subtitle/>
		<section/>
	</text>
</document>
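To illustrate goals 1-3, the rendition step could be sketched as below.  This
is only a Python stand-in for the proposed XSLT stylesheet; the HTML mapping
and the sample content are my assumptions, with element names taken from the
example above.

```python
# Sketch: render the proposed XML abstraction into a simple HTML
# rendition.  A stand-in for the XSLT stylesheet; the element names
# (document/title/text/subtitle/section) follow the example above,
# and the HTML mapping is an assumption.
import xml.etree.ElementTree as ET

def render_html(xml_text: str) -> str:
    # Parse the abstraction and emit one HTML string (the rendition).
    doc = ET.fromstring(xml_text)
    parts = ["<html><body>"]
    parts.append("<h1>%s</h1>" % doc.findtext("title", ""))
    for text in doc.findall("text"):
        subtitle = text.findtext("subtitle")
        if subtitle:
            parts.append("<h2>%s</h2>" % subtitle)
        for section in text.findall("section"):
            parts.append("<p>%s</p>" % (section.text or ""))
    parts.append("</body></html>")
    return "".join(parts)

sample = """<document>
  <title>Submission Policy</title>
  <text>
    <subtitle>Introduction</subtitle>
    <section>The Committee welcomes submissions.</section>
  </text>
</document>"""

print(render_html(sample))
```

A changed abstraction only requires re-running the transform, which is the
point of keeping a single up-to-date abstraction.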

Thanks to Lofton for sending out the .svg example.  If anyone wants to
abstract a matrix from the Review Policy, I'll build an .html form to render
it.

Cris

***************Submission Policy*****************
Submission Policy


Introduction

Since the World Wide Web relies on technological interoperability, the
need arises, both for vendors and for product users, for testing product
conformance to the W3C specifications.  The objective of the OASIS
XSLT/XPath Conformance Committee ("Committee") is to develop a test suite
for use in assessing the conformance of XSLT processors to the technical
specifications contained in the Recommendations of the W3C (called the
"Specification" in this document).  The full text of
this Submission Policy and its companion, the Review Policy, are
available online at www.oasis-open.org/committees/xslt.  The Committee
welcomes submissions of test cases from all vendors
and other interested parties.  Tests will be considered for inclusion in
its test suite (according to the Review Policy) on a case-by-case basis.
The Committee will work toward thorough coverage by accumulating
submitted tests.  The quality and comprehensiveness of these test
submissions will determine how robust the test suite will be.

The Committee encourages all test submissions.  The purpose of these
Guidelines is
to inform Submitters of what the test suite is meant to do and which tests
are more likely to be included in the test catalog, given these design
criteria.
The Committee also encourages Submitters to prepare follow-up submissions,
including repairs to individual tests and significant test expansions.


Submission Guidelines

The first four Guidelines define the scope of the test suite.  [check
this definition...]

1. Submitters' tests should test only a single citable requirement in the
Specification.

The Conformance Test Suite version 1.0 is designed so that failure of a
single test identifies, in most cases, non-conformance to a single
assertion in the Specification.  Recommendation citations are in the
form of XPath expressions pointing to testable statements in the XML
working group source documents from which the HTML W3C documents are
produced.

In a comprehensive test suite, each testable assertion in the
Specification should be tested independently of each other assertion,
to the extent possible.  If more than one assertion is tested at a
time, a failure may not clearly indicate what specifically is wrong.
Non-conformance to a single assertion is much easier to identify and
to resolve.

If a test follows the first guideline above, one failure points out a
singular instance where the processor does not conform to the
Specification (called "non-conformance" in this document).  The
converse is not true: one non-conformance may cause failure of dozens
of tests that involve various invocations of the non-conforming
situation.  (Example: XT doesn't support xsl:key, so XT will be judged
non-conformant in that respect.  Other processors may implement keys
quite well, but have a certain problem exposed in one case.  That case
may have to include compound assertions from the Specification if
several singular assertions must interact in a certain way to expose
the non-conformance.)  [David: not sure I translated this
accurately...please check.]

Some assertions in the Specification are irreducibly compound by nature; in
that case,
a compound test is required.
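The one-failure-per-assertion rationale can be sketched by grouping failing
tests by the assertion they cite, so a single non-conformance shows up as one
cluster of failures.  The test IDs and citation XPaths below are invented for
illustration.

```python
# Sketch: group failing tests by the Specification assertion they
# cite.  One non-conformance (e.g. no xsl:key support) produces many
# failing tests, but they all point at the same citation.
# Test IDs and citation XPaths are invented for illustration.
from collections import defaultdict

failures = [
    ("key-001", "/spec/div2[14]/p[1]"),   # these three all cite the
    ("key-002", "/spec/div2[14]/p[1]"),   # same xsl:key assertion
    ("key-003", "/spec/div2[14]/p[1]"),
    ("sort-007", "/spec/div2[10]/p[3]"),  # an unrelated failure
]

by_assertion = defaultdict(list)
for test_id, citation in failures:
    by_assertion[citation].append(test_id)

for citation, tests in sorted(by_assertion.items()):
    print("%s: %d failing test(s)" % (citation, len(tests)))
```

Four failures reduce to two distinct non-conformances, which is what makes
single-assertion tests easier to diagnose.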

2. The tests should target specific XSLT/XPath language issues.

The tests should be aimed at the language features and versions that
are included in the Specification.  Issues that cause parser errors or
that involve other W3C specifications that are out of scope for the
current test suite should not be included.  If such tests are
submitted, the Committee may not run or include them; this covers
tests involving multiple assertions of the Specification, composite
issues, parser issues, and errata of the Specification.  Tests whose
point is to reveal mistakes in parsing the input or in serializing the
output should be excluded.

3. The tests should target "must" provisions of the Specification, not
"should" provisions.

The Specification contains some assertions (or requirements) that are
mandatory ("must") and some that are optional ("should").  For version
1.0 of the test suite, the Committee is concerned with "must"
requirements.  "Should" provisions are at the discretion of the
implementer.  While the Committee welcomes submissions of all kinds,
those testing "should" provisions may not be included in final test
results.

4. A test should target only explicit processor choices, not unspecified
areas of the Specification.

There are areas of the Specification that do not specify what a
processor needs to do, so it is impossible to test what processors
actually do in those areas.  In other areas the processor is given a
choice regarding how it behaves.  The remaining areas are
unconditionally required behaviors.

The suite will differentiate test cases based on choices made by the
Submitter.  The Reviewers need to know if a test corresponds to a
particular choice made available to the processor.  (These will be
enumerated in the information included with the catalogue document
model.)  The completed test suite will test that portion of the
Catalog of Discretion that is deemed "testable" and where a question
or two can clearly elicit the choice made by the developer.

5. Later versions of the test suite may allow a wider range of tests.

Although, as noted in Guideline 2 above, Version 1.0 of the test suite
will include tests of single Specification assertions of in-scope
language issues, later versions of the test suite may include a broader
scope of issues and may choose to include a wider range of tests.

6. The Committee reserves the right to exclude any test submitted.  Tests
submitted to Version 1.0 of the test suite may be rejected if they do not
comply with these guidelines.

Please see the Review Policy for a full description on how the Committee
will judge eligibility of a test (www.oasis-open.org/committees/xslt).


7. In those instances where a Submitter's overall submission contains
a test or tests whose creator(s) will be making a separate submission,
the Submitter should filter out those tests so they are not submitted
twice.

The Submitter should send the tests it created, plus any tests others
created that are both 1) free and clear for such use and 2) that the
Submitter doesn't believe the Committee will already have.


8. The tests will become public. No royalties will be associated with their
use.

The Committee intends to retain the personal names of
Submitters so they may get public credit for their work.


**************end***************

***********************Review Policy********************
Review Policy

Reviewers should refer to the submission guidelines in the Submission
Policy, available online at www.oasis-open.org/committees/xslt.  The
tests in version 1.0 of the Conformance Test Suite should fail when
the processor is non-conformant with a single in-scope "must"
provision of the Specification (see Submission Policy Guideline 1).
All accepted tests are intended to test conformance; when a processor
can fail the test and still produce the anticipated result, that test
should be excluded.  To the extent possible, Committee Reviewers
should remove tests exhibiting interpretive behaviors.  This will
result in equal application of the Review Policy criteria by all
involved, thus producing a consistent, quality work product.

Differences between Submitter and Reviewer output will be examined by
the Committee, which will reach consensus to 1) accept the test, 2)
reject the test or 3) defer deciding on the test while the issue is
forwarded
to the W3C for clarification.  (See Review Procedure 6 for more details.)


Review Procedures

1. At least two Reviewers will check off on each test.  Only the assessment
of a single member is required for the test to be included in the draft
release.


2. Ineligible tests (by definition) should be rejected.

Eligibility is the quality by which a candidate test submitted by a
Submitter is judged to determine whether it ends up in the test suite
as published by the Committee.


3. Eligibility should be judged by the following:

	3.1 The accuracy of the test.
	Accuracy of a test is determined by a judgement of the Reviewer.
	Accuracy is defined as the extent to which the test case actually
	tests what the Submitter states the test case tests.  Accuracy is
	measured against the baseline of the cited parts of the
	Specification.  If it does not match, or only partially matches,
	the test should be considered inaccurate.

	This determination is made by the Reviewer's interpretation of the
	Recommendation, and if necessary, the opinion of the Committee as a
	whole, and if necessary, the definitive assessment of the W3C
	Working Group.

	3.2 The scope of the test.
	See the Submission Policy for a definition of the scope of the test suite.

	3.3 The clarity of the test.
	Clarity of a test is a determination of whether the aspect being
	tested is clearly described, with the anticipated results acceptably
	explained.

	3.4 The clarity of the aspect of the Specification being tested.
	The Test Suite aims to test parts of the Specification and errata
	that aren't vague.

	3.5 "Shall"/"should" usage in the Specification.
	This is the "must" versus "should" distinction discussed in the
	Submission Policy.  The test must clearly address a requirement in
	the Specification that is a "shall" requirement and not a "should"
	requirement.

	3.6 Whether the test tests a discretionary item.
	The Committee has developed a Catalogue of Discretion, which
	includes a listing of all options given to developers of the
	technology in the Specification.  See the website for a list of
	discretionary items (www.oasis-open.org/committees/xslt).  Not all
	discretionary items are testable.

	3.7 The simple or compound nature of the test.
	Simple and compound tests are described in the Submission Policy.


4. Judge each eligible test through the process below.


5. Run each test through multiple processors.
Although there is no reference implementation, the Committee will form
consensus on which of the prominent processors to use.  The baseline
is unanimity of their results, as reduced to infoset-equivalence.
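A minimal sketch of that reduction, using canonical XML (C14N) from the
Python standard library as a stand-in for whatever infoset comparison the
Committee adopts; the sample outputs are invented.

```python
# Sketch: reduce processor outputs to "infoset-equivalence" by
# canonicalizing them (C14N) before comparison, so that differences
# in attribute order, quoting, or empty-element syntax do not count
# as disagreement between processors.  A stand-in for the Committee's
# actual comparison rules; the sample outputs are invented.
from xml.etree.ElementTree import canonicalize

def infoset_equal(*outputs: str) -> bool:
    # Unanimity: all canonical forms must collapse to one string.
    canon = {canonicalize(xml_data=o) for o in outputs}
    return len(canon) == 1

# Same infoset, different attribute order, quoting, and empty-element
# syntax:
a = '<out a="1" b="2"><x/></out>'
b = "<out b='2' a='1'><x></x></out>"
# A genuinely different result:
c = '<out a="1" b="3"><x/></out>'

print(infoset_equal(a, b))  # equivalent despite surface differences
print(infoset_equal(a, c))  # a real disagreement
```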


6. Differences between the infoset-equivalent forms of Submitter and
Reviewer output will trigger examination by the Committee.


The Committee will then reach a consensus opinion to accept the test,
reject the test, or defer deciding on the test while the issue is
forwarded to the W3C for clarification.

A test can be rejected by Reviewers even if all prominent processors
give the same result, when the test is not a true conformance test.
If Reviewers think it's a good test but different reliable XSLT
processors give different results, the issue may be the
Specification's verbiage, processor bugs, or an unclear requirement.

There are several possible (non-exclusive) actions:

	6.1 Reject the test and update the errata and errata exclusions.
	The test would then be excluded from the published collection.  The
	Test Suite control file dictating which submitted tests are excluded
	from the published collection is updated.  Furthermore, issuance of
	an erratum actually gives us a way to include the test case, subject
	to filtering out at the final stage of "rendering" a suite for a
	particular processor.

	6.2 Reject the test with advice to go to the W3C.
	In this case, the Submitter thinks the test is accurate, but the
	Committee finds the test is not accurate and the Recommendation is
	clear enough that we needn't bother the W3C with an interpretation
	issue.  Rejection requires consensus of the Committee.  [or is it
	only consensus of 2 Reviewers?]

	This scenario begins when the Submitter looks at the Committee
	report, sees that a particular case submitted was excluded, and
	writes to ask why.  The Reviewer will respond to explain.  The
	response includes a reference to the W3C's mail alias for questions
	about the Specification.

	6.3 The test case is forwarded to the W3C for clarification.
	If the above options do not avail, the Committee can forward the
	test to the W3C for clarification.

	6.4 Additionally, the Committee may wish to accommodate external
	comment from the community at large.

	6.5 The Committee will publish a consensus opinion in response to
	each comment, with justification from the Recommendation (not just
	the precedent of how a processor has acted).


7. During the testing process, Reviewers will do the following:

	7.1 A Reviewer will report to the list the hierarchy of tests
	undertaken for comparison with multiple processors.

	7.2 A tally of tests will be tracked on a web page visible to the
	Committee.

	7.3 Reviewers report that all tests in a given hierarchy have been
	examined, including a summary of findings for tests not to be
	included in the resulting suite.

	7.4 A given hierarchy is not considered complete for a final
	release until reports from at least two members have been
	submitted.  A given hierarchy may be included in a draft only
	after at least one member's report is submitted.


8. During the testing process, the Committee will invite public review:

	8.1 An initial suite of a very small set of files will be used to
	test procedures, scripts, and stylesheets.

	8.2 The Committee will publish draft work periodically, starting
	with a very small set.

	8.3 The Committee will solicit comments on usability of the product.

	8.4 The Committee will publish a disposition of comments.

	8.5 The Reviewers will continue testing the files until all the hierarchies
are covered.

************end****************

*****************Submission Manual*********************
Submission Manual

[We should complete and update this if it isn't up-to-date or
comprehensive.]

Submission Logistics

The Committee aims to make submission easy both for Submitters and
for test labs.  To this end it is necessary to organize each test case
within a
submission and each submission within the catalog of submissions.  This
process includes the necessity of producing a unique fully-qualified
name for every test case.  If that is not available from the Submitter,
the Committee will rely on the Submitter's subdirectories for the
distinction.

1. The Committee will assign a unique test identification (ID) for its
use during submission processing.

The ID is unique within the Submitter's collection and will be
qualified so that it is unambiguous across the set of collections.
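One way to make such IDs unambiguous across collections is to qualify each
test's collection-relative path with the Submitter's name.  A sketch; the
submitter and file names are invented for illustration.

```python
# Sketch: qualify a collection-unique test ID with the Submitter's
# name, so it is unambiguous across the whole set of collections.
# Submitter and path names are invented for illustration.
from pathlib import PurePosixPath

def qualified_id(submitter: str, relative_path: str) -> str:
    # Prefix the Submitter's collection-relative path with its name.
    return "%s/%s" % (submitter, PurePosixPath(relative_path))

# Two submitters may reuse the same relative path without collision:
print(qualified_id("Lotus", "axes/axes001.xsl"))
print(qualified_id("Arbortext", "axes/axes001.xsl"))
```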


2. The Committee will create a directory tree of test cases, based on
the Submitter's ID where the first level down the tree separates out the
tests by Submitter. [Is this still accurate?]

From this top-level directory will descend each Submitter's test file
hierarchy.  The test file hierarchy is the hierarchy of test files
submitted by the Submitter.  The presence of a hierarchy assumes that
the Submitter does not want to collect all test files in a single
subdirectory.


3. The Submitter is welcome to arrange its subdirectories as it wishes.

The Committee will preserve the directory structure of the submitted
stylesheets and data.  The Committee will collect all the files and
make them available in the final collection, mirroring the Submitters'
subdirectories.  Correct output files and parameter-setting files will
be located in specified directories of the mirrored file hierarchy.
[Can we be more specific?]


4. The Committee suggests that the Submitter should give
each test a unique identifier as well.

This guideline reinforces that all test cases must be uniquely identified.

A Submitter's submitted test case, if not rejected based on the
Review Policy criteria, will be published as the Committee's final
test case.


5. The test scope will be identified by the Specification
version and date.

As the W3C Recommendations evolve, a particular test may
not apply to all versions.  The test suite will contain
pointers to the parts of the Specification containing the
pertinent sentences.  When a processor fails a case, a
Submitter will be able to use the citations to find a
sentence in the Specification that the processor violated.

6. The test scope will also be identified by the modified date of the
errata document.

W3C Recommendations have associated errata documents that are
published to correct errors in the text of the documents.  An errata
document is a summary of issues identified and resolved by the
responsible W3C Working Group.  Multiple errata documents may be
published, each with a date.

7. In cases where the Submitter is resubmitting new tests along with
tests that are unchanged since the previous submission, the Reviewers
will filter out the new tests from the old.  [Is this correct?  How will
Reviewers do this?]
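One way Reviewers might do this filtering is to compare content hashes
against a manifest kept from the previous submission.  A sketch; the file
names and contents are invented for illustration.

```python
# Sketch: separate new or changed tests from unchanged ones by
# comparing content hashes against a manifest of the previous
# submission.  File names and contents are invented for illustration.
import hashlib

def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Manifest recorded when the previous submission was processed:
previous = {"match01.xsl": digest(b"<xsl:stylesheet/>")}

# The resubmission: one unchanged file, one new file:
current = {
    "match01.xsl": digest(b"<xsl:stylesheet/>"),
    "match02.xsl": digest(b"<xsl:stylesheet version='1.0'/>"),
}

# Anything absent from the old manifest, or with a different hash,
# needs review; the rest can be carried over.
new_or_changed = sorted(
    name for name, h in current.items() if previous.get(name) != h
)
print(new_or_changed)
```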

*************end****************






