Subject: Re: Questions on the Submission and Review Policies


At 01/06/20 22:34 -0700, Cris Cooley wrote:
>Questions (please type your answer after the "A:" before the square
>bracket).
>
>[Q: Has anyone from the committee submitted anything in addition to these
>bullets?  (I haven't seen anything, so I assume no...)

A: Not that I know of.

>1. Submission Policy (bullets with questions)
>
>[Q: What suggestions do you have for an introductory paragraph?

A:
  - invitation to submit candidate test cases for consideration
  - the full web address where the policy document will be maintained on 
the web site (in case the document is quoted out of context or has since 
been updated)
  - brief summary of the objectives of the committee and the objectives of 
submission

>The following ideas were captured regarding submission process and practice:
>
>   - prefer atomic tests for 1.0
>[Q: What are atomic tests?  Who prefers them?  Prefers them to what?

Atomic tests are tests that cover very specific individual issues, 
contrasted to "molecular" tests that may test the combination or 
interaction of individual issues in a more complex scenario.

The committee prefers obtaining focused atomic tests over molecular tests 
that may test more than one issue at a time (because a failure in a 
molecular test may not easily indicate what specifically is wrong, whereas 
a failure in an atomic test is far easier to pin down).
]
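
For illustration only (this is not a test from any actual submission), an 
atomic test might exercise a single XPath function and nothing else:

   <?xml version="1.0"?>
   <!-- Hypothetical atomic test: the only behaviour exercised is the
        XPath substring() function; expected output is "234" -->
   <xsl:stylesheet version="1.0"
                   xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:output method="text"/>
     <xsl:template match="/">
       <xsl:value-of select="substring('12345', 2, 3)"/>
     </xsl:template>
   </xsl:stylesheet>

A molecular counterpart might combine substring() with, say, keys and 
named templates; a failure there could implicate any one of the features 
involved.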

>Q: What does 1.0 refer to?  The test suite?  Is this defined?

Our first work product - version 1.0 of the OASIS XSLT/XPath Conformance 
Test Suite.

>     - target specific language issues, not composite issues
>[Q: What language?  XSLT?

Yes, and XPath.

>Q: What are composite issues?

See discussion above regarding interactions between features or functions 
used simultaneously in a "molecular" test.

>     - consider others later
>[Q: Other what?  issues?  What other issues?  When should they be
>considered?

Non-atomic tests that may be considered in a future revision of the work 
product.

>What boundaries exist on what we should consider and when we should consider
>it?

Only the prose of the submission policy.  This will hopefully prevent a 
number of out-of-scope tests from being submitted.  A committee member may 
choose to reject a contributed test if it is judged as not being atomic 
"enough".

>   - committee reserves right to exclude any test submitted
>[Q: What possible reasons might the committee have for exclusion?

See the review policy regarding judging the eligibility of a test; the 
reasons need not be copied into the submission policy document, which need 
only refer to the review policy document.

>Is there
>any
>formal process for notifying the submitter of exclusion?

I personally don't think this is necessary ... we will have enough to do 
just reviewing and packaging what we get.

>   - prefer no "should" decisions for 1.0 suite of tests
>[Q:What does this mean?

Certain areas of the Recommendations are guidelines ("should") rather than 
requirements ("must").

>How does this impact submitters?

Any test submitted for a "should" guideline will probably be judged "out of 
scope" by the committee so hopefully the submitter will reduce our work by 
not sending it in the first place.

>     - target only explicit processor choices, not unspecified areas of
>Recommendations
>[Q: Does this refer to W3C Recommendation on XSLT?

Yes, there are areas of both Recs that do not specify what a processor need 
do, thus we cannot test for what they actually do.  There are other areas 
where the processor is given a choice regarding how it behaves.  The 
remaining areas are unconditional required behaviours.

>Does this mean the test
>suite will only target choices made by the XSLT processor vendors?

The suite will differentiate test cases based on choices made by the 
vendors; thus, we need to know if a test corresponds to a particular choice 
made available to the processor (these choices will be enumerated in the 
information included with the catalogue document model).
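
To illustrate (with invented element and attribute names; the actual 
catalogue document model is still being settled), a catalogue entry might 
tie a test to one of those enumerated choices along these lines:

   <!-- Hypothetical catalogue entry; "acme" is a made-up submitter -->
   <test-case id="acme_axes_axes001">
     <discretionary item="attribute-name-not-qname"
                    chosen-behaviour="signal-error"/>
   </test-case>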

>Why?
>(limitation on scope?  other?)

Yes, to limit our work to a meaningful, useful, auditable and (hopefully!) 
manageable effort.

>   - test identification:
>[Q:What is test id?

A unique identification of a test case, provided by the submitter and 
utilized by the committee.  The scope of uniqueness is bounded by the 
submitter's collection; the committee will then qualify each identifier so 
that it is unambiguous across the set of collections.

>     - use test file hierarchy
>[Q: What is the test file hierarchy?

The hierarchy of test files submitted by the submitter (assuming the 
submitter does not want to collect all test files in a single 
subdirectory).

>Use it for what? (for id?)

Yes.

>Who should
>use
>it?

The submitter defines it and the committee uses it.

>     - base hierarchy on root directory of submitter
>[Q: What hierarchy?  (Assume "base" is a verb...?)  By "root directory" you
>must mean the directory designated by submitter on their server (regardless
>of whether it is actually the root of any server drive) ?

Not quite ... the collection of files will be packaged in a number of 
subdirectories, the apex of which is the root directory of the 
submission.  There is no dependency at all on the directory in which the 
files are prepared by the submitter.  We will probably request a ZIP 
archive built relative to the root directory, explicitly excluding any 
ancestral directories of the root directory that may exist on the 
submitter's system.

The id would be based, therefore, on the subdirectory structure descending 
from the root (not ascending).
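
For example (file and directory names invented), a submission might arrive 
packaged as:

   tests/                    <-- root directory of the submission
     axes/
       axes001.xsl
       axes001.xml
     string/
       substr001.xsl
       substr001.xml

The ZIP would be built relative to tests/, so nothing above tests/ on the 
submitter's system appears in the archive, and identifiers derive from the 
paths axes/axes001, string/substr001, and so on.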

>     - submitter welcome to arrange subdirectories as they wish
>[Q: Is there a connection between the test file hierarchy & the subdirectory
>hierarchy?  Are they the same?  Different?  How?

They are the same.  We won't know how or why the submitter chooses 
subdirectories.  We will collect all the files and make them available in 
the final collection produced by the committee, probably mirroring the 
submitters' subdirectories.

>     - each test will have a unique identifier as well
>[Q: What is the identifier?  As well as what?  By test, do you mean each
>test file or each submission or test performance, or something else?

This is probably a redundant requirement ... I think this is just 
reinforcing that all test cases submitted must be uniquely identified.

>     - final test identifier will be concatenation of submitter and test ids
>[Q: What is the difference between the test and the final test?

A submitter's submitted test case will be published as the committee's 
final test case if not rejected based on the review policy criteria.

>By
>"identifier" do  you mean file name?  element name?  something else?

The identifier discussed above.
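
As a purely hypothetical example of the concatenation (names and separator 
invented; the exact convention is for the committee to settle):

   submitter id:         acme
   submitter's test id:  string/substr001
   final test id:        acme_string_substr001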

>   - test scope will be identified by Recommendation and date
>[Q: What is test scope?

To which Recommendation the test case applies (since the Recommendations 
evolve, a particular test may not apply to all versions of the 
Recommendations).

>Does "Recommendation" = W3C xslt Recommendation?

And XPath.

>date of what?  Submission?  Other?

Recommendation.

>-       of recommendation itself
>[Q: What does this mean?  (I don't see what it connects to...)

It is a sub-bullet of scope ...

>Is this
>talking about the W3C xslt Recommendation?  (not capitalized = something
>different?)

W3C XSLT and XPath by their "number", e.g. XSLT 1.0, XSLT 2.0, XPath 1.0, etc.

>-       of modified date of errata document
>[Q: What does this mean?

Recommendations have associated errata that are published to correct 
errors in the text of the documents.

>What is it connected to?

The Recommendations.

>What is errata document?

A summary of issues identified in a Recommendation and their resolutions, 
published by the responsible W3C Working Group.

>Why is the date modified?

Multiple errata documents may be published ... each with a date.
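
Sketching again with invented markup (and a placeholder date), the scope 
information recorded for a test might then look like:

   <!-- Hypothetical scope record: the Recommendation, its version, and
        the date of the latest errata document taken into account -->
   <scope spec="XSLT"  version="1.0" errata-date="YYYY-MM-DD"/>
   <scope spec="XPath" version="1.0" errata-date="YYYY-MM-DD"/>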

>[Q: What else should be included in the Submission Policy?

That's what this process will hopefully reveal.  As the policy gets 
polished, committee members and candidate submitters may have questions and 
comments.

>2. Review policy
>
>[Q: What suggestions do you have for an introductory paragraph?

Help submitters understand what the committee is doing and why candidate 
tests they submit may be rejected by the review process.

Help committee members (new and old) with guidelines for their review 
tasks, to remove interpretive behaviours and (hopefully!) result in equal 
application of the review policy criteria by all involved, thus producing a 
consistent, quality work product.

>   1 - judge the eligibility of a test by:
>[Q: What is eligibility?

Whether or not a candidate test submitted by a submitter ends up in the 
test suite as published by the committee.

>       - accuracy of test
>[Q: What does accuracy mean?

That the test case actually tests what the submitter states the test case 
tests.

>What is the baseline for determining it?

The reviewer's interpretation of the Recommendation, and if necessary, the 
opinion of the committee as a whole, and if necessary, the definitive 
assessment by the W3C Working Group.

>What
>is
>the means for measuring it?

Understanding the Recommendation and recognizing appropriate behaviour on 
the part of the test case.

>       - scope of test
>[Q: (Already asked) what is scope?

Answered above.

>       - clarity of test
>[Q: What is clarity?

The aspect being tested is clearly described with the anticipated results 
acceptably explained.

>How is it measured?

Assessment by two or more committee members, none of whom is the submitter 
of the test being measured.  Only the assessment of a single member is 
required for the test to be included in the draft release.

>       - clarity of aspect of recommendation being tested
>[Q: What is clarity of aspect?  Does this refer to W3C Recommendation?

Yes.

>       - should/shall use in the recommendation
>[Q: What does this mean?  What recommendation?  Who should/shall?

The test clearly addresses a "shall" requirement and not a "should" 
requirement.

>       - is the test testing a discretionary item?
>[Q: What is a discretionary item?  Defined where by whom?

There is a catalogue of discretionary items that is being developed and is 
near completion.  This part should refer to the catalogue of discretionary 
items as something that each committee would develop for the technology 
being tested.
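
For illustration only (names invented; the committee's actual catalogue is 
its own near-complete work), an entry in such a catalogue might read:

   <discretionary-item id="attribute-name-not-qname">
     <question>What does the processor do when xsl:attribute is given a
     name that is not a QName?</question>
     <choice id="signal-error">signal the error</choice>
     <choice id="recover">recover by not adding the attribute</choice>
   </discretionary-item>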

>       - atomic/molecular nature of the test
>[Q: What does atomic mean? (asked above) What does molecular mean?  What is
>meant by nature (specifically)?

Answered above?

>   2 - judge each eligible test through a process
>[Q: Who is judging?

Members of the committee.

>Who is being judged?

A test case is being judged.

>   What process?

This document.

>Where defined?

Here.

>       - run through multiple processors
>[Q: xlst processors?

Yes.

>Whose?

Any of the popular ones generally thought to be well implemented.  
Consensus (or general agreement) is sufficient.  Disagreement should 
trigger more testing.

>Which one is the benchmark or baseline, or is
>there one?

There is no reference implementation ... consensus is being sought here.

>         - any differences imply examination by committee
>[Q: Differences between what & what? (submitter expected output and user
>actual output?)

Yes.

>What does this mean "imply"?

Cause/effect.

>Who is the committee?  This
>xslt conformance committee?

Yes.

>What sort of examination?

All members come to their own conclusion and we go from there.

>         - consensus opinion to accept the test, reject the test, or defer
>deciding on the test while the issue is forwarded to the W3C for
>clarification
>[Q: What does this mean

What is confusing?

>         - possible actions:
>         - reject test and update errata and errata exclusion
>[Q: What does it mean to reject a test?

It is excluded from the published collection.

>In what form is rejection
>communicated?

No plans ... any suggestions?

>What is included in the rejection message?

Not defined.

>What errata?  What exclusion?

Our control file dictating which submitted tests are excluded from the 
published collection.

>           - reject comment with advice to go to W3C if the submitter is not
>convinced
>[Q: What does this mean "reject comment"?

Probably a typo for "reject test", with advice to go to the W3C if the 
submitter is not convinced of our assessment.

>Who advises the submitter?

No plans in place ... any suggestions?

>Convinced
>of what?

They think the test is accurate; we (as a committee) agree the test is not 
accurate, and the Recommendation is clear enough that we needn't bother the 
W3C with an interpretation issue.  This requires consensus (was that 
absolute or near consensus?).

>           - forward to W3C for clarification
>[Q: What is forwarded?

The test case.

>By whom?

Our committee.

>Clarification of what?

Who is right: submitter or committee?

>       - accommodate external comment from the community at large
>[Q: Who will make this accommodation?

Volunteer from the committee.

>How?

Respond publicly to publicly made comment.

>Comment on what?

Our assessments of the tests.

>Who is the
>community-at-large (specifically)?

Anyone interested in our tests.

>         - committee publishes consensus opinion of response to comment with
>justification from Recommendation (not just precedence of how a processor
>has acted)
>[Q: none]
>
>[Q: What else should be included in the Review Policy?

Any other suggestions for fairness and ideas for making it generic for 
other committees.

>   3 - game plan for tests
>       - a member will report to the list the hierarchy of tests undertaken
>for comparison with multiple processors
>       - tally of tests will be tracked on a visible web page for the committee
>       - members report that all tests in a given hierarchy have been
>examined, incl. a summary of findings of tests not to be included in the
>resulting suite
>       - a given hierarchy is not considered complete until reports from at
>least two members have been submitted
>
>   4 - public review
>       - initial suite of a very small set of files will be used to test
>procedures and scripts and stylesheets
>       - committee will publish draft work periodically, starting with very
>small set
>       - committee will solicit comments on usability of the product
>       - committee will publish a disposition of comments
>       - committee progresses on the testing of files until all hierarchies
>covered

I assume, Cris, you don't need comments on the above.

One last issue reflecting decisions by the committee during the June 
meeting: if it is possible to write both policies independent of specific 
references to XSLT and XPath, these may be wholly used (or at least 
plagiarized) by other committees.  This is but one aspect of the 
"genericization" of our work that we discussed ... let's go through this 
painful exercise only once.

Thanks, Cris!

..................... Ken

--
G. Ken Holman                      mailto:gkholman@CraneSoftwrights.com
Crane Softwrights Ltd.               http://www.CraneSoftwrights.com/s/
Box 266, Kars, Ontario CANADA K0A-2E0     +1(613)489-0999   (Fax:-0995)
Web site:     XSL/XML/DSSSL/SGML/OmniMark services, training, products.
Book:  Practical Transformation Using XSLT and XPath ISBN 1-894049-06-3
Article: What is XSLT? http://www.xml.com/pub/2000/08/holman/index.html
Next public instructor-led training:      2001-08-12,08-13,09-19,10-01,
-                                               10-04,10-22,10-29,02-02

Training Blitz: 3-days XSLT/XPath, 2-days XSLFO in Ottawa 2001-10-01/05


