OASIS Mailing List Archives

 



xslt-conformance message



Subject: RE: Questions on the Submission and Review Policies



> From: Cris Cooley [mailto:ccooley@overdomain.com]
> Sent: Wednesday, June 20, 2001 10:35 PM

Hope you had a great vacation.  I am jealous, but I just got back from
walking my dog on the beach, so I am not too jealous.
Lynda
714 368 0985
>
> [Q: Has anyone from the committee submitted anything in addition to these
> bullets?  (I haven't seen anything, so I assume no...)
> A:   No, not that I know of ]
>
>
> 1. Submission Policy (bullets with questions)
>
> [Q: What suggestions do you have for an introductory paragraph?
> A:  Encourage submissions from anyone
> 	Microsoft, Lotus / IBM, and Michael Kay so far
> 	Check the website for more info
> 	The more tests we get, the more complete a job we can do  ]
>
>
> The following ideas were captured regarding submission process
> and practice:
>
>   - prefer atomic tests for 1.0
> [Q: What are atomic tests?  Who prefers them?  Prefers them to what?
> A:   Tests a specific part of the spec - not a combination of testable
> statements.
> 	Atomic testing is preferable because it immediately pinpoints the
> problem, not one statement or the other or their interaction.
>  ]
> [Q: What does 1.0 refer to?  The test suite?  Is this defined?
> A: 	The version of our test suite ]
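The atomic/composite distinction above can be sketched in code. This is a minimal illustration, not committee tooling: the `Test` class, its field names, and the spec citations are all hypothetical; the only idea taken from the discussion is that an atomic test cites exactly one testable statement.

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    """Hypothetical catalog entry for a submitted test."""
    test_id: str
    # Spec statements this test exercises, e.g. ["XSLT 7.6.1"] (illustrative).
    testable_statements: list = field(default_factory=list)

def is_atomic(test: Test) -> bool:
    """An atomic test targets exactly one testable statement of the spec;
    tests combining statements (composite) are deferred past suite 1.0."""
    return len(test.testable_statements) == 1
```

When an atomic test fails, the offending statement is pinpointed immediately; when a composite test fails, the cause could be either statement or their interaction.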
>
>     - target specific language issues, not composite issues
> [Q: What language?  XSLT?
> A: yes & XPath
> Q: What are composite issues?
> A: 	the combination of two specific testable statements in the spec]
>
>     - consider others later
> [Q: Other what?  issues?  What other issues?  When should they be
> considered?
> What boundaries exist on what we should consider and when we
> should consider
> it?
> A: consider tests with combinations of testable statements and their
> interaction later	]
>
>   - committee reserves right to exclude any test submitted
> [Q: What possible reasons might the committee have for exclusion?
>  Is there
> any
> formal process for notifying the submitter of exclusion?
> A: not atomic; not part of the spec but an extension; out of scope, i.e.
> deals with output.
> The submission policy tells them we may not use them all, or any for that
> matter]
>
>   - prefer no "should" decisions for 1.0 suite of tests
> [Q:What does this mean? How does this impact submitters?
> A:	things the spec says a processor should do but is not required to]
>
>     - target only explicit processor choices, not unspecified areas of
> Recommendations
> [Q: Does this refer to W3C Recommendation on XSLT?  Does this
> mean the test
> suite will only target choices made by the XSLT processor vendors?  Why?
> (limitation on scope?  other?)
> A:	in some areas the processor can do whatever it desires.  We will let
> vendors give us their answers to discretionary areas, but this is different]
>
>   - test identification:
> [Q:What is test id?
> A:	unique id provided by submitter specified in catalog I think]
>
>     - use test file hierarchy
> [Q: What is the test file hierarchy?  Use it for what? (for id?)
> Who should
> use
> it?
> A:	Committee]
>
>     - base hierarchy on root directory of submitter
> [Q: What hierarchy?  (Assume "base" is a verb...?)  By "root
> directory" you
> must mean the directory designated by submitter on their server
> (regardless
> of whether it is actually the root of any server drive) ?
> A:	file structure - root and subdirectories with ids]
>
>     - submitter welcome to arrange subdirectories as they wish
> [Q: Is there a connection between the test file hierarchy & the
> subdirectory
> hierarchy?  Are they the same?  Different?  How?
> A: same - no need to make vendors conform	]
>
>     - each test will have a unique identifier as well
> [Q: What is the identifier?  As well as what?  By test, do you mean each
> test file or each submission or test performance, or something else?
> A: 	same unique ID as above]
>
>     - each submitter will be assigned a unique identifier
> [no question ]
>
>     - final test identifier will be concatenation of submitter
> and test ids
> [Q: What is the difference between the test and the final test?  By
> "identifier" do  you mean file name?  element name?  something else?
> A: 	same as above, e.g. Lotus1.2.3]
>
>   - test scope will be identified by Recommendation and date
> [Q: What is test scope?  Does "Recommendation" = W3C xslt Recommendation?
> date of what?  Submission?  Other?
> A: 	which version of the XSLT & XPath recommendations and errata ]
>
> -	of recommendation itself
> [Q: What does this mean?  (I don't see what it connects to...)  Is this
> talking about the W3C xslt Recommendation?  (not capitalized = something
> different?)
>
> -	of modified date of errata document
> [Q: What does this mean?  What is it connected to?  What is
> errata document?
> Why is the date modified?
> A: Clarifications of the spec, etc.	]
>
> [Q: What else should be included in the Submission Policy?
> A:    We should review at next meeting with your prose]
>
>
> 2. Review policy
>
> [Q: What suggestions do you have for an introductory paragraph?
> A:  Highlights of the submission review process; the fact that at least 2
> committee members check off on each test  ]
>
>   1 - judge the eligibility of a test by:
> [Q: What is eligibility?
> A: included in suite - not excluded as out of scope etc.	]
>
>       - accuracy of test
> [Q: What does accuracy mean?  What is the baseline for
> determining it?  What
> is
> the means for measuring it?
> A:	test does what it says it tests.  Committee members concur]
>
>       - scope of test
> [Q: (Already asked) what is scope?
> A: (above)	]
>
>       - clarity of test
> [Q: What is clarity?  How is it measured?
> A:	committee members assess whether it does what it is intended to do.
> Clear enough to be easily assessed]
>
>       - clarity of aspect of recommendation being tested
> [Q: What is clarity of aspect?  Does this refer to W3C Recommendation?
> A:Yes	]
>
>       - should/shall use in the recommendation
> [Q: What does this mean?  What recommendation?  Who should/shall?
> A:	]
>
>       - is the test testing a discretionary item?
> [Q: What is a discretionary item?  Defined where by whom?
> A: the spec defines discretionary items; the vendor can provide answers on
> how they handle these items.  See website for discretionary behaviors doc	]
>       - atomic/molecular nature of the test
> [Q: What does atomic mean? (asked above) What does molecular
> mean?  What is
> meant by nature (specifically)?
> A:	same - one specific requirement of the spec]
>
>   2 - judge each eligible test through a process
> [Q: Who is judging?  Who is being judged?  What process?  Where defined?
> A:committee is judging the test using this process]
>
>       - run through multiple processors
> [Q: xlst processors?  Whose?  Which one is the benchmark or
> baseline, or is
> there one?
> A:	we will use several, i.e. the top 5 or 10.  If all behave the same,
> good; otherwise analyze]
>
>         - any differences imply examination by committee
> [Q: Differences between what & what? (submitter expected output and user
> actual output?)  What does this mean "imply"?  Who is the committee?  This
> xslt conformance committee?  What sort of examination?
> A: This XSLT conformance committee will decide as experts and try for
> consensus	]
>
>         - consensus opinion to accept the test, reject the test, or defer
> deciding on the test while the issue is forwarded to the W3C for
> clarification
> [Q: What does this mean?
> A: we will not interpret the spec - leave that to the W3C	]
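The review flow above (run each candidate test through several processors; agreement means acceptance, any difference goes to the committee, which by consensus accepts, rejects, or defers to the W3C) might be sketched as follows. The processor names and outcome labels are illustrative assumptions.

```python
def review_test(outputs: dict) -> str:
    """outputs maps a processor name to the serialized result of running
    the candidate test through that processor (e.g. the top 5-10).

    Identical output from every processor means the test passes this
    step; any disagreement sends it to the committee, which by consensus
    accepts it, rejects it, or defers it to the W3C for clarification.
    """
    if len(set(outputs.values())) == 1:
        return "accept"
    return "committee-review"  # consensus outcome: accept, reject, or defer
```

Note that no single processor serves as the baseline: agreement among all of them is the signal, and disagreement triggers expert analysis rather than an automatic verdict.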
>
>         - possible actions:
>           - reject test and update errata and errata exclusion
> [Q: What does it mean to reject a test?  In what form is rejection
> communicated?
> What is included in the rejection message?  What errata?  What exclusion?
> A: I don't know	]
>
>           - reject comment with advice to go to W3C if the
> submitter is not
> convinced
> [Q: What does this mean "reject comment"?  Who advises the submitter?
> Convinced
> of what?
> A: Submitter is not notified	]
>
>           - forward to W3C for clarification
> [Q: What is forwarded?  By whom?  Clarification of what?
> A:	vague areas of the spec]
>
>       - accommodate external comment from the community at large
> [Q: Who will make this accommodation?  How?  Comment on what?  Who is the
> community-at-large (specifically)?
> A:	public meeting, XSL & XMLDev mailing lists, invited experts]
>
>         - committee publishes consensus opinion of response to
> comment with
> justification from Recommendation (not just precedence of how a processor
> has acted)
> [Q: none]
>
> [Q: What else should be included in the Review Policy?
> A:      ]
>
>
>   3 - game plan for tests
>       - a member will report to the list the hierarchy of tests undertaken
> for comparison with multiple processors
>       - tally of tests will be tracked on a visible web page for the
> committee
>       - members report that all tests in a given hierarchy have been
> examined, incl. a summary of findings of tests not to be included in the
> resulting suite
>       - a given hierarchy is not considered complete until reports from at
> least two members have been submitted
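The sign-off rule in item 3 could be tracked with something as simple as the following sketch; the report structure is an assumption, while the two-member threshold comes from the text above.

```python
def hierarchy_complete(reports: dict, hierarchy: str) -> bool:
    """reports maps a test hierarchy name to the set of committee members
    who have reported examining all of its tests.  A hierarchy counts as
    complete only once at least two members have reported on it."""
    return len(reports.get(hierarchy, set())) >= 2
```

A tally like this is what the visible web page for the committee would display per hierarchy.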
>
>   4 - public review
>       - initial suite of a very small set of files will be used to test
> procedures and scripts and stylesheets
>       - committee will publish draft work periodically, starting with very
> small set
>       - committee will solicit comments on usability of the product
>       - committee will publish a disposition of comments
>       - committee progresses on the testing of files until all hierarchies
> covered
>
>
> ~~~~~~~~~~~~~~~~~~~~~~~
> Crisman Cooley
> Overdomain, LLC
> ccooley@overdomain.com
> 805-683-0938 tel
> 805-570-5474 cel
> www.overdomain.com
> ~~~~~~~~~~~~~~~~~~~~~~~
>




