Subject: Re: [oiic-formation-discuss] Re: draft proposal 0.3 - Deliverables


2008/7/30  <david_marston@us.ibm.com>:

> The ODF Interoperability & Conformance TC will create some new material to
> aid conformance testing, but they could also gather material from outside
> sources into a collection. I would suggest that Part 1b of the charter be
> modified from "produce materials" to "gather and produce materials" to
> ensure coverage of all possibilities. My experience with XSLT testing was
> that the TC gathered test cases (details on "test cases" below) from
> contributors and originated documents about testing methods. That may be a
> good division for the ODF I&C TC to follow, but the charter should not be
> that precise.

Doesn't that put the group into a software repository management role?
That smells quite different from the ideas gathered from this group so far, David.

I like the idea of 'gathering material' (when translated into plain English).
I'll add that to the wiki pages; it's basically a continuation of what this
group has been doing over the last seven weeks.

Added to our deliverables:
14. Collate and publish conformance and interoperability testing information
    resources.

Is that OK with everyone? It's deliberately broad, since such information can
come from outside the group as well as inside it.


And I'm really unsure about citing XSLT testing as a good example;
its work is nearly invisible in XSLT circles.

>
> The XSLT Conformance Testing TC planned to deliver the following test
> materials:
> Test Case Metadata, as an XML file

Any definitions? Metadata about what? Do you mean a test specification?



> Test case inputs
> Reference outputs that show the correct behavior of a test case

-1
Rationale: if we define fixed expected data, implementations can be built
simply to pass those cases. If we leave the detail up to the implementer,
then vendors are left guessing as to what the data might be. In any case,
quite a bit of the 'expected' output is application behaviour (view changes),
so fixed reference outputs are impractical in many cases.


> An outline for an Implementation Conformance Statement that would accompany
> an implementation to be tested

A test report? 'We ran these tests, this is the pass/fail list, and here are
our excuses for the ones that failed'?

http://www.antennahouse.com/xslfo/axf4fo.htm is a fair example I've seen,
if that's the class of document you mean: basically an association between
the test identity and the results.
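Something that simple would do. The markup below is invented, purely to
illustrate the association between test identity and outcome (the outcome
values come from your terminology further down):

  <results suite="odf-1.1-conformance" product="SomeEditor 1.0">
     <result test="tests.para3.1.4p1" outcome="pass"/>
     <result test="tests.para3.1.5p2" outcome="fail"
             note="view does not update"/>
     <result test="tests.para8.2p1" outcome="inapplicable"
             note="feature not implemented"/>
  </results>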



> (We didn't want to write Test Assertions, but we experimented with
> post-processing the specs to isolate statements. It wasn't as precise as we
> needed. A modern-day Conformance TC might want to deliver a set of test
> assertions. See the activities of the OASIS Test Assertion Guidelines TC [1]
> for the latest thinking on this subject.)

I've written a stylesheet that extracts a skeleton test requirement statement
for every para. In some cases it needs to go lower (sentence level), but
generally it suffices, due to the regularity of the spec.
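Roughly along these lines (a heavily simplified sketch of the idea, not the
actual stylesheet; it assumes the spec source is XML with section/para
elements carrying the section number in @id, and the o: namespace URI here
is invented):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:d="http://docbook.org/ns/docbook"
    xmlns:o="urn:x-example:odf-tests">

  <!-- Emit one skeleton test requirement entry per spec paragraph -->
  <xsl:template match="para">
    <d:simplesect>
      <d:title>
        <xsl:value-of select="concat('Section ', ancestor::section[1]/@id,
                              ' para ', count(preceding-sibling::para) + 1)"/>
      </d:title>
      <d:para role="spec"><xsl:value-of select="."/></d:para>
      <d:para role="test">-</d:para>
      <o:test class="none"/>
    </d:simplesect>
  </xsl:template>

  <!-- Suppress stray text from the built-in rules -->
  <xsl:template match="text()"/>
</xsl:stylesheet>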

I don't know what you mean by a test assertion. Would you explain, please?



>
> In the above enumeration, the test case inputs and the reference outputs are
> the constituents of test cases.

Which sum to a test specification? "A specification of what tests are
needed to verify conformance to a requirement (the ODF spec in this case)"

> Inputs can be shared across test cases if
> the metadata specifies re-use. In effect, the metadata is what identifies
> each test case, and so determines the number of cases in the collection.
> Since XSLT is a processor, the test regime is: provide the inputs specified
> for the case, invoke processing, capture the output (or error message), and
> then compare the actual result against the reference result (or error
> message specified in the metadata) using the comparator specified for that
> case in the metadata.

Knowing XSLT and partially knowing ODF, I don't think experience in the former
carries over easily to the latter, David. Quite different beasts,
quite different specs.
James Clark/Michael Kay vs the OASIS ODF committee? XSLT is fully deterministic;
ODF is very loose. XSLT has an easily testable output (XML); ODF has both the
XML and the view to cater for.



> Comparing equal indicates conformance; comparing
> unequal indicates non-conformance. Other classes of product may have simpler
> or more complicated testing regimes. All the test materials provided were
> platform-independent (XML or HTML).

(Or instructions for a tester?)
+1 for platform- (and vendor-) independent materials.


>
> Test case metadata can also provide filtering information, to avoid running
> (or at least evaluating) irrelevant cases. Test cases are annotated if they
> are not universal, which could be due to a mismatch on any of the dimensions
> of variability from [2] or also because they only apply to certain versions
> of the spec. To determine relevance, the implementation being tested needs
> an Implementation Conformance Statement, which will specify the choices made
> on the various dimensions of variability. See Part 9 of [3] for a discussion
> of these statements. In the ODF case, an implementation would probably
> specify something about which features and/or profiles it implements, plus
> the spec version and what class of product it is.

Is this an example of your 'test case metadata'? Ancillary information
about testing?


-1. You've gone beyond ODF here. Move to strike :-)
I'm really not sure how to address tests a vendor knows will fail
(e.g. because they haven't implemented para x.y.z). It needs addressing,
but I'm unsure how best to do it. A control file mapping spec clauses to
implemented features, used to control the test run (and annotate the
results), is perhaps one approach; something like the sketch below.
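The names here are invented, just to illustrate the shape such a control
file might take:

  <o:implementation xmlns:o="urn:x-example:odf-tests"
                    product="SomeEditor" version="1.0" spec="ODF 1.1">
     <o:clause ref="3.1.4" status="implemented"/>
     <o:clause ref="3.1.5" status="unimplemented"/>
     <o:clause ref="8.2"   status="partial" note="frames only"/>
  </o:implementation>

A harness could read that and skip (or mark as inapplicable) any test tied
to an unimplemented clause, rather than reporting it as a failure.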


>
> One more possible use of test case metadata is to segregate tests that are
> in a preliminary state or that have been questioned. An assessment run might
> exclude such cases, while developers would want to run the preliminary ones
> to plan for future assessments. Thus, the TC will need to define a set of
> status codes for test cases. These codes are usually pretty similar from one
> conformance testing group to the next, but vary depending on the policies
> about reviews and the like.

+1 Like it.


  <d:simplesect xml:id="tests.para3.1.4p1">
      <d:title>Section 3.1.4 para 1</d:title>
      <d:para role="link">
         <d:link xlink:href="&specx;3.1.4">spec</d:link>
      </d:para>
      <d:para role="spec">The &lt;dc:subject&gt; element specifies the s ....</d:para>
      <d:para role="test">-</d:para>
      <d:para role="remark">Specified test in dispute </d:para>
      <o:test class="none"/>
   </d:simplesect>


This format could be used for the "I've not implemented it" choices
that a vendor makes:

   <o:test class="unimplemented"/>


Added to the wiki.

4. Provide a means of registering and reporting omitted tests due to
   non-implementation or a test clause in dispute.

(Note 4. was a duplicate of 6, now re-used). Does that capture the idea?





>
> The other kind of deliverable is a guideline document, such as one that
> tells a test lab how to finish the setup so that the materials obtained from
> the TC are combined with local resources, resulting in an executable test
> environment. Drafts of the charter for this TC have mentioned "Conformance
> Test Requirements" or a "Conformance Assessment Methodology Specification"
> as a deliverable. That document would be the guideline addressed to the test
> lab.


We're using different language, David.
IMO a test spec is as defined above: a specification of what tests are needed.
This sounds as if you're drifting into 'how to apply the tests', which is
the next stage downstream.

Plain English version, please?
1. You're making a lot of assumptions (a test lab?).
2. IMO it's not our job to tell a tester HOW to do his or her job.
3. The test environment isn't our concern.



> It needs to address executing test cases and evaluating the result of
> each. (Terminology check: executing the test gives a "result" such as a
> rendered document. After comparing the actual result against the reference
> result, one then has an "outcome" such as Pass, Fail, Inapplicable, etc.)

The expected result needs comparing with the actual result, and the outcome
recording (be this manual or automated). +1



> The guideline must be very clear about which materials from the TC must be
> used in order for the lab to say it ran the OASIS ODF test suite. Likewise,
> it should be very clear about what software the test lab will need to write
> or obtain elsewhere. In particular, it should address the requirements for
> comparators.

No idea what this is all about. Clarification please.


>
> A comparator is typically a software module that compares two files, taking
> into account only those aspects that are relevant. In other words, when the
> equivalence must be something other than pixel- or bitwise-perfect, you use
> a comparator that has the correct tolerance. For example, nearly all XML
> comparisons require equivalence of the XML InfoSets (see [4]) rather than
> character-by-character equivalence. Comparators are closely intertwined with
> canonicalizers, which have been mentioned earlier on this list. For some
> kinds of output, the comparator must be a human being. Other situations may
> start as a human compare and become susceptible to automation in the future,
> so it is best to describe the comparator by its function rather than simply
> say "manual" or something equally simplistic. If this TC gains enough
> momentum, it may stimulate the creation or enhancement of open-source
> comparators by others, which would benefit conformance testing in general.

I get your drift, but I think you're out of order.
The job of the TC is to specify objective tests.
If an implementer is unable to make that measurement, then the action
should be to come back to the TC with a gripe about the test,
or, for lesser complaints, to identify and publish a means of comparison
suitable for the test. Hence I think this is downstream of the TC?
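(If an implementer wanted a starting point for the XML half of that
comparison, the obvious first step is a canonicalising identity transform
run over both the reference and the actual output before a plain diff.
A sketch only; real whitespace handling would need more care:)

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity copy of elements and attributes -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Drop comments; collapse insignificant whitespace in text nodes -->
  <xsl:template match="comment()"/>
  <xsl:template match="text()">
    <xsl:value-of select="normalize-space()"/>
  </xsl:template>
</xsl:stylesheet>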

Brings up a good point though. Deliverable:

15. A simple process for implementers to raise, and have resolved, issues
    over unclear or missing tests.

Is that OK with people? I.e. how an implementer gets issue X resolved.




>
> Other guideline documents could be addressed to those who would submit test
> cases (see [5] for an example) or those who want to interpret results
> reported by a test lab. This message is not intended to be an exhaustive
> list of all reasonable deliverables for this TC, just a refinement of the
> possibilities regarding test materials and guidelines for their use.


How long have you been with IBM, David? I guess you're too used to dumping work
on the IBM test labs :-)
I envisage these being implemented piecemeal by OSS devs. If IBM or Sun
did this work it wouldn't be received well, IMO.


Thanks though. Good food for thought.

I note you didn't respond to my requests for clarification on your last email.

Please respond to the questions in this email.

regards



-- 
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk

