

Subject: Re: [oic] Groups - svn-metadata-barth.zip uploaded


bart.hanssens@fedict.be wrote on 03/25/2009 08:58:14 AM:

>
> As a convenience, I've uploaded a Zip containing a proposal for a first
> series of very simple conformance test scenarios (both for ODF 1.1 and
> ODF 1.2).
>
> The scenarios themselves start out as a simple text file. Running a
> script converts them into an XML file (for future processing) and an
> HTML file for easy viewing.
>
> Although this would be a symbolic act, it would be nice if we could
> validate the test cases and perhaps move them forward to a committee
> approved work. Doing so, we could also outline the formal steps and
> integrate them into a "conformance assessment methodology spec".
>
> (Especially with ODF 1.2 coming up, it would be nice to at least have a
> symbolic test set available that can grow as we proceed...)
>
> Feedback is most appreciated.
>

Generally I like this technical approach.  We encode the test metadata in 
XML and use that to generate detailed HTML instructions.

Did you have a tool to create the ODF documents?  Or were they created 
manually?

But I think we should be explicit about the significance of each test 
case.  In particular, what is the prescription level?  Is something a 
requirement ('shall'), a recommendation ('should') or something else? 
Also, what is the target of the test case, a document or an application? 
(Noting that ODF defines conditions on documents as well as on 
applications.)  And what is the source or authority of the provision?

I don't think our test cases are limited to explicitly stated requirements 
in the ODF standard.  We can create test cases based on recommendations in 
the standard, provisions of external standards that we reference, 
recommendations from Committee Specifications such as the ODF 
Accessibility Guidelines, and so on.  We may even create our own 
guidelines and create test cases based on those.  We're a TC; we have the 
ability to do that.  We can be our own authority for a test case. 
However, we need to be
very careful to track the authority for every test case that we create. 
Otherwise we will have difficulty noting which subset of our test cases 
(ones where the authority is the ODF standard) must pass in a conformance 
test versus the broader set of test cases that may pertain to an 
interoperability test.

I think this is an area which could benefit from the use of formal test 
assertions.  The OASIS Test Assertion Guidelines TC has come out with a 
new CD that is worth reading over (only 37 pages long):

http://www.oasis-open.org/committees/download.php/31076/TestAssertionsGuidelines-cd3-1-0-0.pdf

The general outline of a Test Assertion is:

TA_id: the ID of the test assertion
Target: the target of the assertion, in our case usually a document or an 
application
Normative source: the authority for the test assertion
Prerequisite: what determines whether this test is relevant in a given 
situation
Predicate: the expression to evaluate
Prescription level: mandatory, preferred or permitted

For example, I could take your dc:creator read test and reverse-engineer 
a test assertion like:

TA_id: metadata-dc-creator
Target: an ODF Consumer
Normative source: ODF 1.1, Section 3.1.7
Prerequisite: The source ODF document contains a meta.xml XML stream 
containing a dc:creator element
Predicate: the ODF Consumer is capable of indicating to the user the 
document author and the author indicated corresponds to the value stored 
in dc:creator.
Prescription level: preferred

(This is for the sake of illustration.  I'm not taking a position on 
whether or not ODF 1.1 actually states this as a provision.)
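
Just to make the encoding side concrete, here is a rough sketch of how an 
assertion like that could be captured in a small standalone XML file, 
using a few lines of Python.  The element names (testAssertion, 
normativeSource and so on) are purely my own invention for illustration, 
not anything taken from the TAG CD or from your existing format:

import xml.etree.ElementTree as ET

# Hypothetical encoding -- element names invented for illustration only.
ta = ET.Element("testAssertion", id="metadata-dc-creator")
ET.SubElement(ta, "target").text = "ODF Consumer"
ET.SubElement(ta, "normativeSource").text = "ODF 1.1, Section 3.1.7"
ET.SubElement(ta, "prerequisite").text = (
    "The source ODF document contains a meta.xml stream "
    "with a dc:creator element")
ET.SubElement(ta, "predicate").text = (
    "The consumer can indicate the document author to the user, "
    "and the value shown matches dc:creator")
ET.SubElement(ta, "prescriptionLevel").text = "preferred"

# Write the assertion out as its own small XML file.
ET.ElementTree(ta).write("metadata-dc-creator.xml",
                         encoding="UTF-8", xml_declaration=True)

Whatever the exact format, the point is that target, normative source, 
prerequisite and prescription level each become explicit fields rather 
than something implied by the prose of the test instructions.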

It may be possible to add this additional information to the XML you 
already have.  I'm not sure.  I think we're mainly missing the explicit 
statement of prescription level and prerequisite.  On the other hand, 
test assertions are intended to provide a level of indirection between the 
text of the standard and the actual test case, and I can see how this can 
be a useful abstraction.  So I would not be opposed to tracking test 
assertions on their own, either as text or in a simple XML encoding.  We 
could then discuss and approve the test assertions as a TC.  The 
translation of each test assertion into a specific test case should be 
straightforward and mechanical, and may not elicit as much debate.
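
To illustrate how mechanical that last step could be, a small script along 
these lines could turn an assertion file like the one sketched above into 
a bare-bones HTML test case page.  The file names and markup here are 
placeholders, not a proposal for the actual output format:

import xml.etree.ElementTree as ET
from html import escape

# Read a hypothetical test-assertion file and render it as simple HTML.
ta = ET.parse("metadata-dc-creator.xml").getroot()

rows = "".join(
    "<tr><th>%s</th><td>%s</td></tr>"
    % (escape(child.tag), escape(child.text or ""))
    for child in ta)

page = ("<html><body><h1>Test case: %s</h1>"
        "<table border='1'>%s</table></body></html>"
        % (escape(ta.get("id", "unknown")), rows))

with open("metadata-dc-creator.html", "w") as f:
    f.write(page)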

-Rob

