OASIS Mailing List Archives
tag-discuss message



Subject: Reasons to wait on convening a TC



I agree with some of the more outspoken participants on this list that we are converging on what could/should be done, and what does not have to be done, but I think we have more converging yet to do, and I will comment on some issues below. Just as important, we need better information about whether specs such as SBVR [1] are good enough. Perhaps a special profile of SBVR for test assertions (TAs) will be all that's needed.

There are several people on this list who worked on the OASIS Conformance TC [2] and/or the W3C QA documents [3], and I would like to see remarks from more people!

Points where I think we are well converged:
1. The TC, if one is needed and convened, would specify a way to state TAs, given the definition [4] cited by Jacques from prior work.
2. The main target user of a specification for TAs would be a document editor for specs of the type commonly issued by OASIS, W3C, OMG, etc.
3. If a spec is published on this topic, it should provide a skeletal structure for a generic TA, as a fill-in-the-blanks guide. Another way of stating this is that the spec would provide a model of a well-formed TA.
4. In some way, TAs are normative "statements" that establish testable constraints of a spec. Current specs often contain many solid, formal, testable sentences in their prose, so TAs need to present significant added value.
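To make point 3 concrete, the fill-in-the-blanks skeleton might look something like the fragment below. This is only a sketch of the idea, not proposed markup; every element name and the choice of parts (source, prerequisite, predicate, prescription level) is hypothetical.

```xml
<testAssertion id="TA-___">
  <source>___ (the spec section whose prose is being formalized)</source>
  <target>___ (the implementation or artifact under test)</target>
  <prerequisite>___ (contingencies, if any, under which the TA applies)</prerequisite>
  <predicate>___ (the single testable constraint)</predicate>
  <prescription level="___"/> <!-- e.g., MUST / SHOULD / MAY -->
</testAssertion>
```

A skeleton like this doubles as the "model of a well-formed TA" mentioned above: an editor fills in the blanks, and a checker can verify that no part is missing.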

Points where we might be converging on the conclusion that these are NOT IN SCOPE:
1. Defining test case metadata. Several developments are occurring in this area.
2. Test execution is out of scope, except insofar as each TA must be testable.
3. Notwithstanding (2), there is a useful parallel between the preconditions or setup for a test case and the contingencies of a TA. If the two can be tied together, it supports automated generation of test metadata from the TAs.
4. Profiles/levels/modules/discretionaries have already been done (see Spec Guidelines and ViS at [3]), so TA work should use this prior work as a foundation rather than reinvent the Dimensions of Variability.

Points where convergence seems likely, if not already achieved:
1. There needs to be synchronization between the TAs and the prose of the spec, such as by one deriving from the other.
2. Testing can be automated with a test harness driven by platform-independent test case metadata. TAs should be cited by the metadata. The TA spec might make a few other mild assumptions about the test harness.
3. There is no need to try to limit the number of TAs for any given spec. Assume that automation will be involved if the number is large.

The question of representing TAs in XML is still open, apparently. I was astounded when Dave Pawson wrote on 11/27/2006:
>I really don't see that mandating XML as the way in which tests are expressed
>is useful? For some it will be that the XML will be an unnecessary wrapper
>for simple text.
Perhaps Dave and others are unaware of how well XML has served for test case metadata in XSLT/XPath/XQuery testing. (The XQuery test suite [5] has a catalog of thousands of test cases.) There are two kinds of benefits: filtering the collection down to relevant subsets, and re-purposing the data via XSLT transformations. Keep in mind that XSLT can produce a text file as output, so the test case metadata can be transformed into a platform-specific script for sequential execution of a set of cases.

For TAs, I think that well-designed markup might support "joins" of two or more TAs that eventually yield the specs of what individual test cases do. In some cases, those specs could be further transformed into actual inputs for testing.
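Both benefits can be illustrated with a toy catalog. The sketch below uses Python rather than XSLT for brevity; the element names ("catalog", "test-case", the "assertion" attribute) and the "run-xquery" command are invented for illustration, not taken from any real test suite.

```python
# Sketch: filter platform-independent test case metadata (XML) down to
# the cases citing one TA, then re-purpose that subset as a
# platform-specific script -- the same two benefits described above.
import xml.etree.ElementTree as ET

CATALOG = """\
<catalog>
  <test-case id="tc-001" assertion="TA-7"><input>case1.xq</input></test-case>
  <test-case id="tc-002" assertion="TA-9"><input>case2.xq</input></test-case>
  <test-case id="tc-003" assertion="TA-7"><input>case3.xq</input></test-case>
</catalog>
"""

def cases_for_assertion(catalog_xml: str, ta_id: str):
    """Benefit 1: filter the collection down to a relevant subset."""
    root = ET.fromstring(catalog_xml)
    return [tc for tc in root.findall("test-case")
            if tc.get("assertion") == ta_id]

def to_script(cases):
    """Benefit 2: re-purpose the metadata as a sequential text script."""
    return "\n".join(
        f"run-xquery {tc.findtext('input')}  # {tc.get('id')}"
        for tc in cases)

subset = cases_for_assertion(CATALOG, "TA-7")
print(to_script(subset))
```

An XSLT stylesheet with `<xsl:output method="text"/>` would do the same job declaratively; the Python form is just easier to show inline.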

Even if we are unsure of the state of the art in automated reasoning over a set of premises, we can assume that someday it will be possible to process a set of TAs by software and obtain further value. The further value might be about test coverage or test cases needed, or it may be about consistency in specs. If we design TAs to be amenable to machine processing (e.g., by expressing them in XML), then there is hope that early efforts at TAs will become more valuable in the future as new tools emerge.

In addition to the workload imposed on the TC that would develop this specification, we must keep in mind that composing conformant TAs would increase the workload on the various TCs and WGs that adopt it. That's why my earliest remark in this forum was that TAs have to contribute enough value to be "worth it" for the WG that uses them. I think such value can only be realized if the TAs are usable in automated testing.
.................David Marston

[1] http://www.omg.org/docs/dtc/06-03-02.pdf
[2] http://www.oasis-open.org/committees/documents.php?wg_abbrev=ioc
[3] http://www.w3.org/QA/Library/
[4] Test Assertion: a Test Assertion is a statement of behavior for an implementation: action or condition that can be measured or tested. It is derived from the specification’s requirements and bridges the gap between the narrative of the specification and the test cases. Each test assertion is an independent, complete, testable statement on how to verify that an implementation satisfies a conformance requirement.
[5] http://www.w3.org/XML/Query/test-suite/
