Subject: RE: [tag-discuss] Re: Test case metadata...
some $0.02 considerations related to tagging "assertions" vs. writing "test assertions":
(a plea in favor of more than just tagging...)
From: Patrick.Curran@Sun.COM [mailto:Patrick.Curran@Sun.COM]
Sent: Wednesday, December 06, 2006 2:58 PM
To: Durand, Jacques R.
Subject: Re: [tag-discuss] Re: Test case metadata...
As I suggested in a recent post, I don't believe that it is always necessary to "reword" spec requirements in order to turn them into test assertions. If the spec is written in appropriately precise and normative language, it ought to be possible to treat text within the spec itself as assertions.
In some cases, the spec may appear crystal-clear on what needs to be tested and may give enough hints on how to verify the requirement. But shouldn't we still recommend extracting a test assertion, for the sake of consistency with the rest of the TAs associated with this spec?
I am aware that this is more a methodology argument than a technical one, but my concern is this: what is "crystal clear" to spec writers may not be so to test suite writers, and spec writers may be prone to be content with "assertion" tags where a slightly more structured "test assertion" would really be more helpful.
For example, WS-I "profile definitions" already seemed to be stated quite clearly (each requirement is a separate, numbered item, etc.), but when we wrote TAs for these in a methodical way (with a test approach in mind, e.g. stating more clearly which artifacts were under test), we were surprised by how non-obvious the exercise was, by the questions and challenges it raised against the original statements (helping increase quality), and ultimately by the difference it made for the people who went on to write the test cases.
A data point: when the W3C QA Working Group was working on the Specification Guidelines, we wanted to make sure that this document (which was itself a specification) conformed to itself (that is, that it conformed to the recommendations contained within it).
" eat your own dog food" as some say... I guess we' ll have to, for our credibility...
Since one of our recommendations was to use test assertions, we made the effort to create an assertion list. It turned out to be a largely cosmetic exercise, involving making slight but non-critical changes to text that was already present within the spec.
This is why we defined a test assertion as "a measurable or testable statement of behavior, action, or condition [that is] contained within or derived from the specification's requirements". (In retrospect, I think "the specification's requirements" is redundant, and we could simply have said "the specification".)
 QA Framework: Specification Guidelines: http://www.w3.org/TR/qaframe-spec/
Manager, Java SE Conformance Test Development, Sun Microsystems
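For concreteness, a descriptive test assertion derived from a spec requirement could be captured as a small record like the following (a minimal sketch in Python; the field names and the example spec sentence are illustrative, not taken from the QA Framework or the IIC spec):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestAssertion:
    """A descriptive test assertion derived from a spec requirement."""
    id: str         # stable identifier, e.g. "TA-0042"
    source: str     # spec section the assertion is derived from
    target: str     # artifact under test (implementation role, message, document)
    predicate: str  # the measurable/testable condition, reworded if necessary
    level: str      # normative keyword: MUST / SHOULD / MAY

# Example: turning a hypothetical spec sentence into an assertion.
ta = TestAssertion(
    id="TA-0042",
    source="Section 3.1",
    target="SOAP envelope produced by the sender",
    predicate="The envelope contains exactly one Body element.",
    level="MUST",
)
print(ta.id, ta.level)
```

The point of such a record is only that the predicate is measurable and traceable back to the spec; whether the wording is copied verbatim or lightly reworded is secondary.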
Durand, Jacques R. wrote:
Serm: Ah, one more lurker coming out into the open :-) Thanks for the detailed feedback - that reinforces the need to make sure we all agree on the terminology. So far I see agreement on this list on the general intuitive meaning of a TA: a TA is a rewording of a spec requirement in a way that is "testable" within an assumed mode of deployment for an implementation. Divergences seem to be in the details:

- Test Assertion (TA) is in fact Test Requirement (TR) plus Test Profile

I must confess that I never really distinguished between TA and TR... From a modeling viewpoint, isn't this "test profile" matching something like a conformance profile, i.e. grouping a selected set of TRs? In that case there is no real difference TA/TR. Or do you see TR as closer to the spec narrative, and TA possibly narrower, designed for a specific conformance level or profile (e.g. it may cover only part of a spec requirement, if the "profile" does not need more)?

On the scoping of a TAG TC activity:

- I also favor a "descriptive" guideline (and data model, I guess) as a decent first objective; it has value in itself and is less controversial for broad adoption.

- Even if all agree on such a 1st step/deliverable, we still need to decide whether the TC charter should plan for [a possible] next phase that involves an adjunct mark-up / representation of TAs. (Note: the charter does not need to be precise about which purpose this mark-up would serve... it could just be wrapping the descriptive form for publishing.)

Cheers,
-Jacques

-----Original Message-----
From: Serm Kulvatunyou [mailto:email@example.com]
Sent: Saturday, December 02, 2006 4:00 PM
To: firstname.lastname@example.org
Subject: RE: [tag-discuss] Re: Test case metadata...

Hi, I have been following the discussion for a while. Yes, there are a lot of terminology mismatches around here. A number of us on this list are familiar with the terminology in the OASIS ebXML IIC Test Framework spec and the OASIS conformance definition.
I saw that the W3C published note has different terminology; a particular source of confusion here is "test case metadata" (on the W3C note web page, I think, it is called just "test metadata"; I will use the term "test case metadata" in this message since it has been used in this discussion group). Having read some of the software testing literature, the terminology in the IIC specification seems more common. My understanding so far, having read all the messages in the TAG discussion, is as follows.

- Test Assertion (TA) is in fact Test Requirement (TR) plus Test Profile (test profile from the functional-module perspective, not from the usage-profile perspective) in IIC, and maybe a few more additions (e.g., relationships between TRs). Note that the TA is an element of the Test Requirement. In the IIC specs, only descriptive info is contained in these entities, which is, perhaps, why a number of us cannot quite grasp the idea of auto-generating something from here, including test cases or implementation code.

- In the W3C note on test case metadata, there seem to be overlaps with the Test Case as defined by the IIC. This is because some elements in the test case metadata are defined to be machine-processable (e.g., the Inputs and Expected Results). On this basis, perhaps, some of the list participants are thinking of automations based on the TA, if it includes such elements. Note that in IIC, a test case contains everything from the relationship to the TR, to machine-interpretable test execution steps, inputs, verification conditions/expected results, and more.

My opinion is as follows:

- The big question is how far a TA or TR spec goes: whether it includes just descriptive info or also machine-interpretable instructions.

- I think if the (business) objective of the TC is to create something that a lot of standards TCs can chew on, the TA or TR spec should include only descriptive info, i.e., it should not be much different from what is already included in the IIC spec for a TR.
And 'yes', as one of Dave Marson's emails suggested, let the test expert and/or implementer, with the help of the target standard's expert, translate that into an executable TC (as defined by the IIC). I personally think that such a TA or TR spec is useful and of sufficient value because, although it is unlikely to provide value for automation in subsequent steps, it divides the target standard into atomic features/functions that make test coverage analysis (the matrix, as termed by Dave Pawson) clearer. In addition, it would definitely make test case writing easier as well.

- I think for standards that are very well defined via some sort of formal model, auto-generating TAs/TRs as well as TCs may be at least partially possible. But I think that should not be a subject of concern for this TC (it should be an interest of research labs). The product of this TC would provide a standard medium for the auto-generation algorithm to publish results. I.e., this TC's focus can be limited to defining a TA/TR data structure such that it facilitates test coverage analysis, test case authoring, and understanding of the functions within the target standards. I have witnessed the value of such descriptive TAs/TRs, and how people appreciate their availability.

Having said that, I would recommend that we use the terms defined in the IIC spec for discussion (in which case I think we should have used the term TR in place of TA), and that the length of work for this TC shouldn't be more than 6 months.

Thanks for reading this long message... I think I deserve to send one, since I have read tons of messages before sending one :).

Serm Kulvatunyou
National Institute of Standards and Technology

-----Original Message-----
From: Dave Pawson [mailto:email@example.com]
Sent: Friday, December 01, 2006 8:59 AM
To: firstname.lastname@example.org
Subject: Re: [tag-discuss] Re: Test case metadata...

Terminology issues are rife round here. Please could the TC take some of the jargon and perhaps develop a glossary? A simple statement:
My definitions are contrary to David M's, and I find it hard to comprehend his terminology.

regards
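Serm's point above, that descriptive TAs/TRs divide a standard into atomic features and so make coverage analysis (Dave Pawson's "matrix") clearer, can be sketched in a few lines (illustrative Python; all identifiers are hypothetical and not taken from the IIC spec):

```python
# Minimal sketch of the coverage "matrix" idea: map test assertions (TAs)
# to the test cases that claim to exercise them, and report uncovered TAs.
assertions = ["TA-001", "TA-002", "TA-003"]

# Each test case lists the TAs it verifies.
test_cases = {
    "TC-A": ["TA-001", "TA-002"],
    "TC-B": ["TA-002"],
}

# Flatten the per-test-case lists into the set of covered assertions.
covered = {ta for tas in test_cases.values() for ta in tas}

# Any assertion not claimed by some test case is a coverage gap.
uncovered = [ta for ta in assertions if ta not in covered]
print("uncovered:", uncovered)
```

Even with purely descriptive TAs and no automation downstream, this kind of cross-reference is enough to spot requirements that no test case touches.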