
ebxml-iic-conform message


Subject: RE: [ebxml-iic-conform] Coverage: last attempt


   I concur with all of your comments below (no need to re-read this again). Only one comment: I see a "1 Test Case to 1 Test Requirement" relationship. Right now, Test Cases are "locked" to a matching Test Requirement by ID.
   In what situation might one have a single Test Case testing two Test Requirements with two different contexts?
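To make the question concrete, here is a minimal sketch of the two mappings in Python. All names (TestRequirement, TestCase, the field names) are hypothetical, for illustration only, and are not taken from the IIC test framework documents:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model; names are illustrative, not from the framework.
@dataclass
class TestRequirement:
    req_id: str          # e.g. "R0.1.1"
    description: str

@dataclass
class TestCase:
    case_id: str
    # A list rather than a single ID leaves room for one Test Case
    # to be "locked" to more than one Test Requirement, if the TC
    # ever wants the 1-to-many relationship.
    requirement_ids: List[str] = field(default_factory=list)

# Today's 1-to-1 relationship is just the single-element case:
tc = TestCase(case_id="TC_0101", requirement_ids=["R0.1.1"])
print(tc.requirement_ids)  # ['R0.1.1']
```

The design choice is simply whether the ID field is scalar or a list; keeping it a list costs nothing for the 1-to-1 case.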


At 11:23 AM 7/16/2002 -0700, Jacques Durand wrote:
sounds good. See inline.
-----Original Message-----
From: Michael Kass [mailto:michael.kass@nist.gov]
Sent: Tuesday, July 16, 2002 10:13 AM
To: Jacques Durand; 'Matthew MacKenzie'
Cc: 'ebxml-iic-conform@lists.oasis-open.org'
Subject: Re: [ebxml-iic-conform] Coverage: last attempt


  I agree that it is a useful measuring tool for adjusting the MS TC's expectations of how well we will cover the specification. The key will be in understanding the wording of the requirements and determining just how well they cover each spec item.

  I will keep the current annotations and their qualifications of "full", "partial" and "none". I want to go over those qualifications again, however, based upon the criteria you described below, as my interpretation was different at the time I created them. I will post an updated annotated spec and identify any potential changes for review.
[Jacques Durand] The coverage annotations you already have will probably not change much, I guess. Cases where the specified MSH behavior is "internal" and cannot be observed, as you mentioned (e.g. persistence of messages), will get either NONE coverage, or possibly PARTIAL in case we can use some black-box test requirements that can still reveal the right behavior in specific situations (I believe we had such a test requirement, where the MSH is shut down and restarted, revealing that some persistence was used).

  I will also annotate possible "new" items in the annotated spec that could be translated into a test requirement.  There are a few
items that I saw that are candidates for inclusion.
[Jacques Durand] In case you are concerned it would take more time to cover these, we can annotate them with something like "TBC" (to be covered), so that it's clear we know they need to be addressed (later).

In addition, I will annotate some spec items that are NOT annotated but are already covered in test requirements. For example, our test harness will be "validating" all incoming messages from a candidate MSH. Many parts of the spec (rather redundantly) describe "valid" XML message format.
[Jacques Durand] In case this mention of conformance is only incidental (not central to the spec statement where you find it) and has been more clearly mentioned (and covered) before, I would not bother...
These items are not currently annotated (giving the impression that we do not cover them in our test requirements, when in fact we do, with test requirement R0.1.1, syntax/schema validation). Annotating these areas would give the TC folks confidence that the specification is well covered. To oversimplify: the more annotations the TC team sees, the more confidence they will have that we "covered" their spec.

I would like to avoid "cluttering" the annotated spec with additional new "to be added" items if possible. I feel that it gives the impression that we are not really finished defining test requirements. I think that I can add anything "new" to the test requirements documents and the annotated spec by the end of the week.
[Jacques Durand] OK. Note that in case the lacking test requirements are confined to some optional spec modules, for which a lot of effort would still be needed, we can postpone coverage of those spec modules.

Regarding abstract test cases, I believe that we could submit a list of abstract Test Cases, each corresponding (via ID) to a matching Test Requirement. Abstract test cases are "formalized" representations of the test cases and test steps, in a language that clearly defines the steps and actions to be taken, but not coded in the language of the actual test. Some abstract test cases are highly formalized (e.g. for network protocol testing), while others are less formal.
[Jacques Durand] Sounds good. (Incidentally, I guess it's OK for a test case to cover more than one test req item?)

An example of a less formal abstract test suite: (a test suite for an interactive TV standard)


We could use the formalized notation you describe for a test case to populate an Abstract Test Suite:

Test Steps:
   Step 1: Test Driver sends an initial message M0 to the Configurator action of the Test Service, containing expected configuration data.
      - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_config).
   Step 2: Test Driver receives, within the time limit, a response message M1 from the Configurator action. Correlation: (M1.RefToMessageId = M0.MessageId). M1 reports that the expected configuration is in effect.
      - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_response (result=OK)).
   Step 3: Test Driver sends a sample message M2 to the Mute action of the Test Service.
      - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_1).
      - Suggested Conversation ID: 11250.
      - M-Header updates: PartyId (both): no type attribute, no URI content.
   Step 4: Test Driver receives error message M3 within the time limit. Correlation: (M3.RefToMessageId = M2.MessageId).
   Step 5: Verification. Test Case succeeds if: (Step 2 successful) AND (Step 4 successful) AND (M3 Error: severity="Error", code="Inconsistent").
[Jacques Durand] Agreed, that level of detail is sufficient for now. In a future version, we can add the XML scripting for these once we have written enough of these cases... (We may introduce such scripts for a few test cases, as examples.)
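The verification rule in Step 5 above is a simple conjunction of conditions. As a sketch (the field names follow the step descriptions; the dict layout and function name are assumptions, not the actual test framework schema or its future XML scripting):

```python
# Sketch of the Step 5 success predicate from the abstract test case above.
# Field names (severity, code) follow the step text; the dict layout
# is an assumption made for illustration.

def step5_succeeds(step2_ok: bool, step4_ok: bool, m3_error: dict) -> bool:
    """Test Case succeeds iff Step 2 and Step 4 succeeded and M3 carries
    an Error with severity="Error" and code="Inconsistent"."""
    return (step2_ok
            and step4_ok
            and m3_error.get("severity") == "Error"
            and m3_error.get("code") == "Inconsistent")

m3 = {"severity": "Error", "code": "Inconsistent"}
print(step5_succeeds(True, True, m3))   # True
print(step5_succeeds(True, False, m3))  # False
```

Writing the predicate as a pure function of the observed messages keeps the abstract test case independent of any particular scripting language, which is the point of the "abstract" representation.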



At 05:31 PM 7/15/2002 -0700, Jacques Durand wrote:
Michael, Matt:
About the "specification coverage" attribute:
my last attempt at making sense of this attribute in the annotated spec:
(I still believe it may "adjust" expectations from MS TC reviewers, and clarify our "contract"
with users about the results of our test suite...)
FULL: The test requirement(s) that address this specification item are a good indicator of conformance to the specification item, i.e. if an MSH passes a test case that properly implements the test requirement(s), there is a very strong indication that it will behave similarly in all situations identified by the spec item.
PARTIAL: The test requirement(s) that address this specification item are only a partial indicator of conformance to the specification item, i.e. if an MSH passes a test case that properly implements the test requirement(s), this indicates that it will behave similarly in only a subset of all situations identified by the spec item.
Possible reasons may be:
- the pre-condition(s) of the test requirement(s) deliberately identify only a subset of these situations, one that can be reasonably tested.
- the occurrence of situations that match the pre-condition of a Test Requirement is under the control of the MSH (e.g. implementation-dependent) and out of the control of the testbed, i.e. the test will be verified only if the situation occurs during testing, which we cannot control.

NONE: this specification item cannot be tested even partially, at least with the Test Framework on which this
Conformance test suite is to be implemented, and under the test conditions that are assumed.
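As a rough illustration of how the FULL/PARTIAL/NONE annotations could be tallied across an annotated spec (the spec item IDs below are invented for the example; only the three coverage levels come from the definitions above):

```python
from collections import Counter

# Hypothetical annotation table: spec item -> coverage level.
# Item IDs are made up; the levels follow the FULL / PARTIAL / NONE
# definitions given above.
annotations = {
    "item-1": "FULL",
    "item-2": "PARTIAL",
    "item-3": "NONE",
    "item-4": "FULL",
}

# Count how many spec items fall under each coverage level.
summary = Counter(annotations.values())
print(summary["FULL"], summary["PARTIAL"], summary["NONE"])  # 2 1 1
```

A tally like this is what would let the TC "adjust expectations": the ratio of FULL to PARTIAL and NONE items is a concrete summary of how well the test requirements cover the spec.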
