

Subject: Re: [ebxml-iic-conform] Coverage: last attempt


Jacques,

  I agree that it is a useful measuring tool for adjusting the MS TC's expectations of how well we will cover the
specification.  The key will be in understanding the wording of the requirements and determining just how well
they cover each spec item.

  I will keep the current annotations and their qualifications of "full", "partial", and "none".  However, I want
to go over those qualifications again, based upon the criteria you described below, as my interpretation was
different at the time I created them.  I will post an updated annotated spec and identify any potential changes
for review.
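
For concreteness, here is a rough sketch of how those qualifications could be recorded against spec items; the
class, field, and item names below are placeholders I made up, not part of our Test Framework:

import java.util.List;

// Coverage qualifications as described in Jacques's note below (FULL / PARTIAL / NONE).
enum Coverage { FULL, PARTIAL, NONE }

// One annotation: a spec item, its coverage level, and the test
// requirement(s) (e.g. "R0.1.1") that address it.
record SpecAnnotation(String specItemId, Coverage coverage, List<String> requirementIds) {}

class AnnotationDemo {
    public static void main(String[] args) {
        // Hypothetical spec item, fully covered by requirement R0.1.1.
        SpecAnnotation a = new SpecAnnotation("MSG-2.3.7", Coverage.FULL, List.of("R0.1.1"));
        System.out.println(a.specItemId() + " -> " + a.coverage() + " via " + a.requirementIds());
    }
}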

  I will also annotate possible "new" items in the annotated spec that could be translated into test requirements.
There are a few items I saw that are candidates for inclusion.

In addition, I will annotate some spec items that are NOT currently annotated but are already covered by test
requirements.  For example, our test harness will be "validating" all incoming messages from a candidate MSH,
and many parts of the spec (rather redundantly) describe "valid" XML message format.  These items are not
currently annotated, giving the impression that we do not cover them in our test requirements, when in fact we
do, with test requirement R0.1.1 (syntax/schema validation; see the sketch after this paragraph).  Annotating
these areas would give the TC folks confidence that the specification is well covered.  To oversimplify: the more
annotations the TC team sees, the more confidence they will have that we "covered" their spec.
Comments?
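
To make the R0.1.1 check concrete, here is a minimal sketch of the kind of syntax/schema validation the harness
would run on each incoming message; the schema and message file names are placeholders, not actual Test Framework
artifacts:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class R011SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder schema file; the real one would be the ebXML message envelope schema.
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("message-envelope.xsd"));
        Validator validator = schema.newValidator();
        try {
            // Validate one captured message from the candidate MSH.
            validator.validate(new StreamSource(new File("incoming-message.xml")));
            System.out.println("R0.1.1 PASS: message is schema-valid");
        } catch (org.xml.sax.SAXException e) {
            System.out.println("R0.1.1 FAIL: " + e.getMessage());
        }
    }
}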

I would like to avoid "cluttering" the annotated spec with additional new "to be added" items if possible.  I feel
that it gives the impression that we are not really finished defining test requirements.  I think that I can add
anything "new" to the test requirements documents and the annotated spec by the end of the week.

Regarding abstract test cases, I believe that we could submit a list of abstract test cases, each corresponding
(via ID) to a matching test requirement.  Abstract test cases are "formalized" representations of the test cases
and test steps, written in a language that clearly defines the steps and actions to be taken, but not coded in the
language of the actual test.  Some abstract test cases are highly formalized (e.g. for network protocol testing),
while others are less formal.


An example of a less formal abstract test suite (a test suite for an interactive TV standard) is:

http://xw2k.sdct.itl.nist.gov/koo/trigger/

We could use the formalized notation you describe for a test case to populate an Abstract Test Suite (a rough
executable sketch follows the steps below):

Test Steps:

- Step 1: Test Driver sends an initial message M0 to the Configurator action of the Test Service, containing
  expected configuration data.
  - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_config).
- Step 2: Test Driver receives within the time limit a response message M1 from the Configurator action.
  Correlation: (M1.RefToMessageId = M0.MessageId). M1 reports that the expected configuration is in effect.
  - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_response (result=OK)).
- Step 3: Test Driver sends a sample message M2 to the Mute action of the Test Service.
  - Message Template: mtpl_1 (M-Header: mhdr_1, M-Payload: mpld_1).
  - Suggested Conversation ID: 11250.
  - M-Header updates: PartyId (both): no type attr, no URI content.
- Step 4: Test Driver receives error message M3 within the time limit. Correlation: (M3.RefToMessageId = M2.MessageId).
- Step 5: Verification. Test Case succeeds if: (Step 2 successful) AND (Step 4 successful) AND
  (M3 Error: severity="Error", code="Inconsistent").
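
Purely as an illustration of how such an abstract test case could be made machine-checkable, here is a sketch of
the Step 2/4 correlation rule and the Step 5 verification; the Msg record and its fields are hypothetical
placeholders, not Test Framework classes:

import java.util.Objects;

// Hypothetical stand-in for a sent/received message.
record Msg(String messageId, String refToMessageId, String errSeverity, String errCode) {}

public class AbstractTestCaseSketch {
    // Correlation rule used in Steps 2 and 4: the response must reference the request.
    static boolean correlates(Msg request, Msg response) {
        return Objects.equals(response.refToMessageId(), request.messageId());
    }

    // Step 5: succeed only if both correlations held and M3 carried the expected error.
    static boolean verify(Msg m0, Msg m1, Msg m2, Msg m3) {
        return correlates(m0, m1)
            && correlates(m2, m3)
            && "Error".equals(m3.errSeverity())
            && "Inconsistent".equals(m3.errCode());
    }

    public static void main(String[] args) {
        Msg m0 = new Msg("msg-0", null, null, null);                 // Step 1
        Msg m1 = new Msg("msg-1", "msg-0", null, null);              // Step 2
        Msg m2 = new Msg("msg-2", null, null, null);                 // Step 3
        Msg m3 = new Msg("msg-3", "msg-2", "Error", "Inconsistent"); // Step 4
        System.out.println("Test case succeeds: " + verify(m0, m1, m2, m3));
    }
}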



Mike


At 05:31 PM 7/15/2002 -0700, Jacques Durand wrote:
Michael, Matt:
 
About "specification coverage" attribute:
my last attempt at making sense of this attribute in the annotated spec:
(I still believe that may "adjust" expectations from MS TC reviewers, and clarify our "contract"
with users about the results of our test suite ...)
 
COVERAGE:

FULL: The test requirement(s) that address this specification item are a good indicator of conformance
to the specification item, i.e. if an MSH passes a test case that properly implements the test requirement(s),
there is a very strong indication that it will behave similarly in all situations identified by the spec item.

PARTIAL: The test requirement(s) that address this specification item are only a partial indicator of conformance
to the specification item, i.e. if an MSH passes a test case that properly implements the test requirement(s),
this indicates that it will behave similarly in only a subset of the situations identified by the spec item.
Possible reasons may be:
- the pre-condition(s) of the test requirement(s) intentionally identify only a subset of these situations,
one that can reasonably be tested;
- the occurrence of situations that match the pre-condition of a test requirement is under the control of the MSH
(e.g. implementation-dependent) and out of the control of the testbed, i.e. the test will be verified only if
the situation occurs during testing, which we cannot control.

NONE: This specification item cannot be tested even partially, at least with the Test Framework on which this
conformance test suite is to be implemented, and under the test conditions that are assumed.
 
 
Regards,

Jacques

