ebxml-iic-conform message


Subject: RE: [ebxml-iic-conform] Test Requirement Coverage document,first iteration



You wrote:
1) We will need to lay down some solid criteria for
defining coverage as "full", "partial", and "none". The major issue (in my
opinion) will have to do with how much our
test harness will be able to peer into and manipulate the internals of a
candidate MSH. I think that we need
a list of quantified criteria in order to make these coverage labels more
meaningful.
<Jacques> Yes, I think it is possible to figure out how much we can control the MSH
just by looking at the test requirement. If we can't "peer" into the MSH well enough,
that may be a NONE or a PARTIAL.

2) This scheme seems to work well to identify exactly what has (and has
not) been covered in the spec. I see
more potential parts of the spec to cover, based upon the annotation. Due
to limited time, it may be best to submit
as is and get feedback from the TC, rather than iterate more on the
requirements. Comments?

<Jacques> I guess we can address these later. Or should the uncovered
parts still be identified and annotated NONE? Or TBD?

   My idea for criteria for coverage is pretty simple:

1) "Full" = our test case leaves no doubt that this item in the spec has
been tested fully (this is where the vast majority of our test
requirements fall).
2) "Partial" = due to limitations (either in the test service, the test
party software, or the rigor of writing a test case for all potential
possibilities), this requirement could not be completely tested.
3) "None" = due to limitations in the test service or the test party
software, we could not test this requirement at all.

<Jacques> That seems to be pretty consistent with my flavor of these definitions,
which I sent in an earlier mail today. (I still prefer to use more
formal wording than "we could not test...", as
at this point we can only estimate how well the test requirement, and behind it
the test case that implements it, can validate an implementation against the spec item.)
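To make the three-label scheme concrete, here is a minimal sketch (my own illustration, not part of the spec or the TC's deliverables) of how a coverage label could be derived from two quantified criteria: whether the black-box test bed can observe the effect described by the requirement, and whether it can set up every relevant precondition. All names (`Coverage`, `TestRequirement`, `classify`) are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Coverage(Enum):
    FULL = "full"        # test case leaves no doubt the spec item is fully tested
    PARTIAL = "partial"  # harness limitations prevent testing every possibility
    NONE = "none"        # harness cannot test this requirement at all

@dataclass
class TestRequirement:
    spec_item: str            # annotated section of the spec
    observable: bool          # can the test bed observe the effect from outside the MSH?
    fully_controllable: bool  # can the test bed produce every relevant precondition?

def classify(req: TestRequirement) -> Coverage:
    """Assign a coverage label from the black-box harness's capabilities."""
    if not req.observable:
        return Coverage.NONE          # cannot "peer" into the MSH at all
    if not req.fully_controllable:
        return Coverage.PARTIAL       # observable, but not all cases reachable
    return Coverage.FULL
```

For example, a requirement whose effect is visible only inside the MSH would classify as NONE, while one whose preconditions and effects the test bed fully controls would classify as FULL.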

   The main problem that I see is defining just what the capabilities of
the test service and the test party software will be. For example, will we
be performing "interrupts" of
MSH service to test reliable messaging? Will we be able to check digital
signatures? Will we be able to check message persistence on the candidate
side? These questions,
and more, need to be answered before a reliable estimation of coverage can
be made, in my opinion.

<Jacques> Right, this is precisely the kind of consideration that will
determine the coverage, keeping in mind the limitations of our test bed
(due to its black-box approach). So we cannot check the validation of a signature
inside the MSH, but we can check its effect.
If the Test Req is expressed like:
[precond]: "a signed message is received"
[assertion]: "the signature is correctly validated"
then coverage is NONE, as there is no way the test bed can peek into the MSH
to see the DSig check.
But if we express the test as:
[precond]: "a signed message, with a valid signature, is received"
[assertion]: "the message is properly passed to the application"
this we can check and control, as we can produce a valid signature. Coverage = FULL.
In addition, we need to add the test:
[precond]: "a signed message, with an invalid signature, is received"
[assertion]: "the message is not passed to the application, and an error is generated"
This we can also check and control, as we can produce an invalid signature. Coverage = FULL.
I think this is precisely where the art of writing Test Reqs really makes a difference
relative to the original spec...
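The two effect-based Test Reqs above can be sketched as black-box checks. This is only an illustration of the precond/assertion pattern, not the IIC test framework itself; `candidate_msh` is a hypothetical stand-in for the system under test, observed solely through its outputs:

```python
def candidate_msh(message):
    """Hypothetical stand-in for the candidate MSH, observed as a black box:
    it validates the signature and either delivers the message to the
    application or withholds it and generates an error."""
    if message["signature_valid"]:
        return {"delivered_to_app": True, "error": None}
    return {"delivered_to_app": False, "error": "SecurityFailure"}

def test_valid_signature_delivered():
    # [precond]: a signed message, with a valid signature, is received
    result = candidate_msh({"signature_valid": True})
    # [assertion]: the message is properly passed to the application
    assert result["delivered_to_app"] and result["error"] is None

def test_invalid_signature_rejected():
    # [precond]: a signed message, with an invalid signature, is received
    result = candidate_msh({"signature_valid": False})
    # [assertion]: message is not passed to the app, and an error is generated
    assert not result["delivered_to_app"] and result["error"] == "SecurityFailure"
```

Both tests drive the MSH only through messages it receives and judge it only by externally visible effects, which is why both Test Reqs can be rated FULL under a black-box harness, while a "signature is correctly validated" assertion cannot.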


