Subject: [ebxml-iic] Test Case material, comments



Mike, Matt and all:

Attached are additional comments on the Test Case material that Mike used at the last f-2-f
(merged with the previous comments made during the f-2-f, with issue owners).

The test case material design also affects the Interoperability test cases,
so I am sending this to the whole IIC list.

Let us discuss this on Tuesday, assuming everyone can make it back from Labor Day...

Regards,

Jacques

 

To complete the Test Case material design:
-------------------------------------------

1. [Matt]: specify the CPA subset to be used. Which format should we pick?
So far we have two candidates: tpaSample.xml, and minicpa.xml from Hatem.
(Quick comments on tpaSample.xml:
- the SyncReplyMode options are missing
- what is the distinction between Ack "expected" and "requested"?)
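
For reference: in the CPP/CPA 2.0 draft schema, these messaging options appear
as attributes of tp:MessagingCharacteristics on a delivery channel, roughly as
in the fragment below. (Attribute names are as I recall them from the draft,
so please double-check against whichever schema subset we settle on.)

    <tp:DeliveryChannel tp:channelId="asyncChannelA1"
                        tp:transportId="transportA1"
                        tp:docExchangeId="docExchangeA1">
      <!-- syncReplyMode and the Ack attributes are the options in question -->
      <tp:MessagingCharacteristics tp:syncReplyMode="none"
                                   tp:ackRequested="always"
                                   tp:ackSignatureRequested="never"
                                   tp:duplicateElimination="always"/>
    </tp:DeliveryChannel>

Whichever subset we pick should at least preserve these options.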

2. [Jeff]: we need to finalize the message template data, in particular:
- the way we parameterize these templates (XPath? see the sketch below)
- the way we build complete MIME envelopes and their content (either by
using a template approach again, restrictive but simple, or by some other
document-building mechanism).
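
As a strawman, here is what an XPath-parameterized header template could look
like (the element names follow the ebMS 2.0 MessageHeader; the $-placeholder
convention and the Service/Action values are just illustrative assumptions):

    <eb:MessageHeader soap:mustUnderstand="1" eb:version="2.0">
      <eb:From><eb:PartyId>$FromPartyId</eb:PartyId></eb:From>
      <eb:To><eb:PartyId>$ToPartyId</eb:PartyId></eb:To>
      <eb:CPAId>$CPAId</eb:CPAId>
      <eb:ConversationId>$ConversationId</eb:ConversationId>
      <eb:Service>urn:iic:test:service</eb:Service>
      <eb:Action>Dummy</eb:Action>
      <eb:MessageData>
        <eb:MessageId>$MessageId</eb:MessageId>
        <eb:Timestamp>$Timestamp</eb:Timestamp>
      </eb:MessageData>
    </eb:MessageHeader>

A SetMessage step would then fill a placeholder by addressing it with an XPath
expression, e.g. eb:MessageData/eb:MessageId = "urn:msg:case3-1". This still
leaves open how the complete MIME envelope around the SOAP part is produced.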

3. [Mike, Monica?]: mapping of Test Cases to Test Assertions.
Can we really assume that there is always one test case for each test assertion?
I am sure that is the case for 98% of them, but it would be prudent not to
preclude the possibility of more than one test case per assertion. A test case
is always more concrete than an assertion; could there be situations where it
makes sense to have two or more test cases for the same assertion that we
would not want to split?
My question is really: do we have to decide on this now, or can we adopt a
Test Case ID scheme that allows for this if we need it later? It could be the
same as the current assertion ID (e.g. urn:semreq:id:3), and where we have a
1-to-n mapping we could append letters (e.g. urn:semreq:id:3a, urn:semreq:id:3b, ...)
or use dot numbering (urn:semreq:id:3.1, urn:semreq:id:3.2, ...).
Would that be an issue?

4. Test Case table formatting [Mike, ...]:
- "Test Case ID" field: see the remarks on numbering above. (By the way, why "semreq"?)
- "Action Element" field: we could use more intuitive "step names",
e.g. "SendMessage" instead of "SetMessage" for the sending of a message.
- Also, I strongly suggest that we make the "verification" of the test a
separate and final step (it could be called "Verification").
- "Party" field: probably not needed, as it is always the TestDriver, per our
definition of what a "step" is: an event that is always observable in the TestDriver.
- "ErrorStatus" field: needs revision; see "Test failures" below.
- "ErrorMessage" field: one per step is fine.
- "XPath" field: let us use a better, more general name, such as
"message expression". (A possible revised layout is sketched below.)
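
Putting these suggestions together, a revised table row could look something
like this (purely illustrative; the "="/"==" distinction is the one proposed
in item 5 below):

    Test Case ID    | Step | Step Name    | Message Expression               | ErrorMessage
    ----------------+------+--------------+----------------------------------+--------------------
    urn:semreq:id:3 | 1    | SendMessage  | eb:CPAId = "cpa-test-1"          | "message not sent"
    urn:semreq:id:3 | 2    | GetMessage   | eb:RefToMessageId == $1MessageId | "no response"
    urn:semreq:id:3 | 3    | Verification | $2MessageId != ""                | "verification failed"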

5. XPath / message expressions [Mike, Matt]:
- Some XPath expressions are used to build message material (the "SetMessage"
action), while others express "filters" that select the right message
("GetMessage"). It would be good to distinguish them syntactically, e.g. by
separating the assignment operator ("=") from the equality operator, as in
programming languages (e.g. "==").
- GetMessage steps should not be aggregated with the final Verification
condition: a GetMessage should only contain filters that select the right message.
- The final step (Verification) will contain the boolean expression that
defines success (it is currently merged with the "filter" expression of the
GetMessage step in the current draft).
- Use of parameters ($MessageId, etc.): it seems these parameters sometimes
need to be set from current (e.g. received) material. It is not clear how that
is done (see Case id:3).
We face two issues:
(a) How do we "remember" message material from past test steps?
We could use XPath-based assignment, e.g. a GetMessage could contain filter
expressions as well as assignment expressions: e.g. $MessageId = <xpath expr>.
(b) Across several steps, since several messages are involved and we may want
to refer to material from more than one step, we could use the step number to
qualify the parameter: $1MessageId, $2MessageId, ...
- Advanced verification conditions: sometimes a verification condition needs
more than just constraints on message material, e.g. checking that step N
completed within 10 seconds of step M. In any case, it seems we need to set a
timeout for step completion. What else? How should the script language be
improved for this? When it comes to checking that we received, say, 3 messages
of a kind (e.g. for retries in reliability testing), could that be an
enhancement of the GetMessage step, where we would specify how many messages
of this kind must be received for the step to complete? (A strawman covering
these points is sketched below.)
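
To illustrate, the three kinds of expressions and the extra step attributes
could be kept apart as follows (the step layout, operator syntax and parameter
names here are all hypothetical, just to make the proposal concrete):

    Step 1: SendMessage
        eb:MessageData/eb:MessageId = "urn:msg:case3-1"      <- assignment ("="): builds material
    Step 2: GetMessage   [timeout: 10s; minimum messages: 3, e.g. for retry tests]
        eb:Acknowledgment/eb:RefToMessageId == $1MessageId   <- filter ("=="): selects the message
        $2Timestamp = eb:MessageData/eb:Timestamp            <- assignment: remembers received material
    Step 3: Verification
        $2Timestamp - $1Timestamp < 10s                      <- boolean success condition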


6. Test Verification and Test Failures [Mike, Matt]:
- A separate step for this would be good, as mentioned above.
- Sometimes a successful test needs to verify that no error message was
received, in addition to completing all its steps. How do we do that? Should
we define "exception step(s)" for a test case, which capture messages that
should NOT occur and, when completed, generate a test failure?
- It is important to distinguish two types of failure for a Test Case:
(a) An "operation" failure, resulting from the impossibility of carrying out
the test properly, e.g. some test step could not complete for a reason
unrelated to the spec requirements we are trying to test.
Typically, this happens when the Test Requirement "pre-condition" cannot be
realized. In such a case, the conformance report should NOT conclude that the
MSH implementation is non-conforming, only that the test could not be performed.
(b) A "conformance" failure, clearly showing that the spec requirement is not
satisfied by the MSH implementation.
Generally, type (a) failures correspond to some step that could not be
completed. So we could associate either type of error with each step:
(1) a failure causing an "operation" failure, or (2) a failure causing a
"conformance" failure.
- Should we also make room for a "failure" expression in the verification
step? In other words, when the "success" expression is not satisfied, we may
still need to distinguish the kind of test failure. A specific error message
could be associated with each kind. (See the sketch below.)
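
Concretely, each step could carry its failure type, and the exception step
idea could be expressed along these lines (again, hypothetical markup; the
//eb:ErrorList filter simply matches any received message carrying an ebXML
ErrorList):

    Step 2: GetMessage      if step incomplete -> "operation" failure
                            ("test could not be performed")
    Step 3: Verification    if success expression false -> "conformance" failure
                            ("requirement not satisfied by MSH")
    Exception step: GetMessage with filter //eb:ErrorList
                            if a message matches -> "conformance" failure
                            ("unexpected error message received")

The conformance report would then flag the MSH as non-conforming only for the
"conformance" outcomes; "operation" outcomes would be reported as "test not
performed".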


