Subject: Durand [ebxml-iic] 9/2/2002: Test Case Material Development


Jacques,
My inputs to the questions posed.
 
Monica

	-----Original Message----- 
	From: Jacques Durand 
	Sent: Fri 8/30/2002 1:00 PM 
	To: 'michael.kass@nist.gov'; 'matt@xmlglobal.com' 
	Cc: 'ebxml-iic@lists.oasis-open.org' 
	Subject: [ebxml-iic] Test Case material, comments
	
	

	Mike, Matt and all: 

	Attached are additional comments on the Test Case material that Mike
	used in the last f-2-f (merged with previous comments made during the
	f-2-f, with issue owners). The test case material design also affects
	Interoperability test cases, so I am sending this to the whole IIC list.

	Let us discuss this on Tuesday, if everyone can make it back
	from Labor Day...

	Regards, 

	Jacques 

	

In order to complete Test Case material design:
----------------------------------------------

1. [Matt]: specify the CPA subset used. Which format should we pick?
So far we have two candidates: tpaSample.xml, or minicpa.xml from Hatem?
(quick comments on tpaSample.xml:
- SyncReplyMode options are missing
- what is the distinction between Ack "expected" and Ack "requested"?)
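
For reference, a rough sketch of the kind of CPA fragment at issue, using
CPPA 2.0-style names (the exact structure in tpaSample.xml may differ, and
the attribute values here are only illustrative):

  <tp:DeliveryChannel tp:channelId="syncChannelA1"
                      tp:transportId="transportA1"
                      tp:docExchangeId="docExchangeA1">
    <!-- syncReplyMode and ackRequested follow CPPA 2.0 usage;
         in CPA terms, the "requested" wording above maps to ackRequested -->
    <tp:MessagingCharacteristics tp:syncReplyMode="mshSignalsOnly"
                                 tp:ackRequested="always"
                                 tp:ackSignatureRequested="never"
                                 tp:duplicateElimination="always"/>
  </tp:DeliveryChannel>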

2. [Jeff]: we need to finalize the message template data, in particular
- the way we parameterize these templates (XPath?)
- the way we build complete MIME envelopes and their content (either
reusing a template approach - restrictive but simple - or some other
document-building mechanism; a sketch follows below).
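
As one illustration of the template approach, a hypothetical ebMS 2.0 SOAP
header template with $-parameter placeholders (the parameter convention is
an assumption here, not settled syntax):

  <SOAP:Envelope
      xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
    <SOAP:Header>
      <eb:MessageHeader SOAP:mustUnderstand="1" eb:version="2.0">
        <eb:From><eb:PartyId>$FromPartyId</eb:PartyId></eb:From>
        <eb:To><eb:PartyId>$ToPartyId</eb:PartyId></eb:To>
        <eb:CPAId>$CPAId</eb:CPAId>
        <eb:ConversationId>$ConversationId</eb:ConversationId>
        <eb:Service>$Service</eb:Service>
        <eb:Action>$Action</eb:Action>
        <eb:MessageData>
          <eb:MessageId>$MessageId</eb:MessageId>
          <eb:Timestamp>$Timestamp</eb:Timestamp>
        </eb:MessageData>
      </eb:MessageHeader>
    </SOAP:Header>
    <SOAP:Body/>
  </SOAP:Envelope>

A step could then overwrite any placeholder by addressing it with an XPath
expression such as
/SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:MessageData/eb:MessageId.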

3. [Mike, Monica?] mapping of Test Cases to Test Assertions.
Can we really assume that there is always 1 test case for each test assertion?
I am sure that is the case for 98% of them, but it would be prudent not to preclude
the possibility of more than 1 test case for an assertion. A test case is always
more concrete than an assertion; could there be situations where it makes sense to
have two or more tests for the same assertion that we would not split?
My question is in fact: do we really have to decide on this, or can we adopt a
Test Case ID scheme that allows for this if we need it later?
It could be the same as the current assertion ID (e.g. urn:semreq:id:3), and in cases
where we have 1-to-n, we could use additional letters, e.g. urn:semreq:id:3a,
urn:semreq:id:3b, ..., or dot numbering: urn:semreq:id:3.1, urn:semreq:id:3.2...
Would that be an issue?

[mm1: In all honesty, I think over time we could have M-M (test case to assertion), so I suggest we
provide for extensibility.  This is a discussion item similar to my previous one regarding
aggregation of test cases for a type of scenario or lifecycle of functionality testing.
And, too, we need to allow for granularity of the test assertion and cases.]
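
A minimal illustration of the dot-numbering option (the element and attribute
names are invented for this sketch); an explicit assertion reference would keep
the mapping searchable even if M-M is later allowed:

  <!-- one assertion, two concrete test cases -->
  <TestCase id="urn:semreq:id:3.1" assertionRef="urn:semreq:id:3"/>
  <TestCase id="urn:semreq:id:3.2" assertionRef="urn:semreq:id:3"/>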

4. Test Case table formatting [Mike,...]:
- Test Case ID field: see the above remarks on numbering. (By the way, why "semreq"?)
- "Action Element" field: we could use more intuitive "step names",
e.g. for the sending of a message: "SendMessage" instead of "SetMessage".
- Also, I strongly suggest that we make the "verification" of the test a separate
and final step (it could be called "Verification").
- "Party" field: probably not needed, as it is always the TestDriver, as per our
definition of what a "step" is: an event that is always observable in the TestDriver.

[mm1: What if in the future the Party is actually observable in the Test Service?]
[mm1: YES! Separate the test from the verification - this is a key concept for a test
framework; just because there is a test, this does not imply verification.]

- "ErrorStatus" field needs revision. See below "Test failures".
- ErrorMessage: for each step is fine.
- "XPath" field: let us use a better name... should be more general , 
like "message expression" or something like that.


5. XPath / message expressions [Mike, Matt]:
- some XPath expressions are for building message material (the "SetMessage" action),
some are for expressing "filters" to select the right message (GetMessage).
It would be good to distinguish them in syntax, e.g. the assignment operator "="
could be distinguished from the equality operator, as in programming languages (e.g. "==").
- GetMessage steps should not be aggregated with the final Verification
condition: GetMessage only contains filters to select the right message.
- the final step (or Verification) will contain the boolean expression
that defines success (currently it is merged with the "filter" expression of the
GetMessage step in the current draft). See the sketch after the next comment.

[mm1: See my question above about where we could have 1-M (test assertion to test case).  If the
verification is a complement to, but not part of, the test, do we not have this condition: 1 test assertion
that results in (1) a test of the case, and (2) a verification of the case?  Either way, the verification should be separate.]
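
A sketch of the proposed syntactic distinction, reusing the step names and
$-parameters assumed above ("==" compares, "=" assigns):

  <GetMessage>
    <!-- filter: select the message whose RefToMessageId matches -->
    //eb:MessageData/eb:RefToMessageId == $MessageId
    <!-- assignment: remember material from the captured message -->
    $ConversationId = //eb:ConversationId
  </GetMessage>
  <Verification>
    <!-- boolean success condition, kept out of the GetMessage filter -->
    //eb:Acknowledgment/eb:RefToMessageId == $MessageId
  </Verification>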

- Use of parameters ($MessageId, etc.): it seems that these parameters sometimes need
to be set to current (e.g. received) material. It is not clear how that is done (see Case id:3).
We face two issues:
(a) how to "remember" message material from past test steps?
We could use XPath-based assignment, e.g. a GetMessage could contain filter
expressions as well as assignment expressions, e.g. $MessageId = <xpath expr>
(b) across several steps, as several messages are involved and we may want to
refer to material from more than 1 step, we can use the step # to identify the parameter:
$1MessageId, $2MessageId...
- advanced verification conditions: sometimes verification conditions need more
than just constraints on message material, e.g. checking that step N completed
within 10 sec of step M. It seems in any case that we need to set a timeout for step
completion. What else? How do we improve the script language for this? When it comes
to checking that we got, say, 3 messages of a kind, e.g. for retries in reliability
testing, could that be an enhancement of the GetMessage step (where we would specify
how many messages of this kind need to be received for the step to complete)?
A sketch follows after the comments below.

[mm1: Regardless of what path is chosen, keep in mind - extensibility and discrete definition.  If we continue
to embed important data elements in the expressions, our capability to identify them (should they need to be searched
for and found) may be more difficult.  I keep thinking about the database days when we concatenated data and then
expected to search for a text string, augh.]

[mm1: On advanced verification conditions, I believe we are seeing that we not only have conditions on the assertion but pre-
and post-conditions on the test case itself, as well as on the verification phase (not really metadata though).
Can we attach those conditions to the test case itself, where the conditions are "included" with the test case?]
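
One way the step-indexed parameters, the retry count, and a timing condition
could look (minMessages, timeout, and the elapsed() function are hypothetical
syntax, not part of any current draft):

  <SendMessage stepId="1"/>
  <!-- the step completes only after 3 matching messages (e.g. retries)
       arrive within the timeout -->
  <GetMessage stepId="2" minMessages="3" timeout="PT60S">
    //eb:MessageData/eb:RefToMessageId == $1MessageId
  </GetMessage>
  <Verification>
    <!-- hypothetical timing check: step 2 completed within 10 sec of step 1 -->
    elapsed(step1, step2) lt PT10S
  </Verification>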

6. Test Verification and Test failures [Mike, Matt]:
- A separate step for this would be good, as mentioned above.
- sometimes a successful test needs to verify that no error message was received,
in addition to completing all its steps. How do we do that? Should we define
"Exception step(s)" for a test case, which would capture messages that should NOT occur
and then, when completed, generate a test failure?

[mm1: We have discussed exceptions at length in BCP, and exceptions may not always be errors - they may be less
traveled paths.  So, to err on the side of future function, I would suggest you allow for both exception
steps and outcomes (test failure or test incomplete? - do we see these as different? - this links with Jacques'
question below).]

- it is important to distinguish two types of failure for a Test Case:
(a) "operation" failure, resulting from the impossibility of carrying out the test properly,
e.g. some test step could not complete for some reason unrelated to the spec requirements
we are trying to test.
Typically, this happens when the Test Requirement "pre-condition" cannot be realized.
In such a case, the conformance report should NOT conclude that the MSH implementation
is not conforming, just that the test could not be performed.
(b) "conformance" failure, clearly showing that the spec requirement is not satisfied
by the MSH implementation.
Generally, type (a) failures correspond to some step that could not be completed.
So we could associate either type of error with each step: (1) a failure causing an
"operation" failure, (2) a failure causing a "conformance" failure.
- should we also make room for a "failure" expression in the verification step?
In other words, when the "success" expression is not satisfied, we may
still need to distinguish the kind of test failure. A specific error message
could be associated with each kind. (A sketch follows after the comment below.)

[mm1: There may be 3 types of 'failures' - not certain if 'failures' is the best word - (1) system - outside
of the test condition but affecting the test, (2) operation - failure to complete the conditions needed to test
the assertion, and (3) conformance - failure of the test assertion itself as defined in a test case(s).  Can
we differentiate a system failure from an operation failure? How will we be able to differentiate which failure
type occurred?  Some of it may be outside of the visibility or scope of the test framework - does this require
data from the testing node we interact with?]
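
A sketch of how exception steps and failure typing might be expressed, with
invented names throughout (a "system" value could be added to onFailure to
cover Monica's third category):

  <TestCase id="urn:semreq:id:3.1" assertionRef="urn:semreq:id:3">
    <SendMessage onFailure="operation"/>
    <GetMessage onFailure="operation">
      <!-- filter expressions as above -->
    </GetMessage>
    <!-- exception step: capturing a matching message FAILS the test -->
    <ExceptionStep onMatch="conformance">
      //eb:ErrorList/eb:Error/@eb:errorCode == "DeliveryFailure"
    </ExceptionStep>
    <Verification onFailure="conformance">
      <!-- boolean success expression -->
    </Verification>
  </TestCase>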


