ebxml-iic-msg message



Subject: [ebxml-iic-msg] Re: [CTTF-MS]comments on ebXMLTestSuite



Jacques,

   Here is my response, embedded in your attached text file.

Regards,
Mike



At 12:04 PM 4/30/2002 -0700, Jacques Durand wrote:

Hello:

here are some (lengthy) comments on ebXMLTestSuite doc/design.

Cheers,

jacques

Mike, Matt: 

A first review of the material you posted some time ago - so far focused here
on ebXMLTestSuite.xml, which is really key to the automation of our tests.
Overall, I am impressed with the level of automation we seem to be able
to achieve, especially in the "analysis" part of the test.
Looks like XPath will be of great help here, as you and Matt suggested.
In addition, Schematron is also worth investigating (Matt, what do you think of Schematron?)

My comments below are mostly driven by the "operational" aspects of the tests
(so that this XML material is easy to process by a testbed implementation) 
and by the expectation that we can reuse most of it to drive Interoperability tests as well,
in addition to Conformance tests.

First comment [C1]:
As a context to my subsequent comments, I will assume that these conformance tests
will run on a testbed architecture (harness) that involves two Test driver components:
(a) an Application Test Driver (or AT-driver) (simulating application layer,
for feeding  message data and config data to
the candidate MSH, through its application API, but also for analyzing 
messages received by candidate and transferred to app layer)
(b) a Message Test Driver (or MT-driver) (simulating the other party's MSH, interacting
at wire-level with the candidate MSH, sending messages to the
candidate MSH, receiving messages from it, and analyzing these messages.)
So I assume that each Test Step of a given Test Case is executed by either one 
of these test components. 
NOTE: of course both test drivers could be implemented by the same piece of code, running
in same process, but this is the most general case.

[C2]
Assuming these two drivers are set-up, the TestStep element of a Test Case 
could specify which one (AT-driver or MT-driver) executes the step. 
We could do this with the new attribute "stepDriver" below:
<ebTest:TestStep stepId="s1" stepName="Send a SOAP message from the candidate party" stepDriver="AT-Driver">
<ebTest:TestStep stepId="s2" stepName="Parse sent message" stepDriver="MT-Driver">
 That would take us one step further toward automation.

MIKE: I had originally included such an attribute, but left it out because I was not sure
how "synchronization" would occur between the AT and MT driver.  While each driver will
know "which" TestStep to run, I was not sure how each would know "when" to execute the TestStep.
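To make the stepDriver idea concrete, here is a minimal Python sketch of how a harness might sort TestStep elements into per-driver work queues. The element and attribute names follow the snippet above; the ebTest: namespace is dropped for brevity, and the inline sample TestCase is hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a TestCase, using the proposed stepDriver attribute.
TEST_CASE = """
<TestCase>
  <TestStep stepId="s1" stepName="Send a SOAP message from the candidate party"
            stepDriver="AT-Driver"/>
  <TestStep stepId="s2" stepName="Parse sent message" stepDriver="MT-Driver"/>
</TestCase>
"""

def dispatch_steps(xml_text):
    """Group TestStep ids by the driver named in their stepDriver attribute."""
    queues = {"AT-Driver": [], "MT-Driver": []}
    for step in ET.fromstring(xml_text).findall("TestStep"):
        queues[step.get("stepDriver")].append(step.get("stepId"))
    return queues

print(dispatch_steps(TEST_CASE))
# -> {'AT-Driver': ['s1'], 'MT-Driver': ['s2']}
```

This only routes steps; as Mike notes, it does not answer the harder question of *when* each driver executes its queue.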

[C3]
The main input of step S1 is message data. You provide a set of operation elements
(ebTest:SetMessagePackage, ebTest:SetHeader,..), to enable the test driver to drive
the candidate MSH (through adequate API calls) so that it builds the right message to send out.
Two suggestions here: 
- (1)  Could we wrap all these operations that prepare the same message 
in a common XML parent element? e.g. <ebTest:SetMessage>.

MIKE: I believe a container element for "setting" portions of a message should be here.
This was overlooked and would provide a fundamental grouping of setting vs. getting message
values.
 
- (2) maybe we can assume some pre-existing "message templates" (or sample messages) 
so that we do not have to set all message elements each time, but only those we want 
to override on the template. 

MIKE: If we use templates to build a message, then we could "relax" the constraints on the
ebXMLTest.xsd schema to make everything "optional".  This could allow modifications of
selected parts of the current message without having to re-define "mandatory" portions of the
message.  This may be possible if we can come up with a small reusable set of templates.


So the <ebTest:SetMessage> tag would also specify the template we want to derive from, 
e.g. <ebTest:SetMessage mesgtmpl="mesg01_testXYZ">.
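A rough Python sketch of the template idea: load a pre-existing message template and replace only the fields a test step supplies. The template content and field names here are invented for illustration; only the mesgtmpl mechanism itself is proposed above:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for a template such as "mesg01_testXYZ".
TEMPLATE = """
<Message>
  <MessageId>TEMPLATE-ID</MessageId>
  <Service>DefaultService</Service>
  <Action>DefaultAction</Action>
</Message>
"""

def apply_overrides(template_text, overrides):
    """Copy the template, replacing only the fields the test step supplies."""
    msg = ET.fromstring(template_text)
    for tag, value in overrides.items():
        msg.find(tag).text = value
    return msg

# A test step overrides just the MessageId; Service and Action keep
# their template defaults, so the schema can leave them optional.
msg = apply_overrides(TEMPLATE, {"MessageId": "test-1.1.6-run1"})
print(ET.tostring(msg, encoding="unicode"))
```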

[C4]
we may also consider an option (in addition to the way you do it now), 
where we do NOT specify the message data content directly
inside the TestStep element, but instead do it in another document 
that our TestStep element would reference, example: 
<ebTest:TestStep stepId="s1" stepName="Send a SOAP message from the candidate party" 
		 stepDriver="AT-Driver"
		 stepMessage="setmessageXYZ">
(or we could use more formal referencing, e.g. XLink?)


Such an option is motivated by: 
- several test cases may reuse same message input 
(e.g. identified by "setmessageXYZ" above), so that we do not want to repeat them. 
Same for other operations like message analysis.
- keeping test case and test step definitions more high-level or abstract,
and making the document more readable. The Test Case doc for the simple r1.1.6 is already busy;
future test cases might get cluttered with overly detailed message or analysis data. 

MIKE:  This is an option. Re-use of TestSteps could also be accomplished through the use
of IDREF's to the unique "stepId" associated with each TestStep within the TestSuite file.
I personally try to avoid "files" whenever possible, as each is yet another headache for
bookkeeping/naming, etc.  Assuming the majority of the coding will exist in the templates,
the coding necessary to "set" or "get" values would be fairly small I believe.


[C5] The TestTrace schema seems to be general enough so that it could be used
not only as output of the MT-driver (as in S2), but also of the AT-driver, for 
monitoring the messages received by the candidate MSH.
That would especially be useful for Interoperability tests - but also for
some conformance tests (e.g. when checking whether the right message is passed
from the candidate MSH to its application). We need to explore this.

MIKE: Hopefully we can achieve both interoperability and conformance goals with a good
trace format.

[C6] I am a little unclear whether the way we associate a trace with a test, and with
the input messages for this test, is sufficient or not.
When a message sent by the candidate MSH is received by the MT-driver
(as in r1.1.6 step S2), I'd like to be convinced we are going to use it in the
"right test": because I am assuming that the tests may not always be done in a 
purely sequential way. The following possible scenarios should be supported
(maybe they are, but I am not sure):

(sc1)- the same test case is repeated several times, and we may want to distinguish
the test data ( and test condition) for each. I.e. the input message crafted by
each instance of the same test will be different - e.g. using different messageID -
and the test condition, as specified by the parseTrace element, will target the
corresponding message. I guess that can be done by adding messageID condition
(in XPath or Schematron rule context) in order to distinguish the right trace,
in addition to the Test Req#.

MIKE: This is a good point.  My answer is that I envision driving the tests from the ebXMLTestRequirements.xml 
file. The way that I would see us providing "unambiguous" identification of TestCase and TestStep
data would be through the "inheritance" of the properties of the TestRequirement defined in 
ebXMLTestRequirements.xml.  That way, defining the "context" of a TestStep would require the
XPath expression to include something like:

/TestTrace/TestCase [@mainRequirementReferenceId='1.1' and @requirementReferenceId='1.1.6']/TestStep[@id='s2']/GetMessagePackage

( the MT and AT would have to provide the "mainRequirementReferenceId" as an attribute in the trace to further differentiate
TestSteps and/or TestCases ).  This requires a fundamental ability of the TestCase to "extend" the TestRequirement class and
inherit its properties and methods.  I am hoping that we could do that with our test harness.
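As a sanity check that an expression of this shape really does single out one trace entry, here is a small Python example using xml.etree.ElementTree. The trace content is invented; note that ElementTree does not support "and" inside a predicate, so the two attribute conditions are chained instead:

```python
import xml.etree.ElementTree as ET

# Hypothetical trace with two TestCase entries that share the same
# requirementReferenceId and differ only in mainRequirementReferenceId.
TRACE = """
<TestTrace>
  <TestCase mainRequirementReferenceId="1.1" requirementReferenceId="1.1.6">
    <TestStep id="s2"><GetMessagePackage>payload-A</GetMessagePackage></TestStep>
  </TestCase>
  <TestCase mainRequirementReferenceId="1.2" requirementReferenceId="1.1.6">
    <TestStep id="s2"><GetMessagePackage>payload-B</GetMessagePackage></TestStep>
  </TestCase>
</TestTrace>
"""

# Chained predicates play the role of the "and" in the XPath above.
query = (".//TestCase[@mainRequirementReferenceId='1.1']"
         "[@requirementReferenceId='1.1.6']/TestStep[@id='s2']/GetMessagePackage")

pkg = ET.fromstring(TRACE).find(query)
print(pkg.text)  # -> payload-A
```

Without the mainRequirementReferenceId predicate, the query would match both entries and find() would silently return the first one.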


(sc2)- Also, it may happen that for some reason, the message 
currently received by the MT-driver is not related to the "current" test, 
but to a previous one (e.g. due to resendings in case of retries, etc.)

MIKE:  This is more complex, in that the MT is receiving a message, possibly asynchronously, or from
a previous test, and since it is not getting its context information through the driver XML, it would need
some kind of "external context" information.  I had previously suggested using the message itself ( either
as an additional payload, or perhaps stored elsewhere in the message ) to provide this context.  This 
would solve the "synchronization" problem between MT and AT drivers, and provide the necessary context
to the trace generator. 

So these scenarios require unambiguous correlation of input messages with output trace.
Also: the trace is supposed to contain the test req (e.g. r1.1.6, and possibly the step # e.g. S2)
how/when are these added to the trace?


 If this info comes from the TestCase doc that
is fed to the MT-driver, then that puts a strong requirement on the order of
messages received, that should match the order the MT-driver is getting test case data.
Could we have the trace built only from message data? Can we then introduce test req references
- e.g. r1.1.6 - in this message data? (e.g. in the Service/Action field, or PartyID...)

MIKE: I reference my statement above: such information could be added via inheritance of TestRequirement
properties (insufficient for "asynchronous responses"), or by passing this information through the actual 
ebXML message as an additional payload or parameter in the message.
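The thread only proposes that the context travel in the message itself (e.g. in a Service/Action field or an extra payload); the encoding below is purely hypothetical, a Python sketch of tagging an Action value with the test context on the way out and recovering it on the way in:

```python
def tag_action(action, main_ref, req_ref, step_id):
    """Encode the test context into the Action value carried by the message.

    The ";ctx=" delimiter and field order are invented for this sketch.
    """
    return f"{action};ctx={main_ref}:{req_ref}:{step_id}"

def parse_action(tagged):
    """Split an incoming Action back into the plain action and its context."""
    action, ctx = tagged.split(";ctx=")
    main_ref, req_ref, step_id = ctx.split(":")
    return action, {"mainRequirementReferenceId": main_ref,
                    "requirementReferenceId": req_ref,
                    "stepId": step_id}

# The MT-driver can now correlate the message with the right test,
# even if it arrives late or out of order.
tagged = tag_action("Ping", "1.1", "1.1.6", "s2")
print(parse_action(tagged))
```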


[C7]
Here is the operational behavior of the test execution I would expect.
Let me know if that's what you had in mind:


(a)- The testbed (AT-driver + MT-driver) is being fed with a TestSuite doc, which is
a sequence of TestCases. Each TestCase describes a sequence of TestSteps.
Some TestSteps are to be executed by the AT-driver, some by the MT-driver.
Normally, everything will execute in the right order. But we do not assume
a perfectly sequential order of the tests. (e.g. some messages may arrive in
different order, and we do not assume a notification of end of Test #123 before
starting Test#124.)

MIKE: My initial thought is that the testbed is fed from TestRequirements.doc
( via an XML parse of the requirements, and the Condition Clause(s) and Assertion ).
Corresponding to each Condition and Assertion is a single TestCase, with a matching requirementReferenceId.
Each TestCase will result in a Pass/Fail (True/False) which will ultimately determine whether each
SemanticRequirement, and ultimately every TestRequirement evaluates to a Pass/Fail.  The 
"Clause" logic of the SemanticRequirement allows for some complex nesting of boolean tests.


(b)- any message received (e.g. by MT-driver) is appended to a Trace file, 
in the format described by the TestTrace schema, in the order of reception.
The "requirementReferenceID" attribute is set by using the test case # (e.g. 1.1.6)
that is reported in some field of the message. (Same for the test case step?)  

MIKE: Based on your previous observation of possible ambiguities, I would also include the "mainRequirementReferenceId" 
(always unique) to further disambiguate "re-used" TestCases and TestSteps.


(c)- the "ParseTrace" command (e.g. as described in TestStep S2 of 1.1.6) 
will be passed to the MT-driver, at a time that may not match exactly the
time the test is executed (message sent).
The command has enough data to discriminate the right trace subset (the Test # e.g. 1.1.6, 
and possibly the expected MessageID).

MIKE: Yes.  As long as there is a unique context identifier in the form of mainTestRequirementId, 
combined with requirementReferenceId (however we get this information... most likely through
message passing), it does not matter where in the trace this message appears.



(d)- the MT-driver maintains a queue of "ParseTrace" orders, but does not trust the next
message received to be for the next ParseTrace order - depends on when the associated 
message is received,
due to network latency, "noise" messages, overlapping tests, etc... 
So the MT-driver is periodically watching the
trace file, and tries to identify messages in it that match its next parseTrace order. 
If a parseTrace order can be executed on an instance of the trace, it executes the 
related test condition.

MIKE: This sounds like a good way to solve the "latency" problem that can be introduced 
in so many ways.
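The queue-and-rescan behavior described in (d) can be sketched in a few lines of Python. The dict-based order and trace-entry shapes are invented for illustration; the point is only that matching is by context, not by arrival order:

```python
def match_orders(orders, trace_entries):
    """Try each queued ParseTrace order against the trace seen so far.

    An order fires when some trace entry carries its requirement reference
    and (if the order specifies one) the expected MessageId; orders whose
    messages have not arrived yet simply stay queued for the next scan.
    """
    executed, pending = [], []
    for order in orders:
        hit = next(
            (e for e in trace_entries
             if e["requirementReferenceId"] == order["requirementReferenceId"]
             and order.get("messageId") in (None, e.get("messageId"))),
            None)
        (executed if hit else pending).append(order)
    return executed, pending

orders = [{"requirementReferenceId": "1.1.6", "messageId": "m1"},
          {"requirementReferenceId": "1.2.3"}]
# Only the message for 1.1.6 has arrived; 1.2.3 stays queued.
trace = [{"requirementReferenceId": "1.1.6", "messageId": "m1"}]
done, waiting = match_orders(orders, trace)
```

A real MT-driver would run this periodically over the growing trace file, which is exactly the decoupling Jacques describes next.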

So the interest of this way to operate, is that the trace can be parsed/checked
at any time, in a de-coupled way with the actual execution of the tests.

MIKE: Yes this can allow a de-coupling of the execution of the tests from the parsing
of the trace.  However, we need to address the driving of the test suite from the
ebXMLRequirements.xml document.  The purpose of using the requirements document as the driver
is to allow us to make smart evaluations of what may be some rather complex Conditions
and Assertions making up a testing requirement, and to point to where in the boolean evaluation
of these Assertions things may go wrong. Did they occur in a pre-condition?  Was that
pre-condition a requirement, or optional?   Failure of a required Condition or Assertion ultimately
means failure of the TestRequirement. Following the logic of the SemanticRequirement Clause allows us
to logically evaluate TestCases in a meaningful way.

So while we have all of the ParseTrace commands queued up for boolean evaluation of a particular SemanticRequirement,
we need the Conditional Clause ( if present ) to create a logical grouping and evaluation of ParseTrace commands 
into a meaningful test evaluation.

This may require a "re-parse" or "re-traversal" of the TestRequirements DOM in order to meaningfully evaluate
the queued ParseTrace commands.
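The "Clause" logic Mike describes amounts to evaluating a tree of boolean connectives over per-TestCase Pass/Fail results. A minimal Python sketch, with a made-up clause representation (the real structure lives in ebXMLTestRequirements.xml):

```python
def evaluate_clause(clause, results):
    """Recursively evaluate a SemanticRequirement-style clause tree.

    A clause is either a test-case id (looked up in the Pass/Fail results)
    or a dict {"op": "and"|"or", "clauses": [...]} nesting further clauses.
    """
    if isinstance(clause, str):
        return results[clause]
    sub = (evaluate_clause(c, results) for c in clause["clauses"])
    return all(sub) if clause["op"] == "and" else any(sub)

# Hypothetical Pass/Fail outcomes of three TestCases.
results = {"tc-1.1.6": True, "tc-1.1.7": False, "tc-1.1.8": True}

# "1.1.6 must pass, and at least one of 1.1.7 / 1.1.8 must pass."
clause = {"op": "and",
          "clauses": ["tc-1.1.6",
                      {"op": "or", "clauses": ["tc-1.1.7", "tc-1.1.8"]}]}

print(evaluate_clause(clause, results))  # -> True
```

Walking the tree this way also records *where* a failure occurred (pre-condition vs. assertion, required vs. optional), which is the diagnostic value Mike is after.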






