ebxml-iic-msg message



Subject: RE: [ebxml-iic-msg] comments on MS Conf TestSuite (3)


To all,

    my comments tagged as **MIKE:


Mike

May 2nd comments: <Jacques2>

------------------------------------------
Mike, Matt: 

A first review of the material you posted some time ago - so far focused here
on ebXMLTestSuite.xml, which is really key to the automation of our tests.
Overall, I am impressed with the level of automation we seem to be able
to achieve, especially in the "analysis" part of the test.
Looks like XPath will be of great help here, as you and Matt suggested.
In addition, Schematron is also worth investigating (Matt, what do you think of Schematron?)

My comments below are mostly driven by the "operational" aspects of the tests
(so that this XML material is easy to process by a testbed implementation) 
and by the expectation that we can reuse most of it to drive Interoperability tests as well,
in addition to Conformance tests.

First comment [C1]:
As a context to my subsequent comments, I will assume that these conformance tests
will run on a testbed architecture (harness) that involves two Test driver components:
(a) an Application Test Driver (or AT-driver) (simulating application layer,
for feeding  message data and config data to
the candidate MSH, through its application API, but also for analyzing 
messages received by candidate and transferred to app layer)
(b) a Message Test Driver (or MT-driver) (simulating the other party's MSH, interacting
at wire-level with the candidate MSH, sending messages to the
candidate MSH, receiving messages from it, and analyzing these messages.)
So I assume that each Test Step of a given Test Case is executed by either one 
of these test components. 
NOTE: of course both test drivers could be implemented by the same piece of code, running
in same process, but this is the most general case.

[C2]
Assuming these two drivers are set up, the TestStep element of a Test Case 
could specify which one (AT-driver or MT-driver) executes the step. 
We could do this with the new attribute "stepDriver" below:
<ebTest:TestStep stepId="s1" stepName="Send a SOAP message from the candidate 
party" stepDriver="AT-Driver">
<ebTest:TestStep stepId="s2" stepName="Parse sent message" stepDriver="MT-Driver">
 That would take us one step further toward automation.

MIKE: I had originally included such an attribute, but left it out because I was not sure
how "synchronization" would occur between the AT and MT driver.  While each driver will
know "which" TestStep to run, I was not sure how each would know "when" to execute the TestStep.

*MIKE: I will add this attribute in the next release of the ebXMLTestSuite.xsd schema

<Jacques> My expectation is that the only "synchronization" needed between AT and MT
can be achieved through the trace that each produces for the other. 
An example: assume a request-response, where MT sends a message to candidate MSH (testStep_1 ), 
and expects a "business" response prepared by AT.
TestCaseX = testStep_1 (on MT) + testStep_2 ("receive message" on AT) + 
testStep_3 ("send corresponding response message" on AT) + testStep_4 (get message on MT).
If there is enough info in the message trace alone to correlate it with the right testCase
instance, then each of these 4 steps can synchronize the next one, and even allow for
"noisy" trace in the middle... so hopefully that should be sufficient. Opinion?

*MIKE: If we can find the right place for this synchronization information within the message,
then I like this approach.  I think that the MessageId element is a logical location for this 
information.  A study of the testing requirements involving the MessageId element should reveal possible
impact/problems of using it as a synchronization tool for testing.

<Jacques2> 
We need to look into that. But I suspect we do not even need to meddle with the MessageID:
we can use perfectly "normal" messageIDs. The Trace format as proposed
would need to be updated. It would go like this:

- A Trace for a particular TestCase - whether generated from AT or from MT driver or both - 
would appear in the trace file (in fact in two trace files, as there may be one for AT
and one for MT since they may be remote from each other) as a list of separate "trace items", 
to be correlated between them mostly based on the MessageID info they contain.

- The first trace item generated (by whichever component initiates the TestCase, either on AT or MT )
 would be a "TestCase header" (containing Test Req info, etc.) 
That would be the only time where TestCase info would appear in the trace for this TestCase.
For example in TestCaseX previously mentioned above:

1- MT would start by generating in its trace file, 
a <TraceHeader TestCase="r1.2.3"... MessageID="123456789"...> item.
Note that we introduce in this header the MessageID of the message to be generated in the first TestStep.

2- Then, when executing testStep_1, MT will generate in its local trace, another separate trace item 
for the generated message, with MessageID="123456789". Note that even though this trace item is separate
from the "header" item (not nested into each other), the ultimate reporting tool will be able 
to correlate them thanks to MessageID.

3- When AT executes testStep_2, it reports the same (received) MessageID in its local trace item. 
Indeed, even if AT simulates the application level, the candidate MSH is supposed to pass 
the msg ID of received messages to it.
(one use of it, for the business app, is to set the RefToMessageID of the response business message). 
So here again, when we merge later the AT trace with the MT trace for reporting or maybe for later stage
test conditions (e.g. on timing between messages, etc.), we'll be able to correlate all trace items
based on MessageID.

4- When AT executes testStep_3, it reports the new (sent) MessageID (e.g. = 555566667)
of its business response in its local trace item. 
But it also reports in the trace item the RefToMessageID that contains the previous MessageID...
So here again the two trace items generated by AT for TestCaseX can be correlated 
(even in case other foreign trace items, corresponding to other received messages, 
are generated in-between). Only freak case I can imagine is when we want to set 
on purpose a "bad" RefToMessageID for testing... but here again we could use a formatting convention
that would still allow for passing the previous MessageID (e.g. RefToMessageID = "BAD_123456789").

5- Then, when executing testStep_4, MT will generate in its local trace, a separate trace item for
the response message (MessageID = "555566667", RefToMessageID = "123456789"). Here again
we can correlate based on RefToMessageID.

So even though all these trace items are generated in a totally "asynchronous" way, based on 
messages received, and even though these messages do not contain explicit TestCase info,
we should be able to correlate them all (and their trace) within the same TestCase.
</Jacques2>
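The correlation rule sketched in steps 1-5 could look like the following. This is a minimal sketch, assuming flat trace items carrying only message data (MessageID / RefToMessageID); the dictionary keys and the "BAD_" prefix convention follow the discussion above, but everything else is an illustrative assumption, not the agreed TestTrace format.

```python
# Group flat, asynchronously written trace items back into TestCase instances
# by following the MessageID / RefToMessageID chain, as described above.

def correlate(trace_items):
    """Group trace items by TestCase instance, anchored on the first MessageID."""
    groups = {}      # anchor MessageID -> list of trace items
    anchor_of = {}   # any seen MessageID -> anchor MessageID
    for item in trace_items:
        # tolerate the "BAD_<id>" formatting convention for deliberately bad refs
        mid = item.get("MessageID", "").replace("BAD_", "")
        ref = item.get("RefToMessageID", "").replace("BAD_", "")
        anchor = anchor_of.get(mid) or anchor_of.get(ref) or mid
        anchor_of[mid] = anchor
        groups.setdefault(anchor, []).append(item)
    return groups

# The four steps of TestCaseX, arriving out of order, with a foreign item mixed in:
trace = [
    {"kind": "TraceHeader", "TestCase": "r1.2.3", "MessageID": "123456789"},
    {"kind": "sent", "MessageID": "123456789"},                                  # testStep_1 (MT)
    {"kind": "foreign", "MessageID": "999"},                                     # unrelated noise
    {"kind": "received", "MessageID": "123456789"},                              # testStep_2 (AT)
    {"kind": "sent", "MessageID": "555566667", "RefToMessageID": "123456789"},   # testStep_3 (AT)
    {"kind": "received", "MessageID": "555566667", "RefToMessageID": "123456789"},  # testStep_4 (MT)
]
groups = correlate(trace)
print(len(groups["123456789"]))  # the five TestCaseX items; the noise item is excluded
```

Note the TraceHeader item needs no special handling: since it carries the first TestStep's MessageID, it lands in the same group as the messages themselves.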

**MIKE: This looks like a good way to link common messages within a TestCase. Since messages will be 
coming "asynchronously", as you suggest, the current TestTrace.xsd schema will need to be restructured,
as TestStep messages could appear in any sequence and will not be nested within a <TestCase>
(i.e. their context is uniquely defined by the MessageId). I will redesign the TestTrace.xsd schema to reflect that.

**MIKE: A single <TraceHeader>, as you describe above would provide the necessary metadata for generating a test report
( through its MessageId attribute, pointing to the first TestStep MessageId ).
The report generator, driven by "ParseTrace" commands generated by the TestSteps ( in a sequential manner), 
searches through the trace XML file for unique MessageId's. The report generator gives a PASS/FAIL to TestSteps,
and ultimately TestCases, and TestRequirements based upon XPath evaluation of the message trace content.
 Do I have this right?
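The report-generation step Mike describes, XPath evaluation of the merged trace yielding PASS/FAIL per TestStep and rolled up per TestCase, could be sketched as below. The trace layout and the XPath expressions are illustrative assumptions, not the agreed schema; Python's stdlib ElementTree only supports a limited XPath subset, which is enough here.

```python
# Evaluate each TestStep's condition as an XPath query against the trace file;
# a TestCase passes only if all of its steps pass.
import xml.etree.ElementTree as ET

trace_xml = """
<TestTrace>
  <TraceItem MessageID="123456789"><MessageHeader><Action>Ping</Action></MessageHeader></TraceItem>
  <TraceItem MessageID="555566667" RefToMessageID="123456789"><MessageHeader><Action>Pong</Action></MessageHeader></TraceItem>
</TestTrace>
"""
root = ET.fromstring(trace_xml)

# (step id, XPath the trace must satisfy) -- hypothetical conditions
steps = [
    ("s1", ".//TraceItem[@MessageID='123456789']"),
    ("s2", ".//TraceItem[@RefToMessageID='123456789']"),
]
results = {sid: ("PASS" if root.find(xp) is not None else "FAIL") for sid, xp in steps}
testcase = "PASS" if all(v == "PASS" for v in results.values()) else "FAIL"
print(results, testcase)
```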


[C3]
The main input of step S1 is message data. You provide a set of operation elements
(ebTest:SetMessagePackage, ebTest:SetHeader,..), to enable the test driver to drive
the candidate MSH (through adequate API calls) so that it builds the right message to send out.
Two suggestions here: 
- (1)  Could we wrap all these operations that prepare the same message, 
in a same XML parent element? e.g. <ebTest:SetMessage>.

MIKE: I believe a container element for "setting" portions of a message should be here.
This was overlooked and would provide a fundamental grouping of setting vs. getting message
values.


<Jacques> this <ebTest:SetMessage> would be that container, right?
I think that would in fact correspond to a high-level operation on the AT-driver
(same for the MT-driver). Other operations would be: "getMessage" (or "waitMessage"), 
"setMSHconfig".
 
*MIKE: I was thinking that <ebTest:SetMessage> would be the container, and that it would contain
<ebTest:SetMessageHeader> ( for manipulating the message envelope ), <ebTest:SetHeader>, <ebTest:SetBody> and <ebTest:SetPayload> ( for manipulating each SOAP part ),
treating each as a separate portion of the message to be modified.
*MIKE: I will modify the ebXMLTestSuite.xsd schema and create an <ebTest:SetMessage> container element.  
It will contain <ebTest:SetMessageHeader/>, <ebTest:SetBody/> and <ebTest:SetPayload/> elements.

<Jacques2> sounds good.
</Jacques2>

- (2) maybe we can assume some pre-existing "message templates" (or sample messages) 
so that we do not have to set all message elements each time, but only those we want 
to override on the template. 

MIKE: If we use templates to build a message, then we could "relax" the constraints on the
ebXMLTest.xsd schema to make everything "optional".  This would allow modifications of
selected parts of the current message without having to re-define "mandatory" portions of the
message.  This may be possible if we can come up with a small reusable set of templates.

MATT: We could also consider composition via XPath-ish declarations, e.g.

MessageHeader/From/PartyId@type=uri
MessageHeader/From/PartyId=http://foo.com/msh
MessageHeader/Service=TestService
MessageHeader/Action=test-001
...
attachment(cid:foo, data/test-001/att1.xml)
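Matt's XPath-ish declarations could be applied as overrides on a message template along these lines. This is a sketch under assumptions: the template content and the path helper are illustrative, paths are taken relative to the MessageHeader root, and a real implementation would also have to handle namespaces, predicates, and attribute paths like PartyId@type.

```python
# Apply "path = value" overrides to an ebXML-header-like template element.
import xml.etree.ElementTree as ET

template = ET.fromstring(
    "<MessageHeader>"
    "<From><PartyId type='uri'>http://example.com/old</PartyId></From>"
    "<Service>OldService</Service><Action>old-action</Action>"
    "</MessageHeader>"
)

def set_path(root, path, value):
    """Set the text of the element named by a simple slash path (no predicates)."""
    node = root
    for tag in path.split("/"):
        child = node.find(tag)
        if child is None:                  # create missing intermediate elements
            child = ET.SubElement(node, tag)
        node = child
    node.text = value

# Overrides in the spirit of Matt's declarations (root segment dropped):
set_path(template, "From/PartyId", "http://foo.com/msh")
set_path(template, "Service", "TestService")
set_path(template, "Action", "test-001")
print(template.find("Action").text)
```

Only the overridden leaves change; everything else in the template is inherited, which is exactly what makes the template approach compatible with this syntax.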

<Jacques> I would vote for the XPath-ish solution for message input, as it is in fact less
dependent on a particular message schema. Also, it feels more appropriate as input to the
AT-driver ( feeding a well-formed ebXML message header
to a test driver that is supposed to break it down in order to generate API calls to
the candidate MSH, which would then re-create it, is a little awkward.)
This is still compatible with the template approach.

*MIKE: I also like the XPath-like approach, but was not sure how easy it would be to
implement.  It is much more succinct and direct, and I think very suitable for manipulating
small amounts of content described by a template ( not so good for manipulating large content portions of a message... there
the more verbose method I describe above would be more suitable ).

<Jacques2> I guess we would use XPath only to identify small elements, e.g. to substitute values for atomic
parameters... larger parts of a message would be inherited from the "msg template"
we use to create a message, i.e. if a significant change is required for a new message, we would create 
a new template for it.
</Jacques2>

Could you guys try to come up with more explicit definitions of "operation-level" elements 
on AT-driver (and MT-driver), that would be containers of such detailed data?

*MIKE: I like the message header extension testing requirements.  I will use them as an example set of 
TestCases that use the XPath-like syntax to describe setting and getting message values.


I guess same can be said for receiving messages ("getMessage"?) - in that case,
one part of the element would be in charge of selecting the right trace element (received message),
the other would actually specify the Test condition to check?

*MIKE: Yes this is correct.

So the <ebTest:SetMessage> tag would also specify the template we want to derive from, 
e.g. <ebTest:SetMessage mesgtmpl="mesg01_testXYZ">.

[C4]
we may also consider an option (in addition to the way you do it now), 
where we do NOT specify the message data content directly
inside the TestStep element, but instead do it in another document 
that our TestStep element would reference, example: 
<ebTest:TestStep stepId="s1" stepName="Send a SOAP message from the candidate party" 
		 stepDriver="AT-Driver"
		 stepMessage="setmessageXYZ">
(or we could use more formal referencing, e.g. XLink?)


Such an option is motivated by: 
- several test cases may reuse same message input 
(e.g. identified by "setmessageXYZ" above), so that we do not want to repeat them. 
Same for other operations like message analysis.
- keeping test cases and test steps definitions more high-level or abstract
and make the document more readable. The Test Case doc for simple r1.1.6 is already busy,
future tests cases might get cluttered with too detailed message or analysis data. 

MIKE:  This is an option. Re-use of TestSteps could also be accomplished through the use
of IDREF's to the unique "stepId" associated with each TestStep within the TestSuite file.
I personally try to avoid "files" whenever possible, as each one is yet another headache for
bookkeeping/naming etc.  Assuming the majority of the coding will exist in the templates,
the coding necessary to "set" or "get" values would be fairly small I believe.

<Jacques> sounds fair. Maybe adding a little more structure ("containers") to the operations
we conduct in each step, is enough to control complexity.

[C5] The TestTrace schema seems to be general enough so that it could be used
not only as output of the MT-driver (as in S2), but also of the AT-driver, for 
monitoring the messages received by the candidate MSH.
That would especially be useful for Interoperability tests - but also for
some conformance tests (e.g. when checking if the right message is passed
from MSH candidate, to its app.) We need to explore this.

MIKE: Hopefully we can achieve both interoperability and conformance goals with a good
trace format.

<Jacques> if we can't, no big deal. But that does not seem too much of a burden
on the AT-driver: when receiving a message, the candidate MSH will callback its
application layer (here the AT-driver, or rather its MSH adapter). Or, it could be
the AT-driver is initiating the "get", reading on a queue. 
In any case, the ebXML header data is likely to be transmitted not in a header-like 
XML doc, but rather as a list of "properties", something JMS-like. 
So the AT-driver could either work on this data as is (as a list of header values) when 
verifying it got the expected message for this testStep, or when checking
test conditions of a TestStep.  Or, these header values could be cast back into a
well-formed ebXML header-like doc (the "GetHeader" trace), so that the same 
TestStep expressions / condition statement we use on MT traces, 
could be used also on AT "traces". Maybe Steve Yung (our JMS expert) could have 
a closer look at defining the interface between the MSH adapter and the AT-driver?


[C6] I am a little unclear whether the way we associate a trace with a test, and with
the input messages for this test, is sufficient or not.
When a message sent by the candidate MSH is received by the MT-driver
(as in r1.1.6 step S2), I'd like to be convinced we are going to use it in the
"right test": because I am assuming that the tests may not always be done in a 
purely sequential way. Possible following scenarios should be supported
(maybe they are, but not sure):

(sc1)- the same test case is repeated several times, and we may want to distinguish
the test data ( and test condition) for each. I.e. the input message crafted by
each instance of the same test will be different - e.g. using different messageID -
and the test condition, as specified by the parseTrace element, will target the
corresponding message. I guess that can be done by adding a messageID condition
(in the XPath or Schematron rule context) in order to distinguish the right trace,
in addition to the Test Req#.

MIKE: This is a good point.  My answer is that I envision driving the tests from the 
ebXMLTestRequirements.xml 
file. The way that I would see us providing "unambiguous" identification of TestCase and TestStep
data would be through the "inheritance" of the properties of the TestRequirement defined in 
ebXMLTestRequirements.xml.  That way, defining the "context" of a TestStep would require the
XPath expression to include something like:

/TestTrace/TestCase [@mainRequirementReferenceId='1.1' and 
@requirementReferenceId='1.1.6']/TestStep[@id='s2']/GetMessagePackage

( the MT and AT would have to provide the "mainRequirementReferenceId" as an attribute 
in the trace to further differentiate
TestSteps and/or TestCases ).  This requires a fundamental ability of the TestCase to "extend" 
the TestRequirement class and
inherit its properties and methods.  I am hoping that we could do that with our test harness.

<Jacques> It is this "intervention" of the test driver to add requirementReferenceId data 
and testStep data in the Trace that I am concerned with, because it relies on a strong
"synchronization" assumption (the one you were concerned with earlier): 
when generating a Trace element ("TestCase" elt), the test driver
has to make sure that the message data it "wraps" into the TestCase and TestStep elements 
of the trace, corresponds to the expected test case (requirementReferenceId / TestStep)...
I would rather have the test driver (AT or MT) generate the trace *exclusively* from the message
data it receives, without adding anything that comes from the Test Case definitions.
It may well be that we don't even need "test case" info in that trace...
provided we can correlate it with the TestStep we are currently executing,
based for example on the MessageID field, or even Service/Action fields.
If we can achieve that, then it is not so critical which Test Case we are currently processing
when we generate the trace (i.e. when the driver receives a message).


(sc2)- Also, it may happen that for some reason, the message 
currently received by the MT-driver is not related to the "current" test, 
but to a previous one (e.g. due to resendings in case of retries, etc.)

MIKE:  This is more complex, in that the MT is receiving a message, possibly asynchronously, 
or from a previous test, and since it is not getting its context information through 
the driver XML, it would need some kind of "external context" information.  
I had previously suggested using the message itself ( either
as an additional payload, or perhaps stored elsewhere in the message ) to provide this context.  
This would solve the "synchronization" problem between MT and AT drivers, 
and provide the necessary context to the trace generator. 

<Jacques> looks like we have been thinking of the same thing...
I would definitely favor NOT requiring the synchronization assumption,
as I am sure that would make our tests very sensitive to operational glitches,
and unreliable in the end. So we may need to put "test case correlation" data in
our messages. I would avoid the payload. MessageID and RefToMessageID might be
a solution. Service/Action also - but some tests require special setting
of these (but in turn these tests may not need correlation...) so we need to look at
the detail of each case. Good thing is that we do not need a "universal" solution:
The test case verification condition ("parseTrace" elt) is parameterized for 
each correlation case.

So these scenarios require unambiguous correlation of input messages with output trace.
Also: the trace is supposed to contain the test req (e.g. r1.1.6, and possibly the step # e.g. S2)
how/when are these added to the trace?


 If this info comes from the TestCase doc that
is fed to the MT-driver, then that puts a strong requirement on the order of
messages received, that should match the order the MT-driver is getting test case data.
Could we have the trace built only from message data? Can we then introduce test req references
- e.g. r1.1.6 - in this message data? (e.g. in Service/Action field, or PartyID...)

MIKE: I reference my statement above: such information could be added via inheritance of TestRequirement
properties ( insufficient for "asynchronous responses" ) or by passing this information through the actual 
ebXML message as an additional payload or parameter in the message.


[C7]
Here is the operational behavior of the test execution I would expect.
Let me know if that's what you had in mind:


(a)- The testbed (AT-driver + MT-driver) is being fed with a TestSuite doc, which is
a sequence of TestCases. Each TestCase describes a sequence of TestSteps.
Some TestSteps are to be executed by the AT-driver, some by the MT-driver.
Normally, everything will execute in the right order. But we do not assume
a perfectly sequential order of the tests. (e.g. some messages may arrive in
different order, and we do not assume a notification of end of Test #123 before
starting Test#124.)

MIKE: My initial thought is that the testbed is fed from TestRequirements.doc
( via an XML parse of the requirements, and the Condition Clause(s) and Assertion ).
Corresponding to each Condition and Assertion is a single TestCase, with a matching requirementReferenceId.
Each TestCase will result in a Pass/Fail (True/False) which will ultimately determine whether each
SemanticRequirement, and ultimately every TestRequirement evaluates to a Pass/Fail.  The 
"Clause" logic of the SemanticRequirement allows for some complex nesting of boolean tests.

MATT: I also did a small proof of concept on this concept prior to the introduction of our
clause syntax.  A side effect of driving tests from this high level is that reports can
be automatically generated regardless of the code used to implement any given test case.

<Jacques> Driving the tests from top level (TestRequirements.doc) sounds good. 
But I am not sure that means feeding the TestRequirements doc directly to the executing
driver components (AT and MT), though that might work under some assumptions.
The alternative I had in mind assumes a higher-level "supervisor" test component
that would:
(1) parse the TestRequirements.doc, 
(2) iterate through all test req items assumed by a particular test plan (e.g. core conformance), 
(3) break down each test case into test steps for the AT-driver, and steps for the MT-driver.
(Note: at this stage, some Condition / Assertion may play a role in avoiding unnecessary 
test step generation?)
(4) each test driver is given (somehow) the resulting "flat" list of test steps 
(no grouping by test case needed - except case ID inside the test case itself -, 
and no synchronization needed between the AT and MT besides the producer-consumer roles 
they assume around the trace they produce for each other, if we agree on the "asynchronous design" above)
(5) each test driver involved produces its own "result" trace - a list of fail/pass elements.
(6) these result traces are passed (somehow) to the "supervisor" component that generates the
ultimate report.
Note1: we could manage to get all traces produced on one side (e.g. MT), if all test cases
manage to end up with a step on MT side (e.g. AT sends back its results to MT...)
Note2: the supervisor functions could then be merged somehow with one of the
test drivers. But this supervisor could also be the front-side of the testbed, 
e.g. a user-facing Web app that can configure both AT and MT remotely.
Note3: (this "supervisor" component would be reused for Interop tests...)

*MIKE: I agree with this scenario.  I would also go a step further, and say that we may be able to simplify the
<Clause/> logic of the ebXMLTestRequirements.xsd schema to a simpler case of:

IF (A AND B AND C) then D      with A,B and C being <Condition/>s and D an <Assertion/>

*MIKE: I see nothing in ebXML MS testing ( or ebXML Registry testing ) that requires any more complexity in defining tests.
If we use this approach, then Conditions ( and their corresponding TestCases ) are processed sequentially as 
PASS/FAIL, and based on their collective boolean value, the Assertion ( and its corresponding TestCase ) is executed.
If the MT and AT driver have access to the trace log at run time, then we could conceivably evaluate each TestStep 
on-the-fly and determine whether to proceed to the next TestStep, or TestCase.
In addition, system errors can be trapped and execution terminated based upon some inherent intelligence in the MT or AT driver ( timeouts... validation errors... ).
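Mike's simplified clause logic, IF (A AND B AND C) THEN D, could be evaluated as sketched below. This is an illustrative sketch: TestCase execution is stubbed out by a lookup table, and the handling of a failed Condition follows the later remark that failure of a required Condition fails the TestRequirement (optional preconditions would need an extra flag, not modeled here).

```python
# Evaluate one TestRequirement: run the Condition TestCases first; only if
# all pass is the Assertion TestCase executed.

def run_requirement(conditions, assertion, run_testcase):
    """IF all Conditions pass THEN the Assertion decides PASS/FAIL;
    a failed required Condition fails the whole TestRequirement (assumption)."""
    if not all(run_testcase(c) for c in conditions):
        return "FAIL"                       # precondition failed; Assertion skipped
    return "PASS" if run_testcase(assertion) else "FAIL"

# Stubbed TestCase outcomes (hypothetical ids):
outcomes = {"tc-A": True, "tc-B": True, "tc-C": True, "tc-D": True, "tc-D-bad": False}
run = outcomes.get

print(run_requirement(["tc-A", "tc-B", "tc-C"], "tc-D", run))      # PASS
print(run_requirement(["tc-A", "tc-B", "tc-C"], "tc-D-bad", run))  # FAIL
```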

*MIKE: Also, one advantage of using the ebXMLTestRequirements.xml file as input to the high-level "supervisor" driver
 would be the "passing on" of the actual TestRequirement metadata to the MT and AT
driver, such as TestRequirement name, specificationReference, ID, functionalType, messageLayer, etc., so that the 
trace ( and the subsequent conformance report ) is a full, informative report that points an implementer to exactly where
in the spec this test originated, where it failed, in what message layer it failed, what TestStep failed, etc.

*MIKE: What this means is, the ebXMLTestRequirements.xml file can be used as a high-level driver file used to
determine test sequence, as well as provide test metadata to be used later in the test trace, and ultimately in the
conformance/interoperability test report.
The ebXMLTestSuite.xml file contains all of the TestCase and TestStep xml, not necessarily in any particular order, since there 
is a 1-to-1 mapping ( via ID ) of TestCase to a Condition or Assertion in the TestRequirements file.  Each TestCase is loaded as 
it is called based upon ID.  Its TestSteps are run, and a trace is generated.  The MT or AT driver then evaluates that trace for 
a PASS/FAIL result, and determines whether to proceed to the next TestStep, proceed to the next TestRequirement ( and its
matching TestCase ) or stop execution based on a system error or a Condition/Assertion PASS/FAIL.

<Jacques2> so the consumer of ebXMLTestRequirements.xml would be this "Test supervisor".
(How do we specify the level/profiles of conformance requested?
e.g. "core conformance", "security & reliability", etc.)
Then,  AT and MT drivers are given their "test plan" in form of IDs of TestCases they should execute,
right? From these IDs, they follow the TestCase details (read from the global ebXMLTestSuite.xml file)
and pick the steps that are relevant to them (either AT steps, or MT steps).
Is that how you see it?
</Jacques2> 

**MIKE: Right now, we have three separate test requirements files for level 1, level 2 and level 3 ebXML MS conformance testing. The
"test supervisor" would use one of these, depending on which level of testing was being done.
If we wished to "profile", we could add additional metadata into each of these files ( e.g. a "profileName" attribute for each
TestRequirement ).  The "test supervisor" would parse the requirements file, and filter which requirements it would run
based upon profile.  We could additionally filter on any metadata in the requirements file ( functional area... such as "packaging", 
"security", "quality of service", etc. )

**MIKE: One interesting possibility would also be providing a "CPPA Matrix" of TRUE/FALSE attributes for a TestRequirement that could be used to
filter which tests to run based upon a particular CPPA profile.

(b)- any message received (e.g. by MT-driver) is appended to a Trace file, 
in the format described by the TestTrace schema, in the order of reception.
The "requirementReferenceID" attribute is set by using the test case # (e.g. 1.1.6)
that is reported in some field of the message. (Same for the test case step?)  

MIKE: Based on your previous observation of possible ambiguities, I would also include the "mainRequirementReferenceId" 
(always unique ) to further disambiguate "re-used" TestCases and TestSteps


(c)- the "ParseTrace" command (e.g. as described in TestStep S2 of 1.1.6) 
will be passed to the MT-driver, at a time that may not match exactly the
time the test is executed (message sent).
The command has enough data to discriminate the right trace subset (the Test # e.g. 1.1.6, 
and possibly the expected MessageID).

MIKE: Yes.  As long as there is a unique context identifier in the form of mainTestRequirementId, 
combined with requirementReferenceId ( however we get this information... most likely through
message passing ), it does not matter where in the trace this message appears.

<Jacques> agree.


(d)- the MT-driver maintains a queue of "ParseTrace" orders, but does not trust the next
message received to be for the next ParseTrace order - depends on when the associated 
message is received,
due to network latency, "noise" messages, overlapping tests, etc... 
So the MT-driver is periodically watching the
trace file, and tries to identify messages in it that match its next parseTrace order. 
If a parseTrace order can be executed on an instance of the trace, it executes the 
related test condition.
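The queue-and-scan behavior in (d) can be sketched as follows. The order and trace-item structures are illustrative assumptions; the point is only that orders execute whenever their message shows up, independent of arrival order.

```python
# The MT-driver keeps a queue of pending ParseTrace orders; on each periodic
# scan of the trace, any order whose expected message has arrived is executed,
# the rest stay queued for the next scan.
from collections import deque

def scan(pending, trace_items):
    """Run every queued ParseTrace order whose target message is now in the trace."""
    results, still_waiting = {}, deque()
    seen = {t["MessageID"]: t for t in trace_items}
    while pending:
        order = pending.popleft()
        item = seen.get(order["expect_mid"])
        if item is None:
            still_waiting.append(order)   # not arrived yet; retry on next scan
        else:
            results[order["step"]] = order["check"](item)  # evaluate test condition
    pending.extend(still_waiting)
    return results

pending = deque([
    {"step": "s2", "expect_mid": "123", "check": lambda t: t["Action"] == "Ping"},
    {"step": "s4", "expect_mid": "456", "check": lambda t: t["Action"] == "Pong"},
])
# First scan: only the second message has arrived (out of order).
first = scan(pending, [{"MessageID": "456", "Action": "Pong"}])
# Later scan: the first message shows up; its queued order now executes.
second = scan(pending, [{"MessageID": "123", "Action": "Ping"}])
print(first, second)
```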

MIKE: This sounds like a good way to solve the "latency" problem that can be introduced 
in so many ways.

The advantage of operating this way is that the trace can be parsed/checked
at any time, in a de-coupled way from the actual execution of the tests.

MIKE: Yes this can allow a de-coupling of the execution of the tests from the parsing
of the trace.  However, we need to address the driving of the test suite from the
ebXMLTestRequirements.xml document.  The purpose of using the requirements document as the driver
is to allow us to make smart evaluations of what may be some rather complex Conditions
and Assertions making up a testing requirement.. and to point to where in the boolean evaluation
of these Assertions that things may go wrong. Did they occur in a pre-condition?  Was that
pre-condition a requirement? or optional?   Failure of a required Condition or Assertion ultimately
means failure of the TestRequirement. Following the logic of the SemanticRequirement Clause allows us
to logically evaluate TestCases in a meaningful way.

<Jacques> Yes. But if I recall, some Condition / Assertion can be tested 
in advance - e.g. based on CPA -, some only when the test step is in progress,
e.g. a Condition is about the presence of some header element in the trace. So we
may not always be able to take advantage of this to avoid useless tests.

*MIKE: Yes, some of these Conditions will be determined ahead of time, others at 
runtime.  Please see my earlier *MIKE: comment describing how I now see the TestRequirements document
used to sequence testing and provide test documentation. Also I see Conditions and Assertions expressed in 
a much simpler way that would make parsing of the trace and test reporting much simpler.

MIKE: So while we have all of the ParseTrace commands queued up for boolean evaluation of a particular SemanticRequirement,
we need the Conditional Clause ( if present ) to create a logical grouping and evaluation of ParseTrace commands 
into a meaningful test evaluation.
This may require a "re-parse" or "re-traversal" of the TestRequirements DOM in order to meaningfully evaluate
the queued ParseTrace commands.

*MIKE: I have re-thought this, and no longer see the Condition <Clause> structure as complex, and believe that it can be
expressed in a much simpler fashion that would not require "re-parsing" of the TestRequirements document.






