
Subject: [ebxml-iic] RE: sample 3 test cases

At 11:58 AM 10/2/2002 -0700, Jacques Durand wrote:
Mike, in-line.
-----Original Message-----
From: Michael Kass [mailto:michael.kass@nist.gov]
Sent: Tuesday, October 01, 2002 8:16 PM
To: Jacques Durand
Cc: ebxml-iic@lists.oasis-open.org
Subject: RE: sample 3 test cases

At 04:38 PM 10/1/2002 -0700, Jacques Durand wrote:
in-line. [Jacques2]
   This "timeout period" is something that we need to integrate into our
Test Driver. I was wondering whether the CPA as defined would provide enough
information to the Test Driver (assuming that it parses some "base" CPA,
or "mini-CPA", for configuration) to determine what that "timeout period" should be before
checking the message queue (or some persistent storage structure).
   Will the "base CPA" contain all of the required information for the Test Driver
to compute its own "timeout period" before checking for received messages, or
will we have to assign some arbitrary value?
[Jacques Durand] I think it is better to have that as a test-case-specific configuration parameter...
this is really a test driver control issue, only meaningful to the test driver, and outside the scope of the CPA.
Maybe in the "operating party" column, we could specify some config parameters for the
operator of this test step, in addition to the operator name (here "TestDriver"), e.g. "Timeout=60" (in seconds).
The default value could be given at the beginning of the test suite, in configuration parameters, e.g. "DefaultTimeout=120".
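The suite-default / step-override scheme sketched here is easy to picture in Test Driver code. The following is a minimal, purely illustrative sketch (the parameter names "Timeout" and "DefaultTimeout" are the ones proposed above; the function name is hypothetical):

```python
# Illustrative sketch: a step-level "Timeout" parameter overrides the
# suite-level "DefaultTimeout" (names as proposed in the discussion).

SUITE_CONFIG = {"DefaultTimeout": 120}  # seconds, given at start of test suite


def resolve_timeout(step_params, suite_config=SUITE_CONFIG):
    """Return the timeout (in seconds) the Test Driver uses for a step."""
    return step_params.get("Timeout", suite_config["DefaultTimeout"])


print(resolve_timeout({"Timeout": 60}))  # step declares its own timeout -> 60
print(resolve_timeout({}))               # no override, suite default -> 120
```

The same lookup chain extends naturally if a Test Case level is ever added between step and suite.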

[MIKE2] - It looks like there are many possible places to define test-case-specific parameters. They could be defined
as attributes in the <TestCase> column (or element, in the implementation), the <TestStep> column/element, or the <PutMessage> or
<GetMessage> operations/elements. Where to put them would depend upon the granularity at which you wish to define
such parameters (a parameter for the whole test case? for a single test step? for a single operation?). I would expect
Test Driver configuration to be fairly static across test cases, and for that reason I would suggest making configuration parameters
attributes of the <TestCase> column/element.

[Jacques2] I would rule out using the "test step" column, as it is just a logical id for steps. Either the "TestCase" column or the "[Operating] party" column would work; the latter would treat this info as a config parameter for the operator (test driver). Granularity: it would likely need to be at the step level; e.g., your GetMessage step in Case #74 will need to wait until its timeout is over in order to count all duplicate messages, so this timeout limit will be set based on #retries and retryInterval. But that only makes sense for this step, not for others in the same Case.
 We also need to be clear about the meaning of "reaching" the timeout: for a duplicate-counting step like the one in
#74, this is a normal termination: no error (the timeout IS the step-completion signal); but for other steps, if completion is not reached at timeout, that's an error. We may need to specify when the timeout is actually used to terminate the step, e.g. timeout_isfailure='false' (true is the default).
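The two timeout semantics described here can be captured with the proposed timeout_isfailure flag. A hedged sketch of the Test Driver's decision (names and statuses illustrative, following the discussion; only the flag name comes from the text above):

```python
# Sketch of the two timeout meanings: for an ordinary step, reaching the
# timeout without completing is an error; for a duplicate-counting step
# (e.g. Case #74), the timeout IS the normal completion signal.
# timeout_isfailure defaults to True, as suggested above.

def step_result_at_timeout(timeout_isfailure=True):
    """Status of a step whose timeout expired before explicit completion."""
    return "failed" if timeout_isfailure else "completed"


print(step_result_at_timeout())       # ordinary step: timeout is an error
print(step_result_at_timeout(False))  # duplicate-counting step: normal end
```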

[MIKE3] - With the idea of assigning configuration parameters at the "step" level, we can do that in both the abstract and real test <TestStep>s. <TestStep> (a column name in the abstract test descriptions) is a container element in XML. We could then add additional parameters like <TestStep id='1' timeout='60'/> or <TestStep id='2' timeout='120'/>, represented tabularly in the "Test Step" column of the abstract test suite as id='1' timeout='120'. We could have a default value of, say, '10' for all step timeouts, which would be overridden by an explicit timeout declaration in the <TestStep>.

[Jacques3] Sounds fine. I believe we need to set the defaults at the test suite level (little value in doing this at the Test Case level).
So could we have a special container that identifies a Test Suite as:
(1) a set of initial and default config parameters (e.g. a default timeout for all "Put" steps, maybe a different one for "Get" steps)
(2) a set of predefined CPAs used by the Test Cases (these could be described as a plain list of name-value pairs using your CPA attribute names, abstracting CPA mark-up for now)
(3) message material (header templates, payloads...)
(4) a list of references to the actual Test Cases that are part of this test suite (something close to Matt's "master" file intended to define conformance levels? Or this master file could actually be the ultimate Test Suite container.)
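The four-part container proposed above could be modeled, purely illustratively, as a simple structure. All field names below are hypothetical; the four top-level parts mirror points (1)-(4):

```python
# Hypothetical sketch of the proposed Test Suite container (field names
# invented for illustration; the four parts follow the list above).
test_suite = {
    # (1) initial and default config parameters
    "config": {"DefaultPutTimeout": 120, "DefaultGetTimeout": 60},
    # (2) predefined CPAs as name-value pairs, keyed by CPAId
    "cpas": {
        "CPA_1": {"duplicateElimination": "never"},
        "CPA_2": {"duplicateElimination": "always"},
    },
    # (3) message material
    "message_material": {"header_templates": [], "payloads": []},
    # (4) references to the Test Cases in this suite
    "test_cases": ["TestCase_74", "TestCase_75"],
}

print(sorted(test_suite))  # the four parts of the container
```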

[MIKE4] - All of this will exist at run time, also including the Test Requirements XML document, which provides metadata for each Test Case
(specification reference, mandatory/optional requirement descriptor, text description of the requirement, etc.).

 1)   Regarding your other comments on the 3 abstract tests: I agree that <Condition>
should be changed to <Precondition>. It has a more direct meaning in a testing sense.

2)   I will fix the typos in the "quotes".

3)   Regarding either:
  (a) using predefined CPAs for the "configurator" action, or
  (b) using CPA "templates" and manipulating them like any other message content...

I would favor (b), simply for expediency. Or at least, I think that we should leave that possibility open in our implementation
design. That way, if the number of CPA templates becomes cumbersome, we could treat the CPA
as just another payload, and manipulate that XML payload template content the same way we would manipulate
an XML payload template for, say, ebXML Registry testing.

[Jacques] I would also try to avoid using the Configurator action. (The Configurator action assumes the MSH is
capable of dynamically handling new CPAs, which may complicate things API-wise, may not be true of all MSHs, etc.)
But we may still assume that all the CPAs we need are reasonable in number and pre-installed / accessible
(that would be the simplest solution).
Only if there are too many of these (or too many combinations of
CPA attributes to consider in our test cases) will we have to specify how to generate
non-preinstalled CPAs from a template - and that could just be an XPath assignment in a sub-operation, like for header manipulations. In that case only, we would need the Configurator, to deploy them on the MSH local to this Test Service.
But I'd say we might not need to do that if we don't need more than 20 or 30 CPAs, which is still a reasonable
number of predefined CPAs...
[MIKE2] - So what we are talking about, as far as configuring an MSH using CPAs, is a "startup" mode, in which the
candidate MSH starts up, reads a CPA (or CPA-like config file) and configures itself. We are assuming no "dynamic" ability of an MSH to
alter its configuration once started, which means that all tests that are run must assume that particular CPA configuration.
[Jacques2] Well, in the simplest solution mentioned above, an MSH should be able to access/use
several CPAs at any time, based on the CPAId in messages.
All these CPAs would be set (and possibly known to the MSH) prior to running the test suite - this may need a pre-configuration phase, but that really depends on the MSH implementation (alternatively, CPAs could be preinstalled in a registry, and the MSH only needs to access them on demand, at run time, and cache them).
At the test suite level, we only need to specify a list of these CPAs (or CPAIds) needed by our test suite. Each test case will typically use no more than one CPA, but if we need to switch in the middle, that just means sending a message (m1, CPAId_1) to the Initiator action, such that this action will generate a message (m2, CPAId_2).
We only need to change the CPAId in the message for the MSH to know what to do.
A CPA would never change, once "installed" and referenced by a CPAId.
So that is the least constraining way for the MSH... in case we really need to dynamically craft and add new CPAs, the Configurator service action would be needed. But that makes more assumptions about the ability of the MSH. I don't believe we need that for now...
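The "preinstalled CPAs, resolved by CPAId" model described here amounts to a fixed lookup: a CPA is never edited at run time, only referenced. A minimal sketch under that assumption (CPA ids and attribute names illustrative):

```python
# Sketch of the preinstalled-CPA model: every CPA is installed before the
# test suite runs; a message's CPAId is only ever resolved against this
# fixed set, never used to build or modify a CPA dynamically.
PREINSTALLED_CPAS = {
    "CPA_1": {"duplicateElimination": "never"},
    "CPA_2": {"duplicateElimination": "always"},
}


def lookup_cpa(cpa_id, store=PREINSTALLED_CPAS):
    """Resolve a message's CPAId; an unknown id is an error, not a rebuild."""
    try:
        return store[cpa_id]
    except KeyError:
        raise ValueError(f"CPA {cpa_id!r} is not pre-installed") from None


print(lookup_cpa("CPA_1")["duplicateElimination"])  # -> never
```

Switching CPAs mid-case is then just a matter of putting a different CPAId in the next message.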

[MIKE3] - I see. Well, this will work. Basically we are saying "here is the CPAId"; we both (Test Driver and candidate MSH) know what it means, but how we each describe it to our MSH is different. The key here will be absolute understanding of what each CPAId means to both parties. Otherwise, conformance testing falls apart. That is the benefit of having everyone work from the same file syntax, whether it is a CPA or "MiniCPA". Everyone agrees. There is no ambiguity.
  Test-wise, as long as both the Test Driver and candidate MSH know that they are working from the same CPA, any discrepancies should be caught by good test writing... i.e., if my Message Expression is <Precondition> $CPA_DuplicateElimination=='true' </Precondition> but the current CPA has DuplicateElimination set to 'false', then the <Precondition> will throw a "FatalPrecondition" exception, which will trigger an end to the <TestStep>. The test cannot proceed because of an incompatibility between the CPA and the test <Precondition>. This will work.
[Jacques3] These $CPA_ parameters, like $CPA_DuplicateElimination - do they represent message elements or CPA elements?
I believe they are message elements, right? In that case we are in sync, because I think there is no need to test the value of CPA elements if we assume the CPAs are well known and identified by the CPAId reference.

[MIKE4] - But what if the test writer chose the wrong CPA_Id? Shouldn't we check that the proper <Precondition> has been met before we assume that we can test the
<ConformanceCondition>? We could do that fairly easily by dynamically changing the $CPA_... parameters on the Test Driver side each time a new CPA_Id is introduced.

$CPA_DuplicateElimination would be a parameter stored in the Test Driver (when CPA configuration takes place) and available to the XPath processor, to determine whether a <Precondition> can be met prior to evaluating the <ConformanceCondition> for a <TestStep>. For example, the spec says:

The DuplicateElimination element MUST NOT be present in a message if the CPA has duplicateElimination set to never (see section 6.4.1 and section 6.6 for more details).

So, for a test writer to properly create the abstract and actual XML representation, the test:

* Assuming we sent a "reflector" message to the candidate party *
1) First finds (correlates) the correct message contained in the persistent store
2) Tests the <Precondition> that the CPA has duplicateElimination set to 'never'
3) Tests the <ConformanceCondition> that there are no eb:DuplicateElimination elements present in the returned message

We should not assume that the <Precondition> has been met; we should test it. If we are going to dynamically change
CPAs, we should verify whatever CPA preconditions must exist prior to performing our conformance test. We can do that through
CPA parameter evaluation.

<TestStep id='2' party="TestDriver">
  <GetMessage description="Correlate returned message">
    /SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:CPAId=='$CPA_Id' and
    /SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:ConversationId=='$ConversationId' and
    /SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:MessageData/RefToMessageId=='$MessageId'
  </GetMessage>
  <Precondition description="Test that CPA has DuplicateElimination set to 'never'">
    <ErrorMessage>CPA does not have DuplicateElimination set to 'never'</ErrorMessage>
    $CPA_DuplicateElimination == 'never'
  </Precondition>
  <ConformanceCondition description="Check that DuplicateElimination element is not present in message">
    <ErrorMessage>DuplicateElimination element found in returned message</ErrorMessage>
    //eb:DuplicateElimination[count() == 0]
  </ConformanceCondition>
</TestStep>

So we use the test <Precondition> to verify that we are indeed using a CPA that is set properly, prior to doing the conformance test. If for some reason
a test writer chose CPA_Id=5 instead of CPA_Id=6, and CPA_Id 5 does not have DuplicateElimination set to 'never', the test will catch that,
throw a fatalPrecondition exception, terminate the <TestStep>, and terminate the <TestCase> with a status of "untested".
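The precondition-then-conformance flow described in this exchange can be sketched as Test Driver pseudologic. This is a hedged illustration, not the actual driver: the function names are invented, while the FatalPrecondition exception, the 'never' check, and the "untested" status come from the discussion above:

```python
# Sketch of the <Precondition> -> <ConformanceCondition> evaluation order:
# a failed precondition aborts the step and marks the case "untested",
# so a wrongly chosen CPA never produces a bogus pass/fail verdict.

class FatalPrecondition(Exception):
    """Raised when a test <Precondition> is not met."""


def run_step(cpa_params, message_has_dup_elim):
    # Precondition: the referenced CPA must have duplicateElimination='never'
    if cpa_params.get("duplicateElimination") != "never":
        raise FatalPrecondition(
            "CPA does not have DuplicateElimination set to 'never'")
    # ConformanceCondition: no eb:DuplicateElimination element in the message
    return "failed" if message_has_dup_elim else "passed"


def run_case(cpa_params, message_has_dup_elim):
    try:
        return run_step(cpa_params, message_has_dup_elim)
    except FatalPrecondition:
        return "untested"  # wrong CPA chosen: step and case terminate


print(run_case({"duplicateElimination": "never"}, False))   # -> passed
print(run_case({"duplicateElimination": "always"}, False))  # -> untested
```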

Following the <Precondition>...<ConformanceCondition> pattern that we describe in our abstract test cases makes sense, and also improves
the readability of test cases. Hiding all of the CPA configuration behind a CPA_Id and assuming that we are using the right one is something we should
avoid when describing tests, I believe.



