Subject: [ebxml-iic] 9/22/2002: IIC Test Case, Implementation Issues and Deployment Review


1. Comments on your test case suggestions, Jacques.
2. A few brief suggestions on the implementation issues, Michael.
3. A few potential variables for the deployment template, Pete.
 
Thanks.
Monica
 

-----Original Message-----
From: Jacques Durand [mailto:JDurand@fsw.fujitsu.com]
Sent: Saturday, September 21, 2002 4:02 PM
To: 'ebxml-iic@lists.oasis-open.org'
Subject: [ebxml-iic] Confirm IIC Conf Call Monday, Sept 23rd, 10am PT



All: 

Sorry for late notice... 
As Mike said, 

HOST: NIST (Michael Kass) 
CALL DATE:         SEP-23-2002  (Monday) 
CALL TIME:         01:00 PM EASTERN TIME, 10 AM PACIFIC TIME 
DURATION:              1 hr 
USA Toll Free Number: 877-917-7133 
USA Toll Number: +1-212-287-1619 
PASSCODE: 10968 
NOTE: you may have to give Mike's name to the operator. 

Agenda: 

1. Test Case material and MS Conformance suite: 
- finalizing: remaining issues of the test material 
(test steps and their status/type/errors, test outcomes...) 
- status on: conformance profiles (Jeff) 

2. Interoperability Test suite: 
- status on: details of current test case candidates for basic interop
profile(s) 
- comparison with UCC/DGI 
- ECOM completed their first ebXML interop tests (5 vendors from Asia) 

3. Deployment templates: 
- status on: template doc from Pete. 
- input from EAN (Thomas?) 

Cheers, 

Jacques 

Proposed closure on 7 remaining issues on Test Case material:
----------------------------------------------------------


1. CPA: 

Proposal:
-------- 
Let us just use a list of name/value pairs for defining
each CPA, in our first version of MS Test Cases.
A small set of CPA instances is sufficient, each with a defined CPAId. 
We refer to these from each test case (message expressions) by the CPAId.
When we need to name a CPA attribute in a "message expression", 
we use one of these CPA attribute "names" with $ notation and "CPA_" as prefix.
E.g., "Signature" will be referred to as $CPA_Signature.

[mm1: Differentiate deployment, test and 'other' profiles to alleviate confusion.]

Comments:
---------
We can just provide a list of CPA parameters, as an abstract list of
name/value pairs (we started doing that in the current draft; 
see the MS Conformance 5.2.2 "CPA data" section).
Any more specific format (e.g., an XML doc) is not essential here.
(It can be defined later, mostly for automation.)

[mm1: This could simplify things.]
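
To make the proposal concrete, here is a minimal Python sketch of the
name/value convention; the CPAId, attribute names, and values below are
made up for illustration, not taken from the actual test material.

    # A hypothetical CPA, as a flat list of name/value pairs keyed by CPAId.
    CPAS = {
        "cpa-basic-01": {
            "Signature": "false",
            "SyncReplyMode": "none",
        },
    }

    def resolve(expression, cpa_id):
        """Substitute each $CPA_<name> token in a message expression."""
        for name, value in CPAS[cpa_id].items():
            expression = expression.replace("$CPA_" + name, value)
        return expression

    # "Signature" is referred to as $CPA_Signature:
    print(resolve("$CPA_Signature == 'false'", "cpa-basic-01"))
    # -> false == 'false'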

2. Message material: 

Proposal:
---------
Identify a small set of predefined message envelopes, as we have now
(see 5.2.5). We can name them and refer to them in the "Template"
column, as we did for header tmpl and payload tmpl.

[mm1: Differentiate test and deployment templates, their function and use to alleviate confusion.]


Comment:
--------
Jeff T. will refine this, but for now it does not hurt to "abstract" the
message envelope building process.
These pre-built message envelopes represent the outcome of 
a MIME envelope building expression, as Jeff will come up with, so they
will later be replaced by such an expression.

[mm1: Yes.]
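
As a rough Python sketch of this registry idea (the envelope name and
MIME content here are invented, not the actual 5.2.5 material):

    # A hypothetical registry of predefined message envelopes; a test
    # step's "Template" column would refer to one of these by name.
    ENVELOPES = {
        "BasicMsgEnvelope": (
            "MIME-Version: 1.0\r\n"
            'Content-Type: multipart/related; type="text/xml";'
            ' boundary="BOUNDARY"\r\n'
            "\r\n"
            "--BOUNDARY\r\n"
            "Content-Type: text/xml\r\n"
            "\r\n"
            "<SOAP:Envelope>...</SOAP:Envelope>\r\n"
            "--BOUNDARY--\r\n"
        ),
    }

    def envelope_for(template_name):
        """Resolve a "Template" column entry to its message material."""
        return ENVELOPES[template_name]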

3. Verification step:

Proposal:
---------

Not all test cases in the latest version seem to have a specific "Verification" step.
For the sake of standardization, every step should have a name:
either GetMessage, PutMessage or Verification (and maybe also "Precondition"?
see below).
[mm1: I agree as long as we have a discrete 'view' of a test case to specific requirements.  Later, when we aggregate, 
as we know will happen, the verification may not 'conclude' until after several cases (such as potentially those
related to the test service).  Comments welcome here, as I've not thought this through.]
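
For instance, a test case could then be written as an ordered list of
uniformly named steps; a minimal Python sketch (the field names are
invented for illustration):

    # Every step carries one of the standardized names.
    STEP_NAMES = {"GetMessage", "PutMessage", "Verification", "Precondition"}

    TEST_CASE_2 = [
        {"name": "PutMessage", "template": "BasicMsgEnvelope"},
        {"name": "GetMessage",
         "filter": "eb:RefToMessageId == '$MessageId'"},
        {"name": "Verification",
         "condition": "MIME-Envelope.part(1) == /SOAP:Envelope/*"},
    ]

    assert all(step["name"] in STEP_NAMES for step in TEST_CASE_2)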

4. General description of a test case: some suggestions, 
illustrated on TestCase:id:2 

Proposal: 
---------

Improve this test case description, and some others, in the same way.

Comment:
--------

- It is simpler to send to "Reflector" than to "Initiator", as Initiator 
requires fully specifying the embedded message to be re-sent, 
as well as the "carrier" message (so in general "Initiator" should be avoided 
when not needed).

[mm1: Suggest we support Initiator in the long term for greater rigor of testing.  Comments welcome.  In 
addition, on the suggestion below, the expressions should be as standardized as possible without being 
constrictive, and capable of supporting additional complexity as we progress.  This is almost starting to look
like naming and design rules, and a vocabulary for the test framework.]

- In TestCase:id:2, we would send a message with 1 or more payloads, 
and check that when "reflected", this message is well formed according to the test.
- PutMessage step: same as current, but with "Reflector" instead. 
- GetMessage step: would only express a filter to correlate with the
previous message:
(/SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:ConversationId=='$ConversationId') and 
(/SOAP:Envelope/SOAP:Header/eb:MessageHeader/eb:MessageData/eb:RefToMessageId=='$MessageId')
- Verification step: would express the condition "SOAP env is in root part". 
Here, we can try to use a formal notation, like you did in testcase:id:5,
or like: MIME-Envelope.part(1) == /SOAP:Envelope/*. 
(The MIME envelope is an object that we can manipulate for conditions, in the same
way as we can manipulate it for construction.) 
For example, in testcase:id:5 you have the condition:
($MimeMessageStart !== '') We should standardize on the manipulation of MIME envelopes/parts (should we use an XPath-like notation for
MIME elements? Otherwise, would we need too many identifiers such as "MimeMessageStart"?)
Let us see how that works in other test cases.
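
To see how a testbed might evaluate these two steps, here is a minimal
Python sketch; the namespace URIs and the MIME layout are assumptions
for illustration, not the actual test material.

    import email
    import xml.etree.ElementTree as ET

    NS = {
        "SOAP": "http://schemas.xmlsoap.org/soap/envelope/",  # assumed
        "eb": "urn:example:ebxml-ms",                         # placeholder
    }

    def correlates(envelope_xml, conv_id, msg_id):
        """GetMessage filter: ConversationId and RefToMessageId match."""
        header = ET.fromstring(envelope_xml).find(
            "SOAP:Header/eb:MessageHeader", NS)
        if header is None:
            return False
        return (header.findtext("eb:ConversationId", "", NS) == conv_id
                and header.findtext(
                    "eb:MessageData/eb:RefToMessageId", "", NS) == msg_id)

    def soap_env_in_root_part(raw_mime):
        """Verification: the root MIME part carries the SOAP Envelope."""
        msg = email.message_from_string(raw_mime)
        if not msg.is_multipart():
            return False
        root = msg.get_payload(0)  # MIME-Envelope.part(1) in the notation
        body = root.get_payload(decode=True) or b""
        return b"Envelope" in body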



5. Expressing Verification Conditions: 

Proposal:
--------

In case it is difficult to "formally" express some conditions 
in "test message expressions": for test cases where that is really tricky,
let us use a precise English text description for now.
We'll refine this in a later pass.

[mm1: See comment above in (4) - even with plain English we should strive for some type of consistency and 'rules.']

6. About some steps that just check if some optional condition
is satisfied (like in testcase:id:3 and testcase:id:10):

Proposal:
---------

We need to give a name to these steps. 
The closest name I can imagine for what the step is doing could be "Precondition".

[mm1: This seems more like a 'type' of step.  Should we also consider a unique name so they can be reused?  Like
a library of reusable 'sub-steps'?]

Comment:
--------

So we would have 4 kinds of steps:
- GetMessage
- PutMessage
- Precondition
- Verification
Then I propose below (section 7) that errors related to such steps
are really about pre-condition failure (no need for "FatalOption"). 
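
In code terms, the four step kinds would become a closed set that step
names are validated against; a tiny Python sketch:

    from enum import Enum

    class StepKind(Enum):
        GET_MESSAGE = "GetMessage"
        PUT_MESSAGE = "PutMessage"
        PRECONDITION = "Precondition"
        VERIFICATION = "Verification"

    def parse_step_kind(name):
        """Reject any step name outside the standardized set."""
        return StepKind(name)  # raises ValueError for unknown names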


7. About error types: (minor point) We might simplify further,
by not distinguishing "FatalOption" from others: 

Proposal:
---------

We do not need FatalOption: just replace it by preCondition.nonApplicable.
Also, we need to refine the preCondition failure for other steps (specify
"system" vs. "nonApplicable").

[mm1: Agree - definition should be clear, otherwise the test could be perceived as ambiguous (verification of results).]

Comment:
--------

If we agree that the outcome of a test is either:
(a) failure because of a testbed technical problem (system),
(b) failure because the logical precondition of a test could not 
be realized for whatever reason (other than system failure), or
(c) failure because the test condition (assertion) was not verified
under normal conditions,
then it seems we have all we need: if the step is about an "optional"
condition, then it should really be a precondition failure.
Let us take two examples: 
- In testcase:id:3, the step that generates the "FatalOption" error is
really about a precondition to the test case: if it fails, that means the 
test is non-applicable. 
- In testcase:id:10, there are two steps that check the presence of
the attributes we want to compare. If these attributes are not there
(steps 3 and/or 4 fail), then it is a pre-condition failure: the test is
not applicable. (We should consolidate these two steps.) 
- So it seems that we could just use FatalPrecondition.nonApplicable
instead of all these FatalOption errors.

[mm1: The only potential item to discuss is if an 'optional' step is further restricted by a profile
for a specific industry.  In that case, the 'failure' would not be a FatalPrecondition.nonApplicable.  However,
this is more than just a discussion about the assumptions here.] 

- For the "PutMessage" steps: a failure could mean either (1) sending was
unsuccessful, (2) message material needed to craft the message was not available.
In such cases, I would say this is a FatalPrecondition.system failure.
- For the GetMessage steps: a failure to receive in time the right message, 
could be again a FatalPrecondition.system. (no matter who is the culprit:
the testbed or the MSH). Of course, in case the "filter" associated
with GetMessage goes beyond correlating the right message (i.e. RefToMesgId +
conversationID) and has some additional condition on some option
that could fail if the right message does not have such option, then
that would be a  FatalPrecondition.nonApplicable. But if
we want really to distinguish, we would then create two steps, like
you did in testcase:id:3: 
- the GetMessage would fail with FatalPrecondition.system
- the "precondition" step would fail with FatalPrecondition.nonApplicable



Attachment: ebXML_MSTestCaseImplementationIssues_mm1_092202.doc
Description: ebXML_MSTestCaseImplementationIssues_mm1_092202.doc

Attachment: ebMS_Deplmt_Guide_Template_20020822_jd1_mm1_Comments_092202.doc
Description: ebMS_Deplmt_Guide_Template_20020822_jd1_mm1_Comments_092202.doc


