Subject: [ebxml-iic-msg] My comments on DGI test suite

Team:

Here are my first comments on the DGI test suite, and how it could be adapted
to ours. Prakash and Steve may want to iterate on these. Note that I have some
questions for DGI: feel free to add yours, and to contact DGI (David Fisher?).


NOTE: at this time I am not publishing the DGI tests more broadly, but will
send them to any IIC member (only) who asks, as DGI has not explicitly
mentioned their availability to a broader audience in OASIS (at least for now).

Regards,

Jacques



Comments / issues on possible integration of the DGI interop tests into the IIC test suite:
--------------------------------------------------------------------------------------------

General Comments:

C1: One recurrent concern in my comments below is to figure out what kind of
test architecture is required for processing the DGI tests (in addition to ours).
This is in line with our approach that the test material we design
(XML docs, test case process, etc.) will ultimately be processed and
serve as support for automated testing procedures.
Although we may not implement such a test architecture ourselves, we need
to have [a feasible] design in mind when designing our test material,
and therefore when adapting/integrating the DGI tests.

C2: This being said, I believe our test suites should also be written so that
users who do not yet have an automated test environment can still figure out
how to run them "manually".
So there should be enough instruction for this, as in the DGI manual.
By running the tests manually, I mean that even though we describe
our test cases with XML docs (input messages, MSH config, output trace,
test conditions), users can still make sense of these docs when running
the tests their own way, and can even do without such docs (e.g. without
a "formal" output trace).
So for example, even if we specify the output trace XML format
for test case "B" in DGI, and specify in an XML format the condition that
should be satisfied on this trace for the test to be successful, we should
also describe an alternative way to verify the test condition in case such
a trace has not been generated (e.g., in Test B, a clear statement that the
"received file in message 123 should be similar to the reference file XYZ").

C3: General observations after a first pass on the DGI tests:
- A few tests seem to belong to a Conformance test suite rather than to an
Interoperability test suite. We'll need to decide on this, as even in such
tests there may be an "interoperability aspect".
- If we remove the Conformance-looking tests from the Interop suite, that means
we assume they have been run before... However, it could be that we
still want to run some [parts] of these conformance tests in the actual business
context where the interop tests are run, i.e. from a remote driver.
Indeed, results may differ in such a context, due precisely to interop
issues. If we do so, these test cases should be quite similar to those in the
Conformance test suite.
(Should we keep these tests separate from the "pure" interop tests, i.e. maybe
in a separate test suite? In fact, they likely require a different architecture
set-up, e.g. an MT-driver as in conformance, to tamper with message data,
e.g. to cause errors to be sent.)


C4: It seems that most DGI tests will only require control at the application
level (i.e. treating the MSH as a black box, and only capturing the trace at
app level using an AT-driver). That is at least my wish...

	test case input ---> 	AT-driver #1		AT-driver #2 ---> output trace
						|			^
						V			|
					MSH #1	--------->	MSH #2

If we can design our tests to fit this architecture,
that would simplify their automation (e.g. we would not need to require any
specific logging function from MSH implementers, or use special features or
devices to sniff on the wire, or to better control the generation of messages,
such as "bad" messages).
But some tests don't fit this pattern, although it seems these are usually the
ones which might rather belong to a Conformance suite (Tests J, K).
They require some message tampering that may not always be done at app level
(e.g. for generating errors, or for causing retries).
So we may have to assume an architecture that uses a lower-level driver: can
this be the "MT-driver" (wire-level) that we assume in conformance tests?

C5: We need to decide very soon whether we produce:
(a) a single interoperability test suite, or
(b) a set of test suites, each addressing a particular "interoperability
profile" (see my previous comments).
I will assume (b) in the following, but that remains to be decided...

C6: While writing these comments, I noted some questions we need to ask DGI.
I introduce them with the "QDGI" symbol.


Detailed comments:

Test A: (Certificate exchange)
- More generally, we must assume some way to
communicate "configuration data" to both MSHs (which could include certificate
data if needed). It could be sent as an (XML) doc over email... At the very
least, it will contain URLs, PartyIds, CPAIds, and ultimately some CPA data.
I propose we define an XML mark-up for this (it does not have to be a CPA!)
but it must contain the relevant subset of CPA data. Note that we may also
include here some "test material" data, such as payloads of messages to be
used by the test drivers on each side, either to generate messages
or as references to compare received messages against.
The important thing is that it is processable.
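
To give an idea, here is a rough sketch of what such a configuration document
could look like (purely illustrative: none of these element names are fixed,
and it is deliberately not a CPA):

    <!-- Hypothetical mark-up: just the relevant subset of CPA data,
         plus shared test material -->
    <TestConfiguration suite="interop-basic">
      <Party id="PartyA">
        <PartyId>urn:duns:123456789</PartyId>
        <Endpoint>http://partyA.example.com/msh</Endpoint>
        <Certificate ref="certs/partyA.cer"/>
      </Party>
      <Party id="PartyB">
        <PartyId>urn:duns:987654321</PartyId>
        <Endpoint>http://partyB.example.com/msh</Endpoint>
        <Certificate ref="certs/partyB.cer"/>
      </Party>
      <CPAId>interop-test-cpa-01</CPAId>
      <TestMaterial>
        <!-- payloads used by the test drivers on each side, either to
             generate messages or as references for comparison -->
        <Payload id="XYZ" href="payloads/XYZ.xml"/>
        <Payload id="LargeFile" href="payloads/large.bin"/>
      </TestMaterial>
    </TestConfiguration>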

QDGI: how are the ref files (especially "large" files) communicated to both parties?

Test B: (Simple Data Transfer)
- There are two tests here: HTTP and HTTP/S. If we follow an "interoperability
profiles" approach, these tests will likely belong to two different profiles
(and therefore two different test suites), as an interop profile will (likely)
concern only one transport.
- In these tests, the success criteria are: (c1) files received and unchanged,
(c2) HTTP success code in the 200 range.
- For checking (c1), clearly we can monitor the output at the application level,
i.e. the test architecture is made of two "application test drivers"
(AT-drivers), one on each side. AT-driver #1 will feed MSH #1 with the message
data to send, and AT-driver #2 will get the received message data from MSH #2
and produce a trace (at application level, i.e. in a format we decide; see the
trace mark-up proposed by the Conformance subteam).
A validation component can then process the trace, check it against the
reference messages, and decide if (c1) is OK. For (c2), that probably requires
looking into an HTTP log (?).
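
For illustration only, an application-level trace entry produced by AT-driver #2
could look like this (again hypothetical mark-up, not the actual Conformance
subteam format):

    <!-- Hypothetical app-level trace entry for Test B -->
    <TraceEntry testCase="B" party="PartyB">
      <ReceivedMessage messageId="123" conversationId="conv-01">
        <!-- the received payload is stored locally so a validation
             component can compare it to the reference file -->
        <PayloadReceived href="received/123-payload.xml"/>
      </ReceivedMessage>
      <Timestamp>2002-06-14T10:32:00Z</Timestamp>
    </TraceEntry>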

QDGI: how is the HTTP code check done?

We may just add this to our test case description, and tag it with an indicator
such as "notify operator"... the processing of which will mean, at best,
generating an email to the test operator asking them to manually check the
HTTP log.
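
For example (hypothetical mark-up again), the (c2) check could be tagged
like this:

    <!-- Hypothetical: a condition flagged for manual verification -->
    <TestCondition id="B-c2" verification="notify-operator">
      <ManualInstruction>
        Check in the HTTP server log that the response code for message 123
        was in the 200 range.
      </ManualInstruction>
    </TestCondition>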

Test C: (large file transfer)
Same comments as for Test B. We may assume the large file has been installed on
both sides, so that a trace containing a received message can be compared to a
local reference file.
Our test case definition should provide enough detail to allow a test validation component
to perform this comparison.

Test D: (data security)
Let us look at test "D1". The success criteria are:
(c1) the received message data (payload) is the same as expected (reference data),
(c2) the signature is validated using the Sender's certificate.
How to check (c2)? We need to define what the expected behavior is if the
signature is invalid. In case we need to test "bad" signatures, I believe we do
not need to tamper at the transport level: we could reconfigure the sender MSH
(through its AT-driver?) so that it uses the "wrong" certificate / key.
So our test case should somehow mention which key/certificate should be used
to generate "bad" (or mismatching) signatures.
The trace produced - in case of a "bad" signature - should, I guess, NOT contain
the received message (or should it contain an error notification?). So far
the spec assumes the generation of an error message back to the sender, or
to the ErrorURI. So the trace on each party may not show anything here,
if we assume / require the use of an ErrorURI to collect errors (using a 3rd
MSH that would produce its own trace? see comments on Test K).
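
A possible way to express this in a test case (hypothetical mark-up, assuming
the sender can be reconfigured through its AT-driver) would be:

    <!-- Hypothetical test step for a "bad" signature case in Test D -->
    <TestStep testCase="D1" step="2">
      <ConfigureSender>
        <!-- deliberately not the certificate exchanged in Test A -->
        <SigningCertificate ref="certs/mismatching.cer"/>
      </ConfigureSender>
      <SendMessage payloadRef="XYZ"/>
      <TestCondition>
        <!-- expected: the receiver's app-level trace shows no delivery
             of this payload (the error goes back to sender or ErrorURI) -->
        <NoDelivery payloadRef="XYZ"/>
      </TestCondition>
    </TestStep>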

QDGI: does DGI also test MSH reaction to "bad" (invalid) signatures?
If yes, what is the expected behavior from the MSH, as produced by current
implementations?

Test E: (Acks)
This seems to me the kind of test that may rather belong
to a Conformance test suite (at least for E1, E3).
If the objective is just to check the ability of a remote MSH to generate
well-formed Acks (with RefToMessageId properly set, and other elements with
proper values, like PartyIds, etc.),
then I would say this test has its place in a conformance test suite, not here.
However, there is clearly an interoperability aspect for Acks: we need to verify
that the sender MSH is able to understand / process Acks sent by the receiver.
So I suggest that:
- we leave the analysis of the ability of an MSH to generate a valid Ack
(content and well-formedness) to the conformance test suite. So we assume
here that the receiver MSH is able to generate well-formed Acks.
- can we get this interop test case to focus only on the observable behavior at
the application level? I am not sure we can, as Acks are typically not visible
at app level (unless we look into an MSH log, but again I would try to avoid
this, as it is not easy to automate and is very implementation-specific).

I see two cases where Acks impact the app level:
(a) when doing reliable messaging, if an Ack is not received / understood, this
will cause a retry, and ultimately, if no Acks can be received or properly
processed, a notification to the sender application - see spec 6.5.4. So here,
knowing that the sender MSH is able to do this notification (which again
belongs to conformance), if NO such notification is observed, that means the
sender received and processed the Acks as expected.
(b) when receiving signed Acks, for the purpose of non-repudiation of receipt
(NRR). But in case (b) it is not clear how the business app is using this... As
Acks normally don't show at application level (i.e. are not notified), the only
way for an app to get NRR is to access an MSH log... or am I missing something?
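
If we do keep an interop version of case (a), the observable condition could be
expressed at app level roughly as follows (hypothetical mark-up):

    <!-- Hypothetical: absence of a delivery-failure notification on the
         sender side, within the retry window, means the Acks were
         received and processed -->
    <TestCondition id="E-a">
      <NoNotification type="DeliveryFailure" within="PT5M"/>
    </TestCondition>
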
So, overall we need to reassess the real value of Test E in our interop suite,
and possibly modify/repackage it, or move part of it into conformance tests. 


Test F: (multiple payloads)

To be handled as a variant of Test B. Test cases need to contain explicit
references to the payload files, so that the comparison can be done
automatically by a script / test component. In case of signed payloads
(tests F3, F4), that might be part of a "secure interop profile" test suite.
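
As for Test B, the test case could list one reference per attached payload
(hypothetical mark-up), e.g.:

    <!-- Hypothetical: one match per payload so a script can compare each
         received part against its reference file -->
    <TestCondition id="F1-c1">
      <PayloadMatch contentId="payload-1" reference="payloads/doc1.xml"/>
      <PayloadMatch contentId="payload-2" reference="payloads/image1.gif"/>
    </TestCondition>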

QDGI: how do implementers set the "signature" requirement for messages?
Is it an MSH config item - for a particular PartyId - or is it controlled
at message level, when the app is sending?
More generally for our test suites, one question is how often we allow
(if at all) an MSH configuration to change during a test suite execution.

Test G: (encrypted file)

For a secure interop profile test suite. Test cases do not differ essentially
from the previous ones: the criterion is to check that the received files are
the same as expected.
However, the expectation is that the encrypted message is nested in another
message, so the processing on the receiver side is a little special... (decrypt,
then "reparse" the decrypted content / payload, as if it were a brand-new
message).

QDGI: Do they assume all the MSH implementations are able to do that?

Test H: (message services)

The Pong received should be observable at the application level (in the
AT-driver trace).

Test J: (reliable messaging)

Part of this test again belongs more to Conformance testing (duplicate
elimination, retry behavior, etc.). For example, DGI test J1 (retries, interval)
does not verify interoperability between the 2 parties at all: the sender is
only interacting with the DGI testing CGI script, which verifies that the retry
mechanism works well. This is exactly what our conformance "MT-driver" will
do: collect messages sent by the candidate and cause retries, finally
causing the candidate sender to notify its app of the failure.
So I would really put this test in a conformance suite.

So we may again assume that the details of the Reliability semantics (logic and
timing of retries, duplicates) are tested beforehand (conformance).
But on the other hand, we may also want to validate the "interoperable" aspect
of this behavior in a real business set-up, with two user MSHs involved.
We need to discuss this.
In that case, we could add a "remote" version of our reliability conformance
test to this interop test suite (or rather, a simplified version, as we mostly
want to check that the Ack and retry mechanisms interoperate fine between the
2 MSHs, not that the semantic details of Reliability are conforming - as this
is assumed to have been tested before). If we add these tests, they could be
quite similar to DGI tests J2 and J4 (possibly done in the same test case?):
(a) we drive the "receiver party" MSH with a conformance test driver on the
sender side, sending duplicate messages to check that the receiver can
eliminate them, and will only pass the first one to its app, while
acknowledging the duplicates (see the sketch below).
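
A sketch of what such a duplicate-elimination test case could look like
(hypothetical mark-up, with the conformance driver on the sender side):

    <!-- Hypothetical: the driver resends the same MessageId; the receiver
         should deliver the payload only once but Ack both copies -->
    <TestStep testCase="J-interop" step="1">
      <SendMessage messageId="dup-001" payloadRef="XYZ"
                   duplicateElimination="true"/>
      <SendMessage messageId="dup-001" payloadRef="XYZ"/>
      <TestCondition>
        <DeliveredOnce messageRef="dup-001"/>
        <AckReceived messageRef="dup-001" count="2"/>
      </TestCondition>
    </TestStep>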


Test K: (error handling)

DGI assumes an "error system" that will generate bad messages. This again seems
to fall into a conformance test suite, this system playing the same role as our
conformance MT-driver. However, there is an "interoperability"
aspect to it: that in the test business context, error messages are sent to the
right destination (see the "ErrorURI" element, which could be the URI of our
test driver, or the URI of the other party's MSH).

QDGI: is there any 3rd party URI (ErrorURI) and MSH used for collecting errors?

 


