Dr. Serm,
You may have the wrong URLs. Please check:
1. TSS: http://msi.postech.ac.kr/korbit (or http://www.korbit.org)
2. INNO MSH: http://pado.innodigital.co.kr:17777/ebms/admin/UI
3. KTnet MSH: http://203.242.200.22/ebxml
Sorry, I am busy with my course work, so I will send my concrete answers and comments to you later. For now, I have written brief comments below.
In the IIC testbed framework,
1. There is a need for a standard API between the Test Service and the MSH. It is hard to build a generic Test Service for so many different MSHs.
<serm>
I totally understand this. I think IIC should make standard API
definitions available which sufficiently address its requirements.
</serm>
MIKE: Currently, the Test Service is defined without the use of an API. For example, the "Dummy" action as described in section 3.2.4.2.2 describes how the Dummy action creates a response message, detailing what response message content must be provided by the Test Service, including MessageId and RefToMessageId. ConversationId is not specified, although it is required, along with other required MessageHeader content. I agree that providing an API definition would be a clearer way of defining the Test Service than the current text.
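As a rough sketch of what Mike describes: the Dummy action's response must carry MessageId and RefToMessageId, and a well-defined API would also pin down ConversationId and the other required MessageHeader content. The fragment below is illustrative only (the party IDs and identifier values are invented; only the element names come from the ebMS 2.0 MessageHeader):

```xml
<!-- Hypothetical (partial) response header from the Test Service "Dummy"
     action. Values are invented for illustration. -->
<eb:MessageHeader
    xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
  <eb:From><eb:PartyId>urn:example:testservice</eb:PartyId></eb:From>
  <eb:To><eb:PartyId>urn:example:testdriver</eb:PartyId></eb:To>
  <!-- Required, but currently unspecified in section 3.2.4.2.2 -->
  <eb:ConversationId>conv-001</eb:ConversationId>
  <eb:MessageData>
    <eb:MessageId>response-msg-001</eb:MessageId>
    <!-- Must reference the MessageId of the Dummy request -->
    <eb:RefToMessageId>request-msg-001</eb:RefToMessageId>
  </eb:MessageData>
</eb:MessageHeader>
```

An API definition would state, for each Test Service action, exactly which of these fields the service must populate and how.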
2. In service mode (interoperability testing), there is a need for an API between the Test Service and the Test Driver. A typical MSH doesn't deliver the full ebXML message to the back-end application (Test Service), so the Test Service doesn't send the full information needed for test validation to the Test Driver.
<serm>
In your statement "..., so the test service doesn't send full info for testing validation to the test driver", do you mean that without a direct connection between the Test Driver and the Test Service, the Test Service cannot send full information for testing validation to the Test Driver? Could you describe to me (in good detail, as I am not an MSH expert) an example of such a situation? Or do you just mean that the IIC needs to define a standard interface between the Test Driver and the Test Service?
</serm>
<woo>
In our testbed, all validation is performed at the Test Driver, so the Test Service must send the received message info (header info and payload) to the Test Driver to validate the test case. But a typical MSH delivers only the payload to the application, not the header info. For this case, minimum information requirements are defined; if an MSH cannot satisfy the requirements of a specific test case, it cannot perform that test case. Also, this information is sent to the Test Driver through a defined protocol layer and message schema, because the Test Driver is remote from the Test Service.
</woo>
MIKE: Is this true, that all MSH implementations only
send payload content to the Test Service, and not ebXML Header data?
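Woo's description implies a notification message that the Test Service sends back to the remote Test Driver, carrying whatever header info and payload the MSH made available, so that validation can happen at the driver. A rough sketch of such a message (every element and attribute name here is an assumption for illustration, not a defined IIC schema):

```xml
<!-- Hypothetical Test Service -> Test Driver notification, carrying the
     received header info and payload for remote validation.
     All names and values are illustrative assumptions. -->
<TestServiceNotification testCaseId="tc-14" stepId="step-2">
  <ReceivedHeaderInfo>
    <!-- Present only if the MSH exposed the header to the application -->
    <MessageId>msg-001</MessageId>
    <ConversationId>conv-001</ConversationId>
    <RefToMessageId>msg-000</RefToMessageId>
  </ReceivedHeaderInfo>
  <ReceivedPayload contentId="payload-1">
    <!-- payload echoed back so the Test Driver can compare it
         against what was originally sent -->
  </ReceivedPayload>
</TestServiceNotification>
```

This also makes Woo's "minimum requirements" point concrete: if the MSH never hands over ReceivedHeaderInfo, any test case whose validation depends on it cannot be run against that MSH.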
3.
In
the connection mode (Conformance testing), there is a need for the function of
service mode like an independent communication channel. We must guarantee the
test service normally performs at the failed testing.
<serm>
I agree with this, and I can imagine example cases, but I would like to hear from you about real example cases indicating why this is important. I proposed this to the IIC before, but they didn't like it because they said it necessitated another communication channel, which may not be available given an organization's firewall policy. With more examples, I can make the case stronger. I also think that when an organization is running tests, it should be able to open more ports.
For your comments in #2 and #3, I had earlier proposed to the IIC to define standard web service interfaces for use in both service and connection modes. Again, help me with examples.
</serm>
<woo>
This needs some time. Please wait; I am thinking about it.
</woo>
MIKE: It has been suggested by Serm and others that a different communication channel, other than ebXML MS, be used for communication between the Test Driver and the Test Service when in "connection mode". I agree that using the same communication channel as the one being tested for Test Driver/Test Service communication may not be the best way to communicate.
In the test suite schema,
1. In the test suite, instance values (like endpoints) are mixed with general values and descriptions (like test steps). The instance values must be separated from the test suite schema.
<serm>Please provide me with an example. I do not quite understand here.</serm>
<Woo>Sorry, that was a wrong indication. All instance values can be referenced by XPath.</Woo>
MIKE: XPath expressions contain the references to configuration values. I would agree, however, with Tim Sakach that configuration data should be removed from the TestSuite schema and defined as a separate schema, since configuration will vary and is not really part of a Test Suite.
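Tim Sakach's suggestion could look roughly like this: configuration data (endpoints, party identities) lives in its own document, and the TestSuite refers to configuration entries by name rather than embedding instance values. Every name below is a hypothetical illustration, not an actual IIC schema:

```xml
<!-- Hypothetical separate configuration document; varies per deployment -->
<Configuration>
  <Party id="candidate">
    <Endpoint>http://candidate.example.com/msh</Endpoint>
  </Party>
  <Party id="driver">
    <Endpoint>http://driver.example.com/testdriver</Endpoint>
  </Party>
</Configuration>

<!-- The TestSuite then references configuration by name, not by value;
     "toParty" is resolved against the Configuration at run time -->
<TestStep>
  <PutMessage toParty="candidate"/>
</TestStep>
```

The TestSuite stays reusable across testbeds, and only the configuration document changes per installation.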
2. There is a need for a test step number or ID.
MIKE: Agreed.
<serm>Okay. Can you generate them at run time, or does it need a static reference?</serm>
<Woo>No, we do not generate the test step number. See the IIC framework document (07 March 2003): at line 1220, 'testStepContext' needs the step number.</woo>
MIKE: Agreed.
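A static step ID of the kind Woo describes for 'testStepContext' might look like this (attribute names and values are assumptions for illustration):

```xml
<!-- Hypothetical: statically assigned step ids in the TestSuite -->
<TestCase id="tc-14">
  <TestStep id="step-1"> <!-- ... --> </TestStep>
  <TestStep id="step-2"> <!-- ... --> </TestStep>
</TestCase>

<!-- ...which testStepContext can then reference statically -->
<TestStepContext stepRef="step-2"/>
```

A static ID in the document (rather than one generated at run time) lets reports, logs, and 'testStepContext' all point at the same step unambiguously.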
3.
There
is a need for the definite operation of test step. Except the GET and PUT
message any other operation is not exist in the current test suite
schema.
<serm>
You can propose other operations that you think are important and provide
example use cases.</serm>
<woo>
For reliable messaging, 'BLOCKING message' and 'RELEASE message' must be defined.
</woo>
MIKE: This may be possible, but I would need to see a use case where we need this.
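One way Woo's reliable-messaging operations might be used: block delivery so the sending MSH is forced to retry, then release and check the retransmission. The operation names follow Woo's suggestion; the surrounding step structure is an assumption:

```xml
<!-- Hypothetical reliable-messaging test case using the proposed
     BLOCKING/RELEASE operations. Structure is illustrative only. -->
<TestStep id="step-1">
  <BlockMessage/>    <!-- hold incoming messages; sender must retry -->
</TestStep>
<TestStep id="step-2">
  <PutMessage/>      <!-- send a message requiring reliable delivery -->
</TestStep>
<TestStep id="step-3">
  <ReleaseMessage/>  <!-- release the held messages -->
  <GetMessage/>      <!-- verify the retransmitted message arrives once -->
</TestStep>
```

GET and PUT alone cannot express this kind of test, because the interesting behavior (retry under blocked delivery) happens between the send and the receive.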
4. The interoperability test suite needs to be reconsidered. The current schema is the same as the conformance test suite, but in practice the test steps and assertions differ between conformance and interoperability.
MIKE: I agree that some conformance and interoperability test cases look identical. The only real difference is that one test case (conformance) is based upon specification requirements, whereas the other (interoperability) is based solely on whatever requirements are defined as "interoperability", and not upon the specification. But the TestSuite schema is identical in both cases; I do not necessarily see this as a problem. I see the problem lying in very similar-looking test requirements. We may wish to better define what "interoperability" is and what those requirements are. We have accurately defined what constitutes "conformance".
<serm>
This is not obvious to me, as I am not familiar with the ebMS test suite. Could you give me example cases so that I can forward them to Mike Kass?
</serm>
<woo>
This needs some time. Please wait; I am thinking about it.
</woo>
MIKE: An example would be the conformance (#14) and interoperability (#1.2) test cases demonstrated at XML 2003, where a conformance test verifying a matching payload and manifest reference = conformance to a single requirement in the specification, and a test where the same payload is reflected back to the test driver = interoperability. Woo's point that both test cases test the same thing is valid; perhaps we have "overspecified" the interoperability requirement for that test, and should only test that the payload is identical for interoperability, disregarding whether the manifest/payload IDs match.
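The distinction Mike draws could show up directly in the assertions. Roughly (the element names, XPath expressions, and variable names below are illustrative assumptions, not the actual test case definitions):

```xml
<!-- Conformance #14 (hypothetical rendering): the Manifest reference
     must match the payload Content-ID, per the specification -->
<VerifyContent>
  <Assertion>
    //eb:Manifest/eb:Reference/@xlink:href = concat('cid:', $payloadContentId)
  </Assertion>
</VerifyContent>

<!-- Interoperability #1.2 (hypothetical rendering): only require that
     the reflected payload is identical to the one sent, and disregard
     whether the manifest/payload ids match -->
<VerifyContent>
  <Assertion>$receivedPayload = $sentPayload</Assertion>
</VerifyContent>
```

Same schema, different assertion: the conformance case cites a specification requirement, while the interoperability case only checks observable end-to-end behavior.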