

Subject: RE: [oic] Idea about how to create conformance testing documents



That is a very interesting idea. We will probably need more than 5 docs
(for example: templates, documents with/without a package), but it looks
very promising.

Checking against the XML schema and checking which elements/attributes
have already been contributed can both be automated.
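
For illustration, here is a minimal sketch of such an automated schema
check, assuming Python with lxml and a local copy of the ODF RelaxNG
schema; the file names below are placeholders, not something we have
agreed on:

import io
import zipfile
from lxml import etree

ODF_PACKAGE = "test-document.odt"           # placeholder test document
RELAXNG_SCHEMA = "OpenDocument-schema.rng"  # local copy of the ODF RelaxNG schema

# Pull content.xml out of the ODF zip package and parse it.
with zipfile.ZipFile(ODF_PACKAGE) as pkg:
    content = etree.parse(io.BytesIO(pkg.read("content.xml")))

# Validate the document content against the RelaxNG schema.
relaxng = etree.RelaxNG(etree.parse(RELAXNG_SCHEMA))
if relaxng.validate(content):
    print("content.xml is valid against the schema")
else:
    print(relaxng.error_log)  # one entry per validation failure

The same kind of script could also report which elements a contribution
uses, so the second check would not need manual inspection either.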

One catch: as you mentioned, this approach has the drawback of creating
side effects (an extreme example is encryption: if an implementation
can't deal with encryption, it will be unable to open the file and thus
unable to check the rest). Then again, it could still be a very good
test document for 95% of the applications.

As for automatically checking failure or success, that won't always be
possible (for example, if the application must render a square but
renders a circle instead...), so I assume we will always need human
checks to some extent.

Just my 2 cents; I'm very interested in what the other members think.



Best regards,

Bart


-----Original Message-----
From: Ming Fei Jia [mailto:jiamingf@cn.ibm.com]
Sent: Sat 12/20/2008 3:46 PM
To: oic@lists.oasis-open.org
Subject: [oic] Idea about how to create conformance testing documents


Dear TC members,

On the last TC call, I mentioned this idea. Here I state it on the
mailing list.

We know that one of the TC deliverables is the test corpus documents. I
have seen that Bart has proposed the test metadata, uploaded some example
documents to the wiki, created some document categories (e.g. atomic
tests, complex tests, etc.), and called for volunteers to contribute. But
it seems we still do not clearly know how to start. ODF is an XML-based
document format that contains a great many elements and attributes. I
think the targets of the test corpus documents are: (1) to cover all the
elements (and attributes) defined in the ODF specification; (2) to have
these test corpus documents opened/saved by different real ODF
applications; and (3) to have the behavior in different ODF applications
be the same, or differ only as much as the conformance clause allows.

To satisfy the above 3 targets, one method would be to create a great
many test documents, e.g. one test document per element. But so many test
documents are difficult to manage and also difficult for ODF applications
to test against.

So another method would be to create test documents according to the
kinds of applications. For example, ODF currently defines 5 kinds of
applications (text document, presentation, spreadsheet, drawing,
database), so we would create only 5 test documents, each of which would
be one whole, big test document. The process of creating such a test
document might be:
(1) Someone, e.g. Bart, first creates the basic document framework, e.g.
packaging, root document element, metadata, etc.;
(2) Put the test document into a version management tool, e.g.
Subversion;
(3) Call for volunteers to contribute to the test document; the
contribution unit could be one or more elements;
(4) TC members review the contribution; two things shall be done during
review: one is to validate against the XML schema, the other is to verify
the semantics (open/save/behavior) in at least 2 real ODF applications;
(5) The TC meeting ballots the contribution; if it passes (it generally
will, since this is not like the specification), the contributor is
responsible for checking the contribution into the test document via the
version management tool;
(6) The TC chair or secretary marks which elements have been contributed
and maintains a statistics table, or maybe the version management tool
can produce such statistics, so that other TC members can check the
status before contributing (see the small sketch after this paragraph);
(7) Repeat the above 6 steps for many life cycles until all the defined
elements have been written into the test document, so that a complete
test document for one kind of ODF application is built up.
Finally, our test corpus would contain only 5 test documents, which would
already have been verified by real ODF applications step by step. This
process is just like iterative product development.
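
To make step (6) concrete, here is a rough sketch of how such statistics
could be produced, assuming Python with lxml, a local copy of the ODF
RelaxNG schema, and a test document checked out of the version management
tool; the file names are only placeholders, not a TC decision:

import io
import zipfile
from lxml import etree

RNG = "http://relaxng.org/ns/structure/1.0"
RELAXNG_SCHEMA = "OpenDocument-schema.rng"  # local copy of the ODF RelaxNG schema
TEST_DOCUMENT = "text-test-document.odt"    # checked-out test document

schema = etree.parse(RELAXNG_SCHEMA)
grammar = schema.getroot()

def to_clark(prefixed_name):
    # Convert a prefixed name from the schema (e.g. "text:p") into Clark
    # notation ("{uri}p"), using the namespace declarations on the
    # schema's grammar root.
    prefix, local = prefixed_name.split(":", 1)
    uri = grammar.nsmap.get(prefix)
    return "{%s}%s" % (uri, local) if uri else None

# Element names the specification's schema declares.
defined = set()
for name in schema.xpath("//rng:element/@name", namespaces={"rng": RNG}):
    if ":" in name:
        clark = to_clark(name)
        if clark:
            defined.add(clark)

# Element names the current test document already uses.
with zipfile.ZipFile(TEST_DOCUMENT) as pkg:
    content = etree.parse(io.BytesIO(pkg.read("content.xml")))
contributed = {etree.QName(el).text
               for el in content.getroot().iter()
               if isinstance(el.tag, str)}

covered = defined & contributed
print("covered %d of %d schema-defined elements" % (len(covered), len(defined)))
for name in sorted(defined - contributed):
    print("still to contribute:", name)

This sketch only looks at content.xml; styles.xml and the other package
parts would need the same treatment, and attribute coverage could be
tracked with a similar query on the schema's attribute declarations.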

The advantages of the above method, as far as I can see, may be:
(1) The TC can easily manage the test corpus documents;
(2) It avoids duplicated work when creating common document elements;
(3) It is convenient for ODF applications to verify their conformance.

The limitations or troubles that the above method may bring, as far as I
can see, are:
(1) The generated test corpus documents are used to verify ODF
application conformance; they cannot be used to validate an individual
ODF document, which can already be done with the existing ODF validators;
(2) It requires that each feature unit (which may be composed of one or
more elements) be independent within the whole big test document, that
is, when an ODF application renders the test document and some feature
unit fails, the application can skip that feature unit and continue to
render the next one;
(3) It seems difficult to record the verification result (pass or fail
for each feature unit) automatically if the ODF application does not
support that. Maybe we could provide some metadata in the document; I
need the TC members' help to work out a solution.

This is only my immature idea; any comments are welcome. One reason to
raise it now is that we need to define the test corpus contribution
process ASAP, since time flies and we are already approaching the end of
2008 :) If the process is not defined, the actual work will be difficult
to move forward.

   Best Regards,

   Mingfei Jia
   IBM Lotus Symphony Development
   IBM China Software Development LAB, Beijing
   Tel: 86-10-82452493   Fax: 86-10-82452887
   NOTES:Ming Fei Jia/China/IBM   E-mail: jiamingf@cn.ibm.com
   Address: No.28 Building, Zhong Guan Cun Software Park, No.8 Dong Bei
   Wang West Road, ShangDi, Haidian District, Beijing 100193, P.R.China




