
Subject: RE: [oic] Re: Idea about how to create conformance testing documents


Thanks much, Bart. I've added my response comments below.

"Hanssens Bart" <Bart.Hanssens@fedict.be> wrote on 02/27/2009 04:19:51 AM:


>
> Hello Mingfei,
>
>
> Thanks for this great mail !
>
>
> * About creating test documents
>
> Personally, I think we'll end up with both a few fairly "large" documents and
> a whole lot of specific test documents.
>
> Meeting in the middle, like the test scenarios that Dennis and Stephen are
> working on, is a very good way to get things started. Once we start testing
> them, we'll find issues, and sometimes we might want to isolate an issue in
> order to better understand it.
> Meaning: creating a small, specific test document for it.
>
> After doing that, or if someone has created separate, small test documents for
> some other feature(s), we could try to combine 2 or 3 documents into 1 larger one
> and retest it. This retesting is important, because combining documents (well,
> actually, features) may expose issues in unexpected ways.
>
> For example:
> - the last digit of a page number may get truncated after 100 pages
> - a colored table might go wrong on a page with an image as background
> - some apparently unrelated features may crash an application when combined
> (I am not making this up, the vendor correctly identified and fixed this by
> the way, but my point is that we *might* encounter strange behavior)
>
>
> Sometimes we have no choice but to create a small test file.
>
> For instance: a document cannot be encrypted and not encrypted at the same
> time, so we have to create a small test document (well: at least 3, for
> .odt, .odp and .ods) for that.

I agree: first we create small test documents for atomic features, then we create complex documents that combine those small test documents. As I understand it, the purpose of the first step is to make sure our test suite covers the entire ODF specification while avoiding overlap between feature units; the purpose of the second step is to test complex application scenarios. We should now start on the first step.
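To make the first step a bit more concrete, here is a rough sketch (my own illustration, not a TC artifact) of how an atomic test document could be generated and sanity-checked in Python. The packages it builds are deliberately minimal and are not complete, valid ODF documents; only the packaging rule for the MIME type stream (section 17.4, mentioned below) is exercised:

```python
import io
import zipfile

# MIME types defined by the OpenDocument specification for the three
# document kinds discussed in this thread.
ODF_MIMETYPES = {
    ".odt": "application/vnd.oasis.opendocument.text",
    ".ods": "application/vnd.oasis.opendocument.spreadsheet",
    ".odp": "application/vnd.oasis.opendocument.presentation",
}

def make_minimal_package(ext, content_xml):
    """Build a stripped-down ODF-style zip package in memory.

    Real test documents also need styles.xml, META-INF/manifest.xml,
    etc.; this only shows the packaging step for one atomic feature.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        # The mimetype stream must be the first entry and must be stored
        # uncompressed (a bare ZipInfo defaults to ZIP_STORED).
        z.writestr(zipfile.ZipInfo("mimetype"), ODF_MIMETYPES[ext])
        z.writestr("content.xml", content_xml)
    return buf.getvalue()

def mimetype_ok(package, ext):
    """Check the packaging rule: first entry is an uncompressed mimetype."""
    with zipfile.ZipFile(io.BytesIO(package)) as z:
        first = z.infolist()[0]
        return (first.filename == "mimetype"
                and first.compress_type == zipfile.ZIP_STORED
                and z.read("mimetype").decode() == ODF_MIMETYPES[ext])
```

A generator like this could stamp out the three MIME-type variants of each atomic test document from one content template, which matters once the document counts grow into the hundreds.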
>
>
>
> * Regarding "feature unit"
>
> The easiest way would be to take a look at the spec and look for numbered
> parts. For example: 17.4 MIME Type stream
>
> Although I'm not sure if this will always lead to the "unit".
> For example: is 17.7.3 File Entry a unit ?
> Or should we consider the paragraphs like "Full Path" as a unit ?

Yes, I also think the feature unit should not be defined too rigidly. We could take a heading-2 section of the spec as a feature unit, or a heading-3 section, or even a single element. The purpose of defining feature units, and of putting some restrictions on them, is to let our test suite cover the entire ODF spec without much feature overlap, so that users of the test suite can identify the exact weak points of their ODF applications through interoperability testing. In a mixed-feature scenario we often cannot tell "this is a table issue" or "this is a background-image issue" from interoperability testing alone; people would have to dive into the application implementations, which is of course outside the scope of the OIC TC.

And since we agreed above to work in two steps, first creating small test documents and then complex scenario documents, how about simply defining the feature unit as a single element? ODF 1.1 has about 540 elements, and ODF 1.2 has about 600. If we need at least 3 test documents per element, one for each of the 3 MIME types (text, spreadsheet, presentation), the totals come to about 1620 test documents for 1.1 and 1800 for 1.2. In that case I would also like each test case to contain at least 3 test documents for its element feature, so the total number of test cases would still be about 540 for 1.1 and 600 for 1.2. The number of test documents is still somewhat large, but I do not know of a better way to define the feature unit that both serves its purpose and avoids unnecessary confusion. By confusion I mean that we should avoid specifying feature units by individual personal judgement; we need a definite rule for it.
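The document counts above follow from simple multiplication; as a quick sanity check (element counts are the rough estimates from this mail, not exact figures from the spec):

```python
# Approximate element counts quoted in the discussion above.
ELEMENT_COUNTS = {"ODF 1.1": 540, "ODF 1.2": 600}
MIME_VARIANTS = ("text", "spreadsheet", "presentation")

def estimate_documents(elements):
    """One test case per element, one document per MIME type variant."""
    return elements * len(MIME_VARIANTS)

for version, elements in ELEMENT_COUNTS.items():
    print(version, "->", elements, "test cases,",
          estimate_documents(elements), "test documents")
```

This prints about 1620 documents for ODF 1.1 and 1800 for ODF 1.2, matching the figures above.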
>
>
>
> * About the process
>
> I like your draft proposal, although I'm not 100% sure about 4.3
>
> "The test document shall pass semantic verification. That is, the behavior
> is consistent when the test document is opened and rendered by different
> existing ODF applications. If there is a conflict, the TC votes to decide
> which behavior is correct by majority vote."
>
> Maybe it's included in the "semantic verification", but the ODF spec itself
> must be used.
>
> For example: if the spec says that a thumbnail must be 128x128 pixels PNG, but
> the majority of the OIC would decide JPEG is just fine, it's still not correct
> according to the spec. Perhaps the spec itself must be changed, but it would
> be unfair to decide that not following this version of the spec is OK.
>
> On the other hand, if the spec is NOT clear about it, the OIC could test and
> see what is common practice and indeed vote for having things included in
> a profile or a "best practice" guide, or as an improvement for ODF-Next

Good comments, I totally agree. For semantic verification: the ODF spec comes first, then common practice; and if, in some exceptional case, common practice should win over the ODF spec, the issue is transferred to the ODF TC :) I'll add some words to the process on the wiki to reflect this standpoint. Thanks.
>
>
>
> * Meta data
> I don't completely understand your question here:
>
> "A question here: I know test meta data is important, but I am not clear
> where test meta data is appropriate to use for test document creation.
> I understand test meta data should be used for interoperability testing to
> record test cases, right?"
>
> Do you mean "how could metadata help us for interoperability testing" ?
>
> We could indeed use some kind of metadata (xml) to record the results of the
> test cases in various implementations (detailing what part of the spec is to
> be tested, and what the result is). Non-members could also send in these test
> results as an XML attachment in a mail to the oic-comment list.
>
> Members could also do that using JIRA, so perhaps that's a more practical
> approach:
>
> - providing the test documents + an overview of what is to be tested (as an HTML
> page) + a very simple text template.
> - members and non-members could use this very simple text template to send in
> results
> - volunteers from the OIC could verify and record the results in JIRA

Please ignore my previous ignorant question; I think I now understand the purpose of the test metadata better. Can I take it that a test case = test documents + test metadata? Anyone who wants to contribute a test case would then create both the test documents and the test metadata themselves and submit them to the TC.

On the other hand, does the TC plan to use this test metadata to describe and present the test cases stored in the TC repository? If so, how should the metadata be presented on the TC web site? We might design a web form for contributors to fill in, or just use the OASIS document repository (though the latter does not seem to fit the design of the metadata template). The metadata is an XML template, but users generally will not manipulate XML directly, so we may have to provide some UI for entering it. We could also use the metadata sent in by contributors to present the test cases on the TC web site automatically, so that users can access our test suite and understand the test scenarios, test results, and so on.
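As one possible illustration of the metadata idea, a test result could be serialized as a small XML record and parsed back for display on a web page. Every element and attribute name below is a hypothetical placeholder of my own; the TC has not defined any schema yet:

```python
import xml.etree.ElementTree as ET

def make_result_record(test_id, spec_section, application, result, notes=""):
    """Serialize one test result as XML (all element names hypothetical)."""
    root = ET.Element("testResult", id=test_id)
    ET.SubElement(root, "specSection").text = spec_section   # part of spec tested
    ET.SubElement(root, "application").text = application    # implementation tested
    ET.SubElement(root, "result").text = result              # e.g. "pass" / "fail"
    if notes:
        ET.SubElement(root, "notes").text = notes
    return ET.tostring(root, encoding="unicode")

def read_result_record(xml_text):
    """Parse a record back into a plain dict, e.g. for automatic web display."""
    root = ET.fromstring(xml_text)
    data = {"id": root.get("id")}
    for child in root:
        data[child.tag] = child.text
    return data
```

A simple text or web form could collect the same fields from non-members and be converted into this XML by a volunteer or a script, so contributors never have to touch the XML directly.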
>
>
>
> Best regards,
>
> Bart
