oiic-formation-discuss message



Subject: Re: [oiic-formation-discuss] Acid Tests


On Thu, Jun 12, 2008 at 8:16 PM, <robert_weir@us.ibm.com> wrote:


"Dave Pawson" <dave.pawson@gmail.com> wrote on 06/12/2008 01:57:00 PM:


> 2008/6/12  <robert_weir@us.ibm.com>:
> >
>
> > It is not clear to me that a single ODF document can be comprehensive.  In
> > particular, there may be mutually-exclusive options at the document level.
> > The spreadsheet grid is either on or off, presentations are in autoplay or
> > they are not, etc.
>
> Ditto. Also, automated testing seems at the moment quite unlikely with ODF as it is.
>
> I switched to OK on 'document' once I found out it was for a browser test :-(
>
> Even with 25 documents I think parameterization would be a chore.
>

What may be doable, and more in line with the HTML Acid tests, is to forgo any claims that it is a comprehensive test.  Instead, we might think of it like this:

1) TC conducts a study of the state of ODF interoperability today.
2) TC identifies the 10, 20, 50 or whatever features that in practice are causing interoperability problems.
3) TC creates a "State of ODF Interoperability" report, and accompanies it with an Acid-style test that exercises those 10, 20 or 50 features.
4) The press will pick up on this, but we should let others deal with the 'name and shame'.
5) When the state of implementation improves, loop back to #1.

You've pretty much hit the nail on the head here, and also touched on another potentially interesting deliverable: a report on the commonly demanded features and their interoperability status (including areas currently well handled). Such a report would be useful both for the acid test and for implementors looking to pick off the 'low-hanging fruit' first, thereby further lowering the barrier to entry. As starting points, one might look at Debian's Popularity Contest and Microsoft's Customer Experience Improvement Program:

This program helps determine which Office 2003 features are used — how often, how much, and by what percentage compared with other Office applications installed on the user's computer. The resulting data helps Microsoft prioritize bug fixes by feature and product and determine whether a feature is used enough to be included in future releases of a product.

Even if this is something a vendor would typically look at, creating a histogram of the features used in a large body of sample documents would be a highly automatable (and potentially rewarding) task.
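Something along these lines might do for a first pass (an untested sketch, assuming Python and a directory of sample documents; the file-extension pattern and the choice to count raw element names in content.xml are my assumptions, not a settled design):

    import sys
    import zipfile
    import xml.etree.ElementTree as ET
    from collections import Counter
    from pathlib import Path

    def feature_histogram(sample_dir):
        """Tally XML element names across the content.xml of every ODF file."""
        counts = Counter()
        for path in Path(sample_dir).rglob("*.od[tsp]"):  # .odt/.ods/.odp
            try:
                with zipfile.ZipFile(path) as z:
                    root = ET.fromstring(z.read("content.xml"))
            except (zipfile.BadZipFile, KeyError, ET.ParseError):
                continue  # skip damaged or non-ODF files
            # Tags carry their namespace, e.g. '{urn:oasis:...:table:1.0}table'
            counts.update(el.tag for el in root.iter())
        return counts

    if __name__ == "__main__":
        for tag, n in feature_histogram(sys.argv[1]).most_common(20):
            print(n, tag)

Run over a large enough corpus, the top of that list is essentially the raw material for the "commonly demanded features" part of the report.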

You'll note that 'comprehensive' was deliberately dropped from the definition in my earlier post, in favour of 'complex document'. I'm still not settled on this point as I have identified two (possibly complementary) approaches:
I imagine that, by way of example, should tables be identified as an interop problem area, then a [page of a] document could be created with a complex/nested/formatted table using most/all of the specified attributes, and a reference screenshot taken for comparison. As I mentioned before, in the absence of a reference implementation both generating the ODF code and producing the reference can be quite challenging, but it is a worthwhile endeavour nonetheless.
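By way of a very rough sketch of the generation half, assuming the third-party odfpy library (the trivial 2x2 table here stands in for the complex/nested/formatted one a real test would need):

    from odf.opendocument import OpenDocumentText
    from odf.table import Table, TableColumn, TableRow, TableCell
    from odf.text import P

    doc = OpenDocumentText()
    table = Table(name="InteropTable")
    table.addElement(TableColumn(numbercolumnsrepeated=2))
    for r in range(2):
        row = TableRow()
        for c in range(2):
            cell = TableCell()
            cell.addElement(P(text="row %d, col %d" % (r, c)))
            row.addElement(cell)
        table.addElement(row)
    doc.text.addElement(table)
    doc.save("table-test.odt")

Opening table-test.odt in each implementation and diffing a screenshot against the reference would then be the (largely manual) comparison half.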

Similarly, documents uploaded to an online test engine could be given a green/orange/red indication for things like XML validity, ODF conformance, ratio of extensions to native code, delta from sample documents, etc.
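As an illustration of what the validity and extension-ratio checks might look like (the 5% threshold and the namespace-prefix test are placeholders of mine, not proposed conformance criteria; real conformance checking would validate against the ODF schema):

    import zipfile
    import xml.etree.ElementTree as ET

    ODF_NS = "{urn:oasis:names:tc:opendocument:xmlns:"

    def traffic_light(odf_path):
        """Return 'green', 'orange' or 'red' for one uploaded document."""
        try:
            with zipfile.ZipFile(odf_path) as z:
                root = ET.fromstring(z.read("content.xml"))
        except (zipfile.BadZipFile, KeyError, ET.ParseError):
            return "red"  # not a package, no content.xml, or ill-formed XML
        tags = [el.tag for el in root.iter()]
        foreign = sum(1 for t in tags if not t.startswith(ODF_NS))
        ratio = foreign / len(tags)  # crude 'extensions to native' measure
        return "green" if ratio == 0 else ("orange" if ratio < 0.05 else "red")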

I'm not sure which of these two approaches is the more interesting/useful in the context of an acid test, or whether both are required, but perhaps it is better to specify the requirements such a test must meet without specifying the implementation. In any case, an online test engine would be interesting, e.g. for verifying profile conformance.

Sam


