Subject: Re: [Re: [oiic-formation-discuss] Reference implementation. Layout perfect.]



>
> ----- Message from Shawn <sgrover@open2space.com> on Fri, 13 Jun
>
> If we can define the rules that say a document/spreadsheet/presentation
> either passes or fails a specific test, then we *should* be able to
> write code to implement that rule.  HOW we do that is, well, irrelevant,
> I think.  I don't think this TC is here to build the tools, but only to
> define them.  (Building some samples would be handy, though.)  Of course,
> as the convener/director of the TC you are free to correct me here. :)
>


It is a line we'll need OASIS staff to guide us on.  Note that with today's technology, you can do a good bit of tool development purely within XML, with XSLT, Schematron, XProc, etc.  I don't think anyone would judge us harshly if the TC's documents came with such XML, either as an annex or as a download supplement, much as we make the ODF schema RNG files available for download.
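
For example, a conformance rule can be expressed as a Schematron pattern.  A rough sketch only; the particular rule here (a float-typed cell must also carry an office:value attribute) is just an illustration, not anything we've decided on:

  <schema xmlns="http://purl.oclc.org/dsdl/schematron">
    <ns prefix="table" uri="urn:oasis:names:tc:opendocument:xmlns:table:1.0"/>
    <ns prefix="office" uri="urn:oasis:names:tc:opendocument:xmlns:office:1.0"/>
    <pattern>
      <rule context="table:table-cell[@office:value-type='float']">
        <assert test="@office:value">A float-typed cell must also
        carry an office:value attribute.</assert>
      </rule>
    </pattern>
  </schema>

Something like that runs in any ISO Schematron processor, so it stays entirely in XML territory.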

However, if the effort ends up being 20K lines of Java code, coupled with a user interface, then I think that would be a problem.  In particular, the IPR status of such an output is murky in my eyes.

So I'd try to make it so the formal definitional part happens in OASIS, but the bulk of the code occurs elsewhere.  If we wanted to, we could define a test execution markup: something very simple that declares which tests should be run, in what order, how they are scored, etc.  Or do something with Ant and XSLT.  The point is to define a clean interface between how we define the tests and the possibly platform- and application-specific code that actually executes the tests.
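
To make that concrete, a test run might be declared with something like the following.  This is purely a sketch; every element and attribute name here is invented for illustration:

  <test-run profile="odf-conformance-draft">
    <!-- Tests run in document order; each points at a rule set and a sample. -->
    <test id="t001" rules="rules/float-cells.sch" input="samples/budget.ods"/>
    <test id="t002" rules="rules/styles.sch" input="samples/letter.odt"/>
    <!-- Scoring/reporting is kept separate from the test definitions. -->
    <scoring>
      <report format="xhtml"/>
    </scoring>
  </test-run>

A harness on any platform could then consume that and invoke whatever platform- or application-specific code actually drives the tests.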

> If we can boil the testing routine down to code (or a set of rules that
> can be coded), then we accomplish a number of things:
> - indiscriminate and fair testing, i.e. no favoritism
> - removal of administrative overhead
> - easy testing.  Volunteers/workers don't need to be organized, results
> collated, etc.
> - efficient testing, resulting in much more frequent testing/revisions
> (hopefully).
>
>
> An ODF document is ultimately a bunch of data organized according to the
> rules of the ODF standard.  If we cannot properly define the rules that
> say whether that document conforms to the standard, or is written in a
> way that interoperates well with other applications, then we cannot
> write any code.  Which means that all testing is manual and subjective.
> Which means the testing is more or less meaningless.  Subjective means
> a moving target.

I still think human testing is reasonable.  Remember, interoperability with office documents is often about human perceptions: "This looks wrong."  "The document changed."  "This isn't the same."  The users who are dissatisfied with fidelity problems do not seem to have any qualms about complaining, even though they have not formally defined "fidelity".

-Rob
