oiic-formation-discuss message



Subject: Re: [oiic-formation-discuss] Reference implementation. Layout perfect.



"Dave Pawson" <dave.pawson@gmail.com> wrote on 06/13/2008 02:49:47 AM:

> Rob talked about class 4, Rick mentioned an XML language which was used
> to specify page layout, David Gerard suggested 'layout perfect' as an
> pragmatic version of pixel perfect.
> We could benefit from a reference implementation.
>
> Taking this in combination, consider a transformation from
> some ODF test documents into XML, something along the lines of:
> <page xsz='11' ysz='9' units='in'>
>   <line/> <!-- A blank line -->
>   <line fs='12pt'>Para content</line>
>   <graphic x='xpos' y='ypos' units='mm' xsz='12' ysz='24'/>
> </page>
>
> I think this class of language could provide a reference implementation.
> How it might be used is another matter. I'm presuming that it is feasible
> to translate a text run into lines given a font size. Hyphenation
> algorithms are available.
> It's almost an XSL-FO implementation, but not quite. Worst case, for
> test documents, this could be hand-crafted.
>
> How this might be presented, for comparison with a live page implementation of
> the same ODF document, I'm less sure. Is CSS good enough? I don't know.
> It wouldn't take much to hack this into XSL-FO for a PDF realisation.
>
> I could also translate it into annotated text for an audio rendition
> for Section 508 equivalence.
>
> An idea.
>

Or just do it by manual inspection.  There are ways of making that approach tractable.

For example, as part of the test case development, we could produce a picture or a simple text description of each test case's expected result: something like "Circle with dotted green edge and yellow fill".  The test inputs would then be an ODF document plus an image or text description of the expected result.  Hundreds of pairs of these.
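
To make that concrete, the pairs could live in a simple manifest.  A rough sketch in Python (the file name and column names here are made up, just to illustrate):

    import csv

    # Hypothetical manifest pairing each test document with its expected result.
    # Example row: draw-042,draw-042.odt,"Circle with dotted green edge and yellow fill"
    MANIFEST = "odf-test-manifest.csv"

    def load_test_cases(path=MANIFEST):
        """Yield (test id, ODF file, expected-result description) tuples."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield row["id"], row["odf_file"], row["expected"]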

Vendors could then write simple automation to load each of the test suite's ODF files in their editor and capture a screenshot, saved in JPG or PNG format.
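
That automation could be as small as a loop around a headless converter.  A sketch, assuming an soffice-style command line with a --convert-to filter; the exact flags vary by product and version, so each vendor would substitute its own scripting hook:

    import subprocess
    from pathlib import Path

    def render_screenshots(odf_dir, out_dir):
        """Render each test document to PNG with a headless office binary."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        for odf in sorted(Path(odf_dir).glob("*.odt")):
            # Illustrative only: assumes an soffice-style CLI is on the PATH.
            subprocess.run(
                ["soffice", "--headless", "--convert-to", "png",
                 "--outdir", str(out_dir), str(odf)],
                check=True)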

Testing then becomes a simple task of comparing the generated screenshot against the reference image, or against the text description of the expected result.  This is something large vendors can do inexpensively at their off-shore labs.
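
Where we ship reference images rather than text, an automated first pass could discard the exact matches so humans only look at the rest.  A minimal sketch with the Python Imaging Library (exact pixel equality is too strict across platforms and fonts, so a real pass would allow some tolerance):

    from PIL import Image, ImageChops

    def images_differ(expected_png, actual_png):
        """True if the two renderings differ pixel-wise."""
        a = Image.open(expected_png).convert("RGB")
        b = Image.open(actual_png).convert("RGB")
        # getbbox() is None only when the difference image is all black,
        # i.e. the two images are identical.
        return a.size != b.size or ImageChops.difference(a, b).getbbox() is not None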

But even smaller vendors or open source projects can adapt this approach to services like Amazon's "Mechanical Turk" web site (http://www.mturk.com/mturk/welcome), where simple tasks like this can be farmed out for as little as a penny per test case.  If the ODF test suite has 1,000 test cases, then visual comparison of the entire suite can be done for $10 (plus a $5 commission to Amazon).
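
The arithmetic, spelled out (the penny reward and half-cent commission are the figures above, not a quote from Amazon):

    def turk_cost(n_cases, reward=0.01, commission=0.005, runs=1):
        """Total cost of farming out n_cases visual checks, runs times each."""
        return n_cases * runs * (reward + commission)

    print(turk_cost(1000))           # 15.0 -- $10 in rewards + $5 commission
    print(turk_cost(1000, runs=10))  # 150.0 -- the redundant scheme below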

Of course, untrained testers working for a penny a test case may not give the most accurate results.  So maybe you run each test 10 times with 10 different users and then manually examine only the test cases that 2 or more people said had failed.  In the end, the cost of test execution is $150: very affordable, and doable not only for major product releases, but even for internal milestone builds.
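
Tallying the verdicts is trivial.  A sketch, assuming each worker's answer comes back as a (test id, passed) record:

    from collections import Counter

    def needs_review(verdicts, threshold=2):
        """Test ids that at least `threshold` workers marked as failed.

        verdicts: iterable of (test_id, passed) pairs, one per worker
        per test case (e.g. 10 workers x 1,000 cases).
        """
        fails = Counter(tid for tid, passed in verdicts if not passed)
        return sorted(tid for tid, n in fails.items() if n >= threshold)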

If we didn't want to involve Amazon in this, it wouldn't be hard to host such a mechanism on another web site.  Maybe there is enough interest that it could be done as a volunteer effort?

If we can do complete automated testing, then great.  But we shouldn't rule out the possibility of testing thousands of test cases manually.  It can be done.

-Rob
