Subject: Re: [oiic-formation-discuss] Reference implementation. Layout perfect.


2008/6/13  <robert_weir@us.ibm.com>:
>
> "Dave Pawson" <dave.pawson@gmail.com> wrote on 06/13/2008 02:49:47 AM:
>
>> Rob talked about class 4, Rick mentioned an XML language which was used
>> to specify page layout, David Gerard suggested 'layout perfect' as an
>> pragmatic version of pixel perfect.
>> We could benefit from a reference implementation.
>>
>> Taking this in combination,

<snip/>


>
> Or just do it by manual inspection.  There are ways of making that approach
> tractable.
>
> For example, we could, as part of the test case development, produce a
> picture or simple text description of each test case's expected result.  So
> something like text saying "Circle with dotted green edge and yellow fill".
> So test inputs would be an ODF document and an image or text description of
> expected results.  Hundreds of pairs of these.
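To make that proposal concrete, the pairing might be organised something
like this (a Python sketch; the CSV manifest name and its columns are
hypothetical, purely illustrative):

    import csv

    def load_test_pairs(manifest_path):
        """Yield (odf_file, expected_description) pairs from a manifest.

        One row per test case, e.g.:
        circle-001.odt,"Circle with dotted green edge and yellow fill"
        """
        with open(manifest_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                yield row["odf_file"], row["expected"]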

I think that would be challenged quite easily.
How would you originate the items for comparison (the expected test results)?
How would you distribute identical items to anyone wanting them?
You would need to define how the comparison is made,
and what accuracy constitutes a pass, etc.
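To illustrate what "define how the comparison is made" would involve,
here is a minimal sketch (Python, assuming Pillow is available and both
renderings are same-sized PNGs; the 2% threshold is only a placeholder,
not an agreed pass criterion):

    from PIL import Image, ImageChops

    def proportion_differing(path_a, path_b):
        """Fraction of pixels that differ between two same-sized images."""
        a = Image.open(path_a).convert("RGB")
        b = Image.open(path_b).convert("RGB")
        if a.size != b.size:
            return 1.0  # treat a size mismatch as total failure
        diff = ImageChops.difference(a, b)
        changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        return changed / (a.size[0] * a.size[1])

    def passes(path_a, path_b, threshold=0.02):
        return proportion_differing(path_a, path_b) <= threshold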

>
> Vendors could then write simple automation to take each of the test suite
> ODF files, load it in their editor and create a screen shot which they could
> save in JPG or PNG format.
>
> Then testing becomes a simple task of comparing the two images, or the text
> description of the images with the images.  This is something that large
> vendors can do inexpensively at their off-shore labs.
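That automation could be as small as this sketch (Python; it assumes a
product that can render headlessly, here via LibreOffice's
"soffice --headless --convert-to png", which renders the first page of
each document to a PNG):

    import subprocess
    from pathlib import Path

    def render_suite(suite_dir, out_dir):
        """Render every .odt test document in suite_dir to a PNG in out_dir."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        for doc in sorted(Path(suite_dir).glob("*.odt")):
            subprocess.run(
                ["soffice", "--headless", "--convert-to", "png",
                 "--outdir", out_dir, str(doc)],
                check=True)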


Please look at these two (what: printouts? screenshots?) and judge
whether they are identical (for some definition of identical).

I know it wouldn't pass an ISO 9000 quality check.


>
> But even smaller vendors or open source projects can use the above approach
> and adapt it to services like Amazon's "Mechanical Turk" web site
> (http://www.mturk.com/mturk/welcome).  This is a way where simple things
> like this can be farmed out for as little as a penny per test case to
> execute.  If the ODF test suite has 1,000 test cases, then visual
> comparisons for the entire test suite can be done for $10 (plus $5
> commission to Amazon).

How about 16K tests?
I somehow think I'd be suspicious of results garnered that way, Rob.

>
> Of course, untrained testers working for a penny a test case may not give
> the most accurate results.  So maybe you run each test 10 times with 10
> different users and then only manually examine test cases where 2 or more
> people said it had failed.  In the end, the cost of test execution is $150,
> something very affordable, and doable not only for major product releases,
> but even for internal milestone builds.
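The triage step described there might look like this sketch (Python; the
mapping from test id to ten reviewer pass/fail votes is a hypothetical
data shape):

    def needs_manual_review(verdicts, min_failures=2):
        """Return test ids that at least `min_failures` reviewers marked failed."""
        return [tid for tid, votes in verdicts.items()
                if sum(1 for ok in votes if not ok) >= min_failures]

    # e.g. needs_manual_review({"circle-001": [True]*8 + [False]*2})
    # -> ["circle-001"]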

If your test count is accurate, and you can get that service at that price?
My first-cut estimate is 20 * 800 = 16,000 tests for compliance against
ODF testable statements.
[[We'll see how wrong I am!]]

>
> If we didn't want to involve Amazon in this, it wouldn't be hard to host
> such a mechanism on another web site.  Maybe there's enough interest that it
> could be done as a volunteer effort?
>
> If we can do complete automated testing, then great.  But we shouldn't rule
> out the possibility of testing 1,000's of test cases manually.  It can be
> done.

Yes, agree. A vendor needs the motivation to do it, and confidence in
the test outcome.
I think Michael has a point about vendor interest. Curiosity, if nothing else.
There's motivation.
Only OASIS can provide the confidence in the test outcome, via our work.


regards






-- 
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk

