Subject: Re: [oiic-formation-discuss] Acid Tests (was: Re: [oiic-formation-discuss] Interoperability versus Conformity)


On Thu, Jun 12, 2008 at 7:45 AM,  <robert_weir@us.ibm.com> wrote:
>
> "Sam Johnston" <samj@samj.net> wrote on 06/11/2008 10:10:08 AM:
>
>>
>> Indeed an acid test in the form of a (multi-page?) document with
>> progressively complex/esoteric directives could be a useful device
>> for 'naming and shaming' poor implementations, as has proven very
>> effective for W3C standards. It will be interesting to see if there
>> is a place for something like this alongside a more complete test suite,
>> or indeed if there is even a need for both (presumably the former
>> will enjoy more eyeballs which is arguably a good thing, if it can
>> be brought up to the task).
>>
>> This should be a lot easier for spreadsheets (at least formulas)
>> where a green/orange/red matrix could be set up, potentially with
>> each field dependent on the last based on implementation priority. Acid3
>> appears to do something like this (eg I just got 71/100 on FF3rc2).
>>
>
> Can we drill into this idea a bit more?  I'm sure we've all seen or heard of
> the browser Acid tests.  But what is it exactly, in conformance terms?
>
> Specifically:
>
> 1) Is it a complete conformance assessment of the tested standards
> (CSS2/XHTML)?
>
> 2) Or is it more of a "challenge piece" exercising the 10 or so features
> that current implementations are missing or getting wrong, with the idea of
> drawing attention to those features in hopes of moving implementations
> forward?
>
> 3) Does it focus on features that no one gets right (initially)?  Or does it
> start with those that some get right and some don't?  (It would seem that
> the greatest practical pain would be around features that some implement but
> others don't.  It seems to me that features that no one implements cannot be
> the cause of interoperability problems.)

All three are partly correct.  The goal should be a complete conformance
assessment, but with a targeted focus allowed.

A challenge piece gets active testing out there sooner, and focusing on
known defects speeds up fixing them, so points 2 and 3 are optimisations
that should be applied.  One thing missing from the list is detecting
invalid produced documents: the case where a program claims to export
valid ODF 1.2 but shoves its own extensions into the output.
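
To make that concrete, here is a rough sketch in Python of the kind of
extension check I mean: unzip the package, walk the XML parts, and flag
elements whose namespaces are not on a whitelist of ODF namespaces.  The
whitelist and part names below are illustrative assumptions only; a real
conformance check would validate against the ODF RELAX NG schema instead.

# Rough sketch: flag elements in an ODF package whose namespaces fall
# outside a deliberately incomplete, illustrative whitelist.
# Standard library only; a real check would use the ODF RELAX NG schema.
import sys
import zipfile
import xml.etree.ElementTree as ET

ODF_NS_PREFIX = "urn:oasis:names:tc:opendocument:xmlns:"
OTHER_ALLOWED = {
    "http://www.w3.org/1999/xlink",
    "http://purl.org/dc/elements/1.1/",
    "http://www.w3.org/1998/Math/MathML",   # illustrative, not exhaustive
}

def foreign_elements(odf_path):
    """Yield (part, tag) for elements outside the known ODF namespaces."""
    with zipfile.ZipFile(odf_path) as pkg:
        for part in ("content.xml", "styles.xml", "meta.xml"):
            if part not in pkg.namelist():
                continue
            root = ET.fromstring(pkg.read(part))
            for elem in root.iter():
                ns = elem.tag[1:].split("}")[0] if elem.tag.startswith("{") else ""
                if not ns.startswith(ODF_NS_PREFIX) and ns not in OTHER_ALLOWED:
                    yield part, elem.tag

if __name__ == "__main__":
    hits = list(foreign_elements(sys.argv[1]))
    for part, tag in hits:
        print(f"extension element in {part}: {tag}")
    print("clean" if not hits else f"{len(hits)} foreign elements found")

This says nothing about whether the extensions are legal under the spec's
extensibility rules; it only makes them visible, which is the point.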

> 4) What is the relationship between Acid and other forms of conformance
> testing?  For example, the W3C has a CSS2 test suite
> (http://www.w3.org/Style/CSS/Test/).  Why is Acid so much better known?  Is
> this because of technical reasons?  Or the immediacy of the presentation (a
> smiley face when good, road kill when bad)?

It's a simple technical reason: the W3C CSS2 test suite is harder to use
than the Acid test.  Anyone with even basic skills can point their
browser at the Acid test and see the result, whereas more experience is
needed to use the CSS2 test suite and the like.  Note that the W3C test
suites are older than the Acid test; the Acid test just makes the
problems more visible.

> 5) What are the essential things that we need to bring along into a TC
> deliverable to get the benefits we want?  How do we define this in the
> charter?  I think saying "ODF Acid Test" might mean different things to
> different people.

I know that not doing "name and shame" is a high goal.  Unfortunately,
everyone is human: when there is no risk of harm, the temptation to
breach the standard increases.

The simpler a test is to use, the better, and the more publicly it can
be used, the better.  That is the one thing the Acid test has taught us:
fear of being caught doing the wrong thing is key to preventing a
standard from being made useless.

The producer of the Acid test should never take part in naming and
shaming itself; it should just provide the means if that becomes
required.  Of course, there will be very little point in naming and
shaming if everyone follows the standard and provides updates for any
glitches found; anyone who tried would only appear judgmental and hurt
their own credibility.  It is required only if companies get the idea
that they can break the standard, cause problems, and get away with it.

For evaluating which software a company will use to create ODF
documents, simple testing of the application is also needed.  The number
of steps to get an answer is critical.  The Acid test has two basic
steps: run the test, then compare the result to the reference image; if
they do not match, the application is not implementing the standard
correctly.
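
As a rough sketch of how simple that comparison step can be kept
(assuming Pillow is installed; the file names are placeholders),
something like this is all an end user would need behind a "does it
match?" button:

# Two-step check: render the test document (rendering is out of scope
# here), then compare the screenshot against the reference image.
from PIL import Image, ImageChops

def matches_reference(screenshot_path, reference_path):
    """True if the rendered output is pixel-identical to the reference."""
    got = Image.open(screenshot_path).convert("RGB")
    want = Image.open(reference_path).convert("RGB")
    if got.size != want.size:
        return False
    # The difference image is entirely black (getbbox() is None) only
    # when the two images are identical.
    return ImageChops.difference(got, want).getbbox() is None

if __name__ == "__main__":
    ok = matches_reference("rendered.png", "reference.png")
    print("matches reference" if ok else "does NOT match reference")

In practice you would allow for anti-aliasing differences rather than
demand pixel-identical output, but the shape of the check stays that
simple.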

Developers, of course, would love more information, and that is the
issue.  We need something that is part Acid test, part W3C-style
conformance suite; as one test if possible, as two tests at worst.  It
is hard to reconcile what developers and end users need from a test, and
that is the biggest reason why purely developer-created test cases fail:
end users either don't understand them or find it takes too much of
their time to get the information they need, i.e. is this application
conformant or not, and which application is closer to the standard.  Why
something is not to standard is not normally an end-user issue, though
the developer of course wants to know.  Acid3, for instance, goes a step
further and reports how many sub-tests passed.
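
To show what I mean by covering both audiences in one test, here is a
rough Python sketch (the individual checks are placeholders I made up,
not real ODF test cases): a single score and verdict for the end user,
plus a per-test breakdown for the developer.

# Harness sketch: one pass/fail verdict and Acid3-style score for end
# users, optional per-test detail for developers.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ConformanceTest:
    name: str
    run: Callable[[], bool]   # returns True when the implementation conforms

def report(tests: List[ConformanceTest], verbose: bool = False) -> None:
    results = [(t.name, t.run()) for t in tests]
    passed = sum(1 for _, ok in results if ok)
    # End-user view: one number, one verdict.
    print(f"{passed}/{len(results)} tests passed")
    print("CONFORMANT" if passed == len(results) else "NOT conformant")
    # Developer view: which tests failed.
    if verbose:
        for name, ok in results:
            print(f"  [{'ok' if ok else 'FAIL'}] {name}")

if __name__ == "__main__":
    suite = [
        ConformanceTest("formula: SUM over empty range", lambda: True),
        ConformanceTest("no foreign namespaces in content.xml", lambda: False),
    ]
    report(suite, verbose=True)

The end user runs it with verbose off and gets the Acid-style number;
the developer runs the same suite with verbose on and sees exactly which
cases need fixing.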

The key is that all sides of this problem get covered, starting from
where the tests will actually need to be used, so that they are
effective.

Peter Dolding

