[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Re: [oiic-formation-discuss] Acid Tests
--- robert_weir@us.ibm.com wrote:
> "Dave Pawson" <dave.pawson@gmail.com> wrote on 06/12/2008 01:57:00 PM:
> > 2008/6/12 <robert_weir@us.ibm.com>:
> > >
> > > It is not clear to me that a single ODF document can be comprehensive.
> > > In particular, there may be mutually-exclusive options at the document
> > > level. Spreadsheet grid is either on or off, presentations are in
> > > autoplay or they are not, etc.
> >
> > Ditto. Also automated seems ATM quite unlikely with ODF as it is.
> >
> > I switched to OK on 'document' once I found out it was for a browser
> > test -:(
> >
> > Even with 25 documents I think parameterization would be a chore.
>
> What may be doable, and more in line with the HTML Acid tests, is to forgo
> any claims that it is a comprehensive test. Instead, we might think of it
> like this:
>
> 1) TC conducts a study of the state of ODF interoperability today
> 2) TC identifies the 10, 20, 50 or whatever features that in practice are
> causing interoperability problems
> 3) TC creates a "State of ODF Interoperability" report, and accompanies it
> with an Acid-style test that exercises those 10, 20 or 50 features.
> 4) The press will pick up on this, but we should let others deal with the
> name and shame
> 5) When the state of implementation improves, then loop back to #1

I agree with making this a continual effort.

Concerning point 4...

> 4) The press will pick up on this, but we should let others deal with the
> name and shame

We (not sure who "we" is) should at all times be clear to the public about the significance of what is being accomplished with the work and results leading up to the report. The process should be formally recognized somewhere tightly associated with ODF and its conformance clause.

As for the other four points, I would like to find a way to tap into the "community" to identify, design, build, maintain, etc., acid and other types of tests on an ongoing basis.
This is valuable, practical, and desired (though I haven't formally polled), and it quiets claims that too elaborate or extensive a test suite would consume too many resources. In fact, forget the complaints: the more people helping, the greater the coverage, which has the side effect of *countering* efforts by vendors to *code to the test suites*. Tapping into the public raises a host of questions, which I will kindly let others ask.

To return to your point 4, I suppose third parties or the users/buyers themselves (or someone on their behalf) would set requirements like "'officially' failed tests must be passed within 60 days or else X." I think, though, that we should dive into this area to some extent and not leave it entirely in the hands of third parties. We are in a unique position to give guidance on the significance of one failed test versus another, versus many tests failed in series, independently, etc. For example, what happens when, before the 60 days are up, the failure coming due is fixed but some other regression is introduced? We can serve end users well by giving guidance (a recipe/algorithm, table, framework...) since we can and will build a formal testing framework and keep it categorized and organized (as mentioned, potentially tapping into the public for aid).

Alright, "we" might not be the right body to handle this. The point is that those involved most closely with organizing, designing, etc., the tests should offer these guidances, resources, and official interpretations to the public.

These situations, and others related to (acid or other) testing, should be tied to "conformance" officially, clearly, and precisely. Yes? No?