Subject: Re: [oiic-formation-discuss] Deliverable: odf-diff?
--- On Mon, 6/23/08, Radoslav Dejanović <email@example.com> wrote:

> From: Radoslav Dejanović <firstname.lastname@example.org>
> Subject: Re: [oiic-formation-discuss] Deliverable: odf-diff?
> To: email@example.com
> Date: Monday, June 23, 2008, 4:38 PM
>
> firstname.lastname@example.org wrote:
> >> In that case, test results would be:
> >>
> >> a) not implemented - the application does not implement that clause
> >>
> >> b) pass/fail - the application does implement that clause, but it
> >> does or does not fully follow that clause specification.
> >>
> >
> > There are really multiple levels here. We must keep them straight.
> >
> > First there is the feature level. Some features are optional, some
> > are mandatory. Every ODF document must be valid to the ODF schema.
> > This is a mandatory requirement. But support for spreadsheet
> > formulas is optional. For example, if you are writing a word
> > processor, then spreadsheet formulas would not be implemented.
> > Similarly, a very simple spreadsheet might not have charts.
> >
> > Then there is conformance at the level of a feature. If you
> > implement a particular feature, such as the Zip packaging model,
> > then some things are required and some things are optional.
>
> That's correct, but what I wanted to say is that the above might be
> the rule for doing any conformance test with any clause, mandatory or
> not.
>
> For example:
>
> MyOwnTextProcessor tests:
>
> - schema_validity: pass
> - hyperlink_metadata: pass
> - text_functions: pass
> - formula: not implemented
> - embed_multimedia_mpeg2: fail
>
> In that case, the first is a must for the application to be considered
> ODF conformant; the next two are important features for a text
> processor that might not be included in the schema, and they are
> supported; formula support is not implemented; and the mpeg2 container
> is implemented but doesn't pass the standards test (for whatever
> reason).
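To make that three-valued report concrete, here is a minimal sketch of how such per-clause results might be modeled and judged. Everything in it (the Result type, the clause names, the idea that schema validity is the only mandatory clause in this example) is illustrative, not anything taken from the ODF spec:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_IMPLEMENTED = "not implemented"

# Hypothetical report for MyOwnTextProcessor, mirroring the list above.
report = {
    "schema_validity": Result.PASS,
    "hyperlink_metadata": Result.PASS,
    "text_functions": Result.PASS,
    "formula": Result.NOT_IMPLEMENTED,
    "embed_multimedia_mpeg2": Result.FAIL,
}

# Assumed for this sketch: schema validity is the only "must" clause.
MANDATORY = {"schema_validity"}

def is_conformant(report):
    """Mandatory clauses must pass; optional clauses may be unimplemented,
    but a 'fail' on an implemented optional clause still shows in the report."""
    return all(report.get(clause) is Result.PASS for clause in MANDATORY)
```

With the sample report, `is_conformant(report)` comes out true even though the mpeg2 test fails, which is exactly the point: conformance and per-feature test results are separate questions.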
>
> That same set of conformance tests will produce different results on
> different products; the point is that the tests are uniform, and that
> the report gives us a clear picture of things that work, things that
> do not work, and things that are not implemented.
>
> Or am I missing something?

Yes, and I want to add more examples to the description.

In general, we might have many profiles within the std. We might also have third-party accreditation orgs determine what is or isn't important. They can create profiles they wish to certify. Even a single profile might have a checklist of features implemented or not (e.g., a list of the "shoulds").

Rob also mentioned that testers and other third parties would come up with the equivalent of "shoulds" or best practices that would not (yet, perhaps) be enshrined in the standard [providing "notes"]. They would want to build tests with their own interpretations of P/F/Other.

Now, imagine a comprehensive test suite that includes an overview report producing stats on what passes/fails/other. Through some number of tests (some simple, some complex, some opposite tests), it would list the results in a form that could then be processed to state what profiles were achieved, what features are supported, what equivalencies/canonical forms are supported, etc. It can even have results per line item that are much more complex than binary P/F.

There are potentially many flavors of profiles. Some will be standardized and others will not. So when dealing with test specs, recommendations, etc., the proposed TC, as well as the ODF TC, should keep in mind the many types of audiences and the many different ways the items of the spec could be tested by others for different purposes. Flexibility, in other words, would be a key word here.
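To sketch what "processed to state what profiles were achieved" might look like: a profile could simply name the clauses it requires to pass, and the overview report could tally results. Again, a rough illustration only; the profile names and required-clause sets here are invented, and a real profile mechanism would likely be richer than binary P/F:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_IMPLEMENTED = "not implemented"

# Sample report, as in the earlier example.
report = {
    "schema_validity": Result.PASS,
    "hyperlink_metadata": Result.PASS,
    "text_functions": Result.PASS,
    "formula": Result.NOT_IMPLEMENTED,
    "embed_multimedia_mpeg2": Result.FAIL,
}

# Hypothetical profiles: each maps to the clauses it requires to pass.
# These could come from the std itself or from a third-party accreditor.
PROFILES = {
    "core": {"schema_validity"},
    "text_processing": {"schema_validity", "hyperlink_metadata", "text_functions"},
    "multimedia": {"schema_validity", "embed_multimedia_mpeg2"},
}

def achieved_profiles(report):
    """Profiles whose required clauses all passed in this report."""
    return sorted(
        name for name, required in PROFILES.items()
        if all(report.get(clause) is Result.PASS for clause in required)
    )

def summary(report):
    """Overview stats: how many clauses pass / fail / are not implemented."""
    counts = {result: 0 for result in Result}
    for result in report.values():
        counts[result] += 1
    return counts
```

The same uniform tests, run against a different product, would yield a different report dict, and hence a different set of achieved profiles, which is the "clear picture" being asked for above.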