oiic-formation-discuss message


Subject: My perspective


First, an apology: I may cover ground that has already been covered on this 
list. I have been going through the archives and I'll try to avoid 
duplicating topics already covered.  At the same time, I realize that (like 
me) people may have arrived late.

This email turns out to be pretty long; I go over a lot of ground and 
introduce various new concepts.  Don't feel bad if you don't get them all, as 
I may refer back to some paragraphs I wrote here in later conversations ;)


An introduction:

My name is Thomas Zander, and I have been working on / with ODF since its 
inception; you will find mails from me in the initial discussions between 
KOffice and OpenOffice on making the file formats more similar (we started 
with a tar-based format in KOffice, and changing to zip was the first step ;)
I work on KOffice in my free time and am employed by Nokia, where I work 
on Qt Software, which has a big overlap with KOffice functionality.
I have in the past been employed to work on the ODF-Testsuite, and I have 
been a member of the ODF-TC for several years. My background and interest is 
not so much in standardisation; it's more in typography and in making 
software with a great user experience.

To give my perspective on how to do conformity checking, I first want to give 
my view on what ODF is.  This may sound strange, as we all know what ODF 
is, right? But I have realized that a lot of people have a very different, 
and often quite limited, set of use cases in mind when they think about ODF.
ODF was created for an office suite: applications that show you a text 
document or a spreadsheet and want to save it.  This is a valid use case, 
but a simple one.
More exciting use cases are things like:
* A website generates an .ods file for download. So if you have a website 
that gives you access to all your (or your company's) contacts, you can 
download a selection and use that in your spreadsheet, or in your word 
processor for a mail merge.

* ODF combines things like SVG and MathML, which means it can be used as a 
file format for clipart. That could mean your vector graphics get stored in 
ODF, but also a text snippet or just a logo. I'll let you come up with use 
cases yourself; there are plenty ;)

* Currently the format of choice for rich text is HTML. So when I copy/paste 
or send email, HTML is created. This is sub-optimal: HTML is a broken format 
on many levels and has various security problems as well. Much better would 
be to use ODF as a clipboard or email format. Use cases range from being able 
to copy/paste all the text you have on your desktop (all text entry fields) 
as ODF, so that simple annotations, and also things like bold, survive the 
copy/paste. Sending emails as ODF XML streams is something I think will 
happen within 5 years.

This short list of different fields of use shows that there are a lot of 
things to consider when looking into the issue of interoperability. For 
instance, do we require that copying text from a full-blown word processor 
and pasting it into a simple text field preserves bold/italic data, so that 
when I later copy that same text again and paste it into the rich-text editor 
of my email application the formatting survives?  The text field would not be 
able to show this data, so requiring that it can be copied out again later 
seems a bit odd.

So, the point I'm trying to make here is that if we want ODF to work across a 
large range of use cases, a simple metric of rendering or of preservation 
doesn't make much sense. It would likely just hamper uptake, since the most 
exciting use cases would not be able to claim ODF compliance.

What do I think we want to test? There are some lessons I learned from 
working on the ODF testsuite:
* I want to have a way to test each feature in the ODF specification, down to 
the level of each element and each value that you can give it.
What counts as a passing test should be separated into several points, 
since applications typically can pass one and fail another.  I suggest these 
points be something along the lines of:
  a) Loading the test data and displaying it on screen (correctly ;)
  b) Saving the loaded document out again without losing information.
  c) Having a GUI to alter the ODF feature to all the supported values.

As an example: an implementation could have a list-item type of "Arrow", but 
when it loads it, it silently converts the list type to a Unicode character, 
and on saving, the list type is no longer arrow but character.  So the first 
test would pass, but the second would not.
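
To make point b concrete, here is a minimal sketch in Python of how such a 
round-trip check could be mechanised. It assumes "before.odt" was loaded and 
re-saved as "after.odt" by the implementation under test, and that the list 
style lives in content.xml; the file names are illustrative only.

    import zipfile
    import xml.etree.ElementTree as ET

    TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

    def bullet_chars(odt_path):
        # Collect the bullet characters of all list styles in content.xml.
        with zipfile.ZipFile(odt_path) as odt:
            root = ET.fromstring(odt.read("content.xml"))
        return [b.get(f"{{{TEXT_NS}}}bullet-char")
                for b in root.iter(f"{{{TEXT_NS}}}list-level-style-bullet")]

    before = bullet_chars("before.odt")
    after = bullet_chars("after.odt")
    print("preserved" if before == after else f"changed: {before} -> {after}")

A test like this would catch exactly the silent arrow-to-character conversion 
described above, even when the document looks identical on screen.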

This brings up an important problem: when an implementation does not support 
a certain feature at all, does that mean loading a document and saving it out 
again will lose those features?  I think this is an important part of our 
interoperability question.
To answer this question, we have to make an important distinction between 
known ODF features and unknown ODF features.  A known feature is something 
that is detailed in the specification but that this implementation does not 
support, for instance because its text rendering engine is not powerful 
enough.
Completely separate from this is unknown metadata, or plain foreign tags.  
For example, an ODF implementation may add some new feature that is not (yet) 
supported by ODF and save it in its own namespace. This new feature is not 
possible for most other applications to support, but they may still save the 
tag out again.

So, if anyone asks whether an ODF application has round-trip preservation of 
properties, I want the first counter-question to be whether this is about 
known or foreign properties ;)
Each of those two categories should have the 3 questions (a, b & c) answered.
This matrix of checkboxes fully describes feature support.
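
As a rough sketch of how the known/foreign distinction could be detected 
mechanically, the following Python snippet sorts the element tags of a 
document into ODF and foreign namespaces. The namespace list is abbreviated 
here; a real checker would enumerate every namespace the specification 
defines.

    import zipfile
    import xml.etree.ElementTree as ET

    ODF_NAMESPACES = {
        "urn:oasis:names:tc:opendocument:xmlns:office:1.0",
        "urn:oasis:names:tc:opendocument:xmlns:text:1.0",
        "urn:oasis:names:tc:opendocument:xmlns:table:1.0",
        "urn:oasis:names:tc:opendocument:xmlns:style:1.0",
        "urn:oasis:names:tc:opendocument:xmlns:drawing:1.0",
        "urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0",
        # ... the remaining ODF namespaces go here
    }

    def foreign_tags(odt_path):
        # Return every element tag whose namespace is not one of ODF's own.
        with zipfile.ZipFile(odt_path) as odt:
            root = ET.fromstring(odt.read("content.xml"))
        foreign = set()
        for el in root.iter():
            ns = el.tag[1:].split("}")[0] if el.tag.startswith("{") else ""
            if ns not in ODF_NAMESPACES:
                foreign.add(el.tag)
        return foreign

    print(foreign_tags("sample.odt"))

Running this before and after a round trip tells you whether an 
implementation preserved, dropped, or mangled the foreign markup.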

For interoperability the above will get you a long way, but there are lots of 
implementation details that may not be covered by the feature matrix.  One 
good example is the basics of line breaking.  See 
http://www.kdedevelopers.org/node/2262 for some research I did on this topic 
in the past (sorry, image links broken).
The typographically correct (in the case of text), or otherwise correct, 
display of a certain concept warrants a separate set of tests.


Conformance testing: why? And how?

Up until now I have talked mostly about the concept of testing and what to 
test. I simply skipped over the 'why' question, which may be something 
people have not come up with a good answer to yet.
The simple answer is that testing is nothing more than part of the process of 
creating a good implementation.
The more complex answer is that it is good for interoperability: it creates 
something to aim for, and it means the more experienced people get to point 
out common pitfalls to the newcomers. But end users can also find out what 
support another implementation has, and that in and of itself means we enable 
market forces.  The best implementation will gain new users faster.

With several answers to the 'why', this gives us some insight into how to 
present the results.  I have heard people say we need profiles.  I personally 
think that profiles sound wrong; what I think is really important is 
feature groups.  Does implementation X support lists or tables very well?  If 
not, I won't check it out.  If I need to write my formula, I go for the app 
that supports 80% of the spec instead of the one with 20% support. Etc.
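
To illustrate, a per-feature-group report could be produced from the a/b/c 
matrix with a few lines of Python; the result data below is made up purely 
for illustration.

    from collections import defaultdict

    results = [  # (feature group, test name, passed?)
        ("lists",    "bullet-char load", True),
        ("lists",    "bullet-char save", False),
        ("tables",   "cell-span load",   True),
        ("formulas", "sum() load",       True),
        ("formulas", "sum() save",       True),
    ]

    totals = defaultdict(lambda: [0, 0])  # group -> [passed, total]
    for group, _test, passed in results:
        totals[group][0] += int(passed)
        totals[group][1] += 1

    for group, (passed, total) in sorted(totals.items()):
        print(f"{group}: {100 * passed // total}% ({passed}/{total})")

A user who cares about formulas can then compare implementations on that one 
line instead of on a single overall 'profile' label.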

After having gone over what to test, why to test and how to present the 
findings, there is the silly question of how to do it.  I think this is more 
something for future mails on the actually created TC, but I can at least 
point out my experience.
The easiest way is to create a set of little documents, but I am not 100% 
convinced it works (without modifications). The reason it didn't work for us 
is that we ended up with different people interpreting the tests on screen 
differently.  For example, a document which turned on a feature didn't load 
correctly in implementation A; one author came and said the test should be 
marked as passed because all the user had to do was go to the menu and 
manually turn on that feature. Naturally I disagreed; it was the loading I 
tested, and that didn't work.

Another approach I'm working on now seems to work pretty well, but is not 
really easy to set up.  I explain it here: 
http://labs.trolltech.com/blogs/2008/06/11/testing-typography/
Basically, you need a common set of documents, as before, but you 
additionally need a set of automated tests to judge the outcome.  It has to 
be automated so it can be run every week (or day!) instead of once and then 
never again.
The biggest problem with this is that for each implementation you need money 
to implement it, and someone who knows what he is talking about to actually 
approve the tests.
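
As a sketch of what the automated judging step could look like: assuming each 
implementation can already export a test page to a PNG (that render step is 
implementation-specific and not shown here), the comparison against a 
human-approved reference image is a few lines with the Pillow library. The 
file names and tolerance are illustrative.

    from PIL import Image, ImageChops

    def matches_reference(candidate_png, reference_png, tolerance=0):
        # True if no pixel channel differs from the approved reference
        # by more than `tolerance`.
        a = Image.open(candidate_png).convert("RGB")
        b = Image.open(reference_png).convert("RGB")
        if a.size != b.size:
            return False
        diff = ImageChops.difference(a, b)
        return max(high for _low, high in diff.getextrema()) <= tolerance

    print(matches_reference("impl_output.png", "reference.png", tolerance=8))

The expensive part is not this comparison but getting a domain expert to 
approve the reference images in the first place, which is exactly the money 
problem mentioned above.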

So, bottom line: there is no silver bullet, and any progress in this area is 
welcome. I've been working for some two years on making sure the KOffice2 
implementation will be the best ODF implementation there is, and I do realize 
that conformance testing is an essential part of that process.

Thanks for reading to the end; flames and thank-you notes welcome :)
-- 
Thomas Zander
