
Subject: RE: [office] Defect severity

1. I have been mulling on these and there are a couple of cases that I don't
know how to qualify.  

   1.1 Also, the dependence on subjective assessment seems to be a problem
for some of these.

   1.2 In addition, in practical situations, it seems to me the problem is
not about fixing the standard quickly but getting implementations that may
exhibit the problem to come up with important fixes by some sort of
pre-standard agreement that can be rolled into the standard as needed.
Putting together a coalition around a remedy, for something with
criticality, seems far more likely and relevant than cranking out a rapid
correction to the standard itself.

2. (3) is easy.  (1) is pretty easy too, since there is likely to be a
pretty good objective demonstration of the defect.  Correcting or clarifying
the specification will matter for (1), but agreeing on the problem and its
repair for documents and implementations "in the wild" will have more
practical effect.

3. (0) is difficult. 
   3.1 I am aware of at least two security exposures around the use of
encryption and digest algorithms, but I would not put them in (0).  They
demonstrate the absence of a worked-through threat model more than
representing an immediate public-safety matter.  
   3.2 In some cases, lack of accessibility provisions comes under civil
rights protections and also statutory requirements in certain domains of
use.  I still bet these are in the (2-3) level, in some sense, depending on
any claims of scope, and to the extent that it is essential that provisions
be made in the format itself.
   3.3 Actually, the definition, for packages, of the *required*
manifest:checksum attribute is a security disaster, since it basically allows
the file to be decrypted without knowing the passphrase from which the
digest is computed.  (See 5.5, below.  Even when the key generation
algorithm is chosen to use a different digest method, presence of the SHA1
digest considerably simplifies plaintext attack on the passphrase in order
to crack the document.)  It intrigues me even more that this schema-required
attribute does not appear in the sample manifest and, I trust, not in any
actually-encrypted files.  That the provision has still not been removed
says something, but I am unsure what.  
   3.4 [STOP THE PRESSES: I was so curious about manifest:checksum that I
used the OO.o save-with-password option on the 1.2 Package draft itself.
Surprise, surprise, not only are manifest:checksum attributes present, their
values are different for each encrypted item, even though I only entered one
pass phrase and the same digest algorithm is identified each time.  This is
clearly not reproducible from any information provided in the specification
and anything understood about the "intent."] 

4. For (4)-(5) there is not much to quarrel with, although I think there is
an issue around ambiguity and understandability (especially in the
face of international translations), so we need to be careful about "trivial
for whom?" 
   4.1  I also think we should not collapse skilled-in-the-art and
skilled-in-language and also skilled-in-understanding-specifications.  
   4.2  It's my experience that those who are skilled-in-the-art of
specification with regard to a particular subject matter have a heightened
sensitivity to ambiguity and under-specification as well as sharpened
recognition of careless use of their specialty nomenclature and concepts.
If we are not those experts, we should not be saying "oh, the ones skilled
in the art will understand this precisely."
   4.3 Furthermore, a specification for one audience that depends on skills
of an allied art needs to make that dependency very clear so that (1) those
not so specialized are led to the required resources (2) those not so
specialized are led to recognize that they require specialized knowledge,
and (3) those who are so specialized recognize that the dependency is
present.

5. (2) is the most problematic for me.  The difficulty I see is over how one
ascertains intention on the parts of the standard writers and whether tacit
understanding is a reliable quality.  
   5.1 I suggest that any time it is necessary to inspect implementations,
and the standard does not say implementation-determined, that should be an
automatic (2).  
   5.2 Also, when a comment claims that the intention is not apparent, I
think we should accept that statement on face value, at least provisionally,
until there is enough deeper analysis to understand what the disconnect is.
There is probably some remedy required, even if it is not the one the
commenter suggests. 
   5.3 For (2), the lack of a measurable quality is particularly
troublesome.  I don't think "likelihood of misapplication" is any help here.
What does it mean to "misapply" a standard?  Is that the appropriate
question?
   5.4 Here's an example.  There are places where the specification provides
for digest algorithms to be applied in the handling of pass phrases that
are used for various document-protection purposes.  The referenced
specifications for those digest algorithms (to the extent references are
provided) simply describe the input as a string of bits in storage.
Clearly, for the algorithm to work in a given application, there needs to be
an agreement on how that string of bits is coded before the digest algorithm
is applied, lest the result not be reproducible.  Nowhere in the ODF
specifications is this missing piece of agreement specified.  (I suspect the
same is true for many other applications that appeal to the digest algorithm
specifications.)  In an international setting, this is a particularly
troubling situation, since there is a very big difference even when assuming
single-character octets (e.g., 7-bit ASCII versus ISO 8859-1) or
multi-byte/double-byte characters (coded in SHIFT-JIS versus UTF-8 or UTF-16
or UTF-32).  There are, for complex languages, normalization rules that must
be applied as well to ensure that the "same" composed/combined characters
lead to the same encoding before the digest algorithm is applied.  Finally,
there are white-space rules and other rules to be considered in order to
have the digest be reproducible among different products working with the
same document, with different default language settings, platform defaults,
etc., etc.  I'd love to see how the intent is to be understood here.   
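The reproducibility problem in 5.4 is easy to demonstrate. The sketch below (SHA1 chosen only for illustration) digests the "same" pass phrase under different byte encodings and different Unicode normalization forms; every choice the specification leaves open changes the result:

```python
import hashlib
import unicodedata

def phrase_digest(text, encoding, form=None):
    # Digest a pass phrase after optional Unicode normalization and a
    # chosen byte encoding -- exactly the steps left unspecified.
    if form:
        text = unicodedata.normalize(form, text)
    return hashlib.sha1(text.encode(encoding)).hexdigest()

phrase = "caf\u00e9"          # "café" with a precomposed é
decomposed = "cafe\u0301"     # "café" as e + combining acute accent

# Same characters, different byte encodings -> different digests.
print(phrase_digest(phrase, "utf-8") == phrase_digest(phrase, "utf-16-le"))  # False
print(phrase_digest(phrase, "utf-8") == phrase_digest(phrase, "latin-1"))    # False

# Same visible text, different composition -> different digests...
print(phrase_digest(phrase, "utf-8") == phrase_digest(decomposed, "utf-8"))  # False
# ...unless both sides agree to normalize first (NFC here).
print(phrase_digest(phrase, "utf-8", "NFC")
      == phrase_digest(decomposed, "utf-8", "NFC"))                          # True
```

Two conforming products that pick different answers to any of these open questions will derive different digests from the same pass phrase and fail to interoperate.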
   5.5 Here's another on the same pet topic. The ODF-specific encryption
technique for package items applies a key derivation and encryption process
to each encrypted item in the package, and the manifest entry for each such
item contains the necessary parameters by which the key derivation can be
repeated and decryption accomplished.  Nowhere is it required that all of
these encryptions start with the same pass phrase.  In fact, it is
conceivable that a different pass phrase is used for each of the
encryptions.  It is highly unlikely that is the case in any current
implementations, since it makes no sense to suppose that a user who
requests encryption and supplies a pass phrase (of the kind wondered about
in 5.4, above) has any idea which parts of the ODF package will be encrypted
as a result, and a user who knows, or has been given, that pass phrase
certainly doesn't know which parts of the package have been encrypted.  I
would claim, in this case, after having invested considerable study to this
part of the specification and the sample manifest (the only place to learn
certain things), but without even bothering to try it with OO.o, that there
is a single pass phrase and it (appropriately-digested) is used as part of
the key-generation procedure for each encrypted item in the package.  You
could convince me that this single pass-phrase, symmetric key-derivation
procedure is "the way the standard is intended."  I'm not sure what the
value of leaving this underspecified is other than to make developers work
harder to figure out what only makes sense, after verifying that with an
implementation or two.  [Addendum: I have checked this with OO.o and have
found a baffling aspect to it, noted in 3.4, above.]
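The single-pass-phrase, per-item derivation claimed in 5.5 can be sketched as follows. The parameter choices (SHA1 start key, PBKDF2 with 1024 iterations, 16-byte keys) are assumptions for illustration, not quotations from the specification:

```python
import hashlib

def item_key(passphrase, salt, iterations=1024, key_len=16):
    # Sketch of an ODF-style derivation: start key = SHA1 of the encoded
    # pass phrase, then PBKDF2-HMAC-SHA1 with a per-item salt.  Encoding
    # and parameters are assumed here, not normative.
    start = hashlib.sha1(passphrase.encode("utf-8")).digest()
    return hashlib.pbkdf2_hmac("sha1", start, salt, iterations, key_len)

# One pass phrase, but a distinct random salt per encrypted item...
k1 = item_key("one phrase", b"salt-for-content.xml")
k2 = item_key("one phrase", b"salt-for-styles.xml")
print(k1 != k2)   # True: per-item parameters yield per-item keys

# ...and re-running with the same parameters reproduces the key, which is
# what lets a recipient repeat the derivation and decrypt.
print(item_key("one phrase", b"salt-for-content.xml") == k1)  # True
```

Under this reading, per-item values in the manifest would differ even though a single pass phrase was entered, because each item carries its own salt and derived key.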

 - Dennis

-----Original Message-----
From: robert_weir@us.ibm.com [mailto:robert_weir@us.ibm.com] 
Sent: Thursday, March 12, 2009 10:01
To: office@lists.oasis-open.org
Subject: [office] Defect severity

It is probably worth getting to a common understanding of defects and how 
much we're going to commit to resolving for ODF 1.2.  OASIS doesn't 
mandate any particular taxonomy for defects, and ISO/IEC has only a 
technical/editorial two-bucket classification system which is a bit too 
coarse for our needs.

I'll toss out this classification scheme for defects, a scale of 6 
severity levels:

0. Defects which could cause the standard to be used in ways which are 
unsafe to persons or the environment, or which violate civil or human 
rights.  For example, defects related to privacy, safety tolerance, 
encryption, etc. 

1. Defects whose existence would likely cause direct business or financial 
loss to users of the standard.  For example, spreadsheet financial 
functions which are defined incorrectly.

2. Defects which prevent the standard from being used in the way which it 
was intended.  This severity level requires a likelihood of misapplication 
of the standard, not merely a remote potential for misapplication. 

3. Defects which violate drafting requirements from OASIS or ISO/IEC.  For 
example, lack of a scope statement or misuse of control vocabulary.

4. De minimis defects, i.e., trivial defects, hardly worth fixing. 
Obviously, even the smallest defect related to health and safety must be 
given considerable regard.  However, a typographical error where the 
meaning is otherwise clear from the context may be considered trivial. 
Similarly, a grammatical error which does not change the meaning of a 
clause, or a terminology question where the meaning is clear and 
unambiguous to a person having ordinary skill in the art to which the 
clause most closely pertains (e.g., 2D vector graphics), may also be 
considered trivial.

5. Matters of personal style.  For example, a request to use "do not" 
rather than the contraction "don't".  These are opinions, but not defects.

Obviously the above are not rigid mathematical definitions.  Most are 
judgement calls.  We are fortunate to have so many ODF implementors on the 
TC to help us accurately evaluate defects and set appropriate severity 
levels.

If something like the above meets with the TC's approval, then we could 
elaborate on the definitions and agree to apply them in our decision 
making.

For example, we could say that defects reported on published ODF standards 
would be processed as follows:

Severity 0-1 would trigger the TC to immediately prepare an errata 
document.
Severity 2 would be resolved in the next-scheduled errata document.
Severity 3-4 will be resolved in the next technical revision of the ODF 
standard, i.e., pushed into the next version.
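The proposed dispositions amount to a simple lookup; the sketch below is illustrative only, and the severity-5 entry is inferred from the description above ("opinions, but not defects"), since the stated rules stop at 4:

```python
# Sketch of the proposed processing rules.  The severity-5 disposition is
# an inference, not part of the stated rules.
DISPOSITION = {
    0: "immediate errata document",
    1: "immediate errata document",
    2: "next-scheduled errata document",
    3: "next technical revision of the standard",
    4: "next technical revision of the standard",
    5: "no action (opinion, not a defect)",
}

print(DISPOSITION[2])   # next-scheduled errata document
```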

We could also agree to treat these severity levels differently depending on 
where we are in the drafting process for ODF 1.2.  For example, once we 
send out for public review, we might commit to fix all defects of severity 
0-3, but not 4 & 5.

Let me know if this is a useful model for thinking of defects.

