OASIS Mailing List Archives

xcbf message


Subject: Re: [xcbf] X9.84 question - Representation of dates/times

John Larmouth wrote:
> Phil Griffin wrote:
> >
> > John,
> >
> > John Larmouth wrote:
> > >
> > > This is a difficult discussion.
> > >
> > > A simple starting point is to have a digital signature apply to an
> > > encoding (any encoding).  If the encoding is changed, the signature is
> > > invalid.
> > >
> > > The second stage is to say that the digital signature applies to the
> > > abstract values, and that it should NOT be invalidated by encoding
> > > changes (a false negative problem).  (This is broadly where we are at
> > > today with DER, CXER, CPER, etc.)
> >
> > You mean here I think that there is one and
> > only one way to encode the abstract values,
> > so no encoding options are available, and
> > no changes to the original encoding can
> > occur.
> No, I mean that the digital signature is based on a canonical encoding.
> If the signature does not validate the actual bits received, then they
> have to be decoded and re-encoded with the canonical encoding rules

This is really an implementation issue in my opinion.
Am I to risk a denial of service attack, or even just
the overhead of decoding, re-encoding, then decoding again
every bad message that I receive? I think that how to
handle bad messages is best left to the application.

> before you can be sure there has been tampering.  (Invalidity of the
> signature.)

Failure of the signature to validate is, by itself,
insufficient to assert that there has been tampering.
If an incorrect encoding is signed, re-encoding the
same abstract values correctly will produce different
bits, and so an invalid signature. This is a benefit
of never decoding and attempting to reconstruct the
original message, but instead preserving whatever
encoding was initially signed and sent.
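The two stages above can be sketched in a few lines. This is a toy model, not X9.84: HMAC stands in for a real signature, and stripping a trailing padding byte stands in for a full decode/re-encode under canonical rules. All names here are illustrative.

```python
import hmac
import hashlib

KEY = b"shared-secret"  # stand-in for a real signing key


def sign(encoding: bytes) -> bytes:
    """Sign the exact bytes of an encoding (HMAC as a toy signature)."""
    return hmac.new(KEY, encoding, hashlib.sha256).digest()


def verify_as_received(encoding: bytes, sig: bytes) -> bool:
    """Stage 1: the signature covers the bytes as received;
    any change to the encoding invalidates it."""
    return hmac.compare_digest(sign(encoding), sig)


def canonicalize(encoding: bytes) -> bytes:
    """Hypothetical decode + re-encode under canonical rules.
    Modeled here as stripping trailing zero padding bytes."""
    return encoding.rstrip(b"\x00")


def verify_canonical(encoding: bytes, sig: bytes) -> bool:
    """Stage 2: re-encode canonically before checking, so a harmless
    re-encoding in transit does not produce a false negative."""
    return hmac.compare_digest(sign(canonicalize(encoding)), sig)


original = b"\x30\x03\x02\x01\x05"  # sender signs a canonical encoding
sig = sign(original)
relayed = original + b"\x00"        # a relay re-encodes with padding

assert not verify_as_received(relayed, sig)  # stage 1: false negative
assert verify_canonical(relayed, sig)        # stage 2: still valid
```

Note the cost the text complains about: stage 2 forces every receiver to decode and re-encode each message before the signature check can even fail.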

> This process avoids the false negatives that might occur due to changes
> in the encoding that do not change the abstract values.
> *** This is the only reason we have canonical encoding rules. ***
> > Note though John that even this does not save
> > us from poor implementation that leads to
> > invalid encodings.
> If someone generates an encoding that is invalid, it can never produce
> abstract values, and should never be accepted against a signature.

But in practice on the net this is not what happens. 
It is often a matter of trusting the signer. Folks
try to get along, and to be careful in what they
send and forgiving in what they receive. For example,
the dER vs DER issue was only recently resolved.

In some protocols, if the receiver trusts the signer
and the signature on a hex blob is valid, then it does
not matter if the encoding of the blob itself is correct.
But in others, everything must be perfect or the signed
object is treated as invalid.

> > > The third stage - which several e-mails recently have touched on (and
> > > yours has really focussed on) is to say that the signature should be
> > > applied to the semantics transferred, so it should not be invalidated
> > > (false negatives) by a change in the abstract syntax carrying the
> > > semantics (in this case additional or missing relative OID components).
> >
> > The proposal here to eliminate sender trailing zero
> > options has minor benefit. Receivers are still required
> > to assure that all values are reasonable for use. But to
> > either require or to recommend that trailing zeros be
> > omitted is easy to do in an application and it provides
> > direction for implementors.
> That is not the point.  The issue is whether a signature is validating
> the semantics or the encoding.  If you want to avoid false negatives

This can depend on the protocol requirements.

> against the semantics, you need to define a canonical abstract syntax -
> no trailing zero optional elements in your date relative OID, or all
> optional elements present.
> > But it wouldn't hurt to say that senders shall omit these
> > values and receivers shall be prepared to accept them.
> This is nothing to do with the discussion.  We are talking about when a
> signature is to be considered valid.

Not me. I'm talking about application semantics
and whether or not to bother restricting options.
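The "canonical abstract syntax" option John describes can be sketched as a normalization rule applied before signing or comparing. This is a hypothetical illustration of the trailing-zeros proposal for date/time relative OID components, not the actual X9.84 definition.

```python
def canonical_date_components(components):
    """Drop trailing zero-valued optional components, so two abstract
    values carrying the same date semantics compare (and sign) equal.
    Hypothetical rule modeled on the 'omit trailing zeros' proposal."""
    comps = list(components)
    while comps and comps[-1] == 0:
        comps.pop()
    return tuple(comps)


# Same date semantics, two different abstract values:
assert canonical_date_components((2002, 11, 5, 0, 0, 0)) == (2002, 11, 5)
assert canonical_date_components((2002, 11, 5)) == (2002, 11, 5)
# Non-zero components are never dropped:
assert canonical_date_components((2002, 11, 5, 13, 0, 0)) == (2002, 11, 5, 13)
```

Whether the normalization is a "shall" for senders or merely a receiver-side comparison rule is exactly the protocol decision under discussion.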
> > As I stated in this thread, if the receiver preserves the
> > encoding used in the signature process as sent, it does
> > not even matter if the encoding is not canonical, or
> > even whether it is encoded correctly.
> Of course.  That is my stage 1, and what Hoyt advocates.  In this case
> you do not need canonical encoding rules.  Not a position that ASN.1
> advocates would like to take!

Beyond any encoding rules, the protocol or the
application level can specify behaviors such as
deferred decoding of particular components. This
can sometimes eliminate the need for canonical
encoding rules.
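The preserve-and-defer approach can be sketched as a small wrapper: the received bytes are never re-encoded, the signature is checked over exactly those bytes, and decoding is postponed until the application needs the values. Everything here (the class, `toy_sign`) is illustrative, not from any xcbf specification.

```python
class SignedBlob:
    """Keep the encoding exactly as received; verify the signature over
    those preserved bytes and defer decoding until a value is needed."""

    def __init__(self, encoding: bytes, sig: bytes):
        self.encoding = encoding  # never re-encoded, never rewritten
        self.sig = sig
        self._decoded = None

    def verify(self, sign_fn) -> bool:
        # Validates the bytes as sent, so this check does not care
        # whether the encoding is canonical -- or even correct.
        return sign_fn(self.encoding) == self.sig

    def decode(self, decode_fn):
        # Deferred decoding: only pay the cost when the application asks.
        if self._decoded is None:
            self._decoded = decode_fn(self.encoding)
        return self._decoded


def toy_sign(enc: bytes) -> bytes:
    # Hypothetical signing function, purely for illustration.
    return bytes(b ^ 0x5A for b in enc)


blob = SignedBlob(b"\x04\x02ab", toy_sign(b"\x04\x02ab"))
assert blob.verify(toy_sign)                      # checked without decoding
assert blob.decode(lambda b: list(b)) == [4, 2, 97, 98]
```

This is the stage 1 position in the thread: with the original encoding preserved end to end, canonical encoding rules buy the verifier nothing.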

> John L

