

Subject: RE: [emergency] Groups - EDIT of emergency-CAPv-1.1


On Fri, 2005-03-18 at 08:59 -0800, Art Botterell wrote:
> One of my concerns regards embedded devices that might not have a 
> convenient way to discover and retrieve the latest tables.  Likewise 

Well, that assumes they don't have the tables to start with, which
doesn't make sense. They need at least a reference point, and I can't
see anyone implementing this without placing defaults on the system.

In fact, non-embedded systems are more susceptible to these issues. For
embedded devices you have to over-engineer everything, because you may
not get the chance to fix it in the field.

I see two problems here. First, in the single-schema model these
devices simply won't work, period, when a change occurs. Second, it's a
grey area, because some embedded devices are capable of firmware updates.

Using tables forces the embedded developer to write code that looks at
each field and parses it against the table. Good coding principles
dictate that exceptions (new elements) have to be handled. Therefore
even the very first release of such a device would be more resilient to
any spec change.

As I have laid out before, completely reprogramming an embedded device's
parser is a hundred times more risky than implementing a mechanism that
updates ancillary tables. But let me be clear: this is not my
implementation issue - I've implemented it already. I have no issue with
the risk. I'm sure others will/do, though.
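To make that concrete, here is a minimal hypothetical sketch in C of the
table-driven approach. The names, values, and file path are purely
illustrative, not taken from any real CAP implementation: the value list
is plain data shipped with defaults, an unlisted value is a handled
condition rather than a failure, and an update rewrites the table
without touching the parser code.

    #include <stdio.h>
    #include <string.h>

    #define MAX_VALUES 32
    #define MAX_LEN    16

    /* Ancillary table: just data. Shipped with defaults, replaceable later.
     * (Illustrative subset of values only.) */
    static char categories[MAX_VALUES][MAX_LEN] = {
        "Geo", "Met", "Safety", "Fire", "Health"
    };
    static int num_categories = 5;

    /* An unknown value is an expected condition, not a parse failure. */
    static int category_known(const char *value)
    {
        int i;
        for (i = 0; i < num_categories; i++)
            if (strcmp(categories[i], value) == 0)
                return 1;
        return 0;
    }

    /* Table update: read a replacement value list from storage.
     * Only data changes; the parser binary stays exactly as it was. */
    static void load_categories(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return;                 /* no update available: keep defaults */
        num_categories = 0;
        while (num_categories < MAX_VALUES &&
               fgets(categories[num_categories], MAX_LEN, f)) {
            char *s = categories[num_categories];
            s[strcspn(s, "\r\n")] = '\0';   /* strip newline */
            if (s[0] != '\0')               /* skip blank lines */
                num_categories++;
        }
        fclose(f);
    }

    int main(void)
    {
        load_categories("/etc/cap/categories.txt");  /* hypothetical path */
        if (category_known("CBRNE"))   /* a value the firmware never saw */
            printf("known category\n");
        else
            printf("unlisted category - flag for review, don't crash\n");
        return 0;
    }

Swapping in a new enumeration is then a data write to one file, as
opposed to reflashing and revalidating the whole parser.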

> the problem of "reprogramming" the human interpreters of this 
> information; people in the loop may not be as adaptable as computers 
> to sudden or frequent changes in the semantics.  And we're all aware 

Who said things were going to change suddenly or frequently? It's more a
case of 'in the event of a change, which model would work better?'

> In short, again, I'm concerned that it may be possible to make 
> changing the enumerations TOO convenient and thus, in effect, to lose 
> effective control of the standard.  Our current practice of 

Which is why you make the enumerations part of the standard and stick
them in it, just as you do with the core schema. To that you can add a
list of must-dos and caveats (be it on your own head for
interoperability if you modify these lists, etc.). This needs to be
stated firmly in the document, and people need to be discouraged from
modifying the lists, just as they are currently discouraged from
modifying the schema. In fact, the best approach is to not even tell
people that they have the option to modify the list.

> considering enumeration changes through the standards process, 
> authorizing them only in the course of standard update cycles, and 
> expressing them within a single normative schema, still strikes me 
> both as adequate for handing any reasonable rate of change, and as a 
> useful protection against unmanaged or excessive changes that could 
> adversely affect the usability and utility of the standard.

OK, so there are two approaches.

1. Monolithic core schema. As the schema grows, everything is added to
it. Eventually the schema becomes so large that people get lost just
trying to figure it out, because they can't see the forest (the core
structure) for the trees (the enumerated lists). I call this the 'save
all your files in C:\' or 'I only need main() with gotos' approach (a
rough contrast is sketched after this list).
2. Core schema plus ancillary tables. Schema growth does not impact
complexity, and table growth does not impact core schema complexity. The
schema can be implemented at the machine level with simplicity.
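For contrast, here is a hypothetical sketch of how approach 1 tends to
land on a device once the enumeration is welded into the single schema
and, from there, into the code. It is a drop-in replacement for the
category_known() in the earlier sketch (again purely illustrative, not
anyone's actual implementation): adding a value now means editing,
recompiling and reflashing the parser rather than rewriting a data table.

    /* Approach 1 at machine level: the enumeration baked into the parser.
     * Any new value means new firmware. */
    static int category_known(const char *value)
    {
        return strcmp(value, "Geo")    == 0 ||
               strcmp(value, "Met")    == 0 ||
               strcmp(value, "Safety") == 0 ||
               strcmp(value, "Fire")   == 0 ||
               strcmp(value, "Health") == 0;
    }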

Again I am beating the drum.

Cheers
Kon




