I'll
resort to general principles in a moment, but first let me argue from a
particular case.
1. I
keep trying to bias the argument so that we say that we need to be able to
support independent pools of values for newly-introduced attributes. Alas,
having written the next paragraph, I don't see a necessity for splitting the
value set. Each value is not tied to any particular attribute until it is used
as a value for that attribute. So it is the attribute-value pair that has
significance, and the semantics of that pairing is what we should be trying to
preserve.
Specifically, we should add to our user input the
offline input of Deborah Pickett of Moldflow, who asks for multiple "independent
axes" of conditionalization. In her example, the operating system of the client
and the operating system of the license server may be independent. Her example
may not be sufficient to require us to split the value pool, provided
that the mechanism we adopt is able to simultaneously specify two
desired matches: a match of a value in an attribute "os-client" and a match of a value in
a second attribute "os-license-server". The fear I had was that during
generalization, the separate attributes would collide, but ... (a) in the XSLT
discussion, we seem to be proposing a syntax that will collapse the
separate values into a single generalized attribute while retaining enough
information about their origin to distinguish them and (b) because the separate
attributes are being collapsed into a single attribute, there is no risk of a
collision of attributes in the generalized XML as a result of using the two
specialized attributes simultaneously.
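To make the worry and its resolution concrete, here is a small sketch (Python; the `attr(value)` generalized notation is an assumption based on the syntax being discussed in the XSLT thread, not settled DITA syntax) showing two independent axes collapsing into one generalized attribute while their origins stay distinguishable:

```python
import re

def parse_generalized(props):
    """Parse a generalized props value of the assumed form
    'attr1(v1 v2) attr2(v3)' into {attribute: [values]}.
    The name in front of each parenthesized group records which
    specialized attribute the values came from."""
    return {name: values.split()
            for name, values in re.findall(r"([\w-]+)\(([^)]*)\)", props)}

# Two independent axes collapsed into a single generalized attribute:
generalized = parse_generalized("os-client(linux) os-license-server(windows)")

# The origins remain distinguishable, so a processor can require a
# simultaneous match on both axes without the value pools colliding:
assert generalized["os-client"] == ["linux"]
assert generalized["os-license-server"] == ["windows"]
```

Because each value group carries the name of the attribute it came from, a match against "os-client" can never be confused with a match against "os-license-server".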
2.
Returning to the general principles, Erik Hennum has now published a more
extensive explanation that takes an integrated point of view of both element and
attribute specialization. This deserves a separate response.
3.
Looking back at the discussion between Dana Spradley and Michael Priestley, there
seem to be a few main causes for the divergence in their points of
view.
3a.
The most fundamental difference, in my view, is between a denotational and an
operational view of semantics. Michael's success criterion, in the end, has
been primarily operational. His court of last resort is the evaluator of
conditional attribute settings. If the evaluator can use the data to do what we
judge to be the right thing, then the language design is adequate. Some of
Dana's requests have been more denotational. How can we tell that the set of
language features that we are proposing is necessary and sufficient? Do we have
an agreed-upon model consisting of objects (elements and attributes) and their
relationships (specialization of various kinds) that allows us to interpret what each
expression means (denotation), and do we agree on what the method of
interpretation should be?
These
approaches have some junction points and common language. The evaluator is an
implementation of an interpretation. But the denotational meaning of
"interpretation" includes the entire mapping, whereas the evaluator focuses
primarily on making sure we know what to do with values of attributes. The
various notions of specialization also constitute common language, since we can
agree on what we mean by each of those notions.
Our
closest approach so far to the development of a model theory for the joint
problem of specializing elements and attributes is in the line of discussion
that Erik Hennum provides. That discussion contains rules that are derived from
a more or less explicit model (need to check the Extreme paper). In language
design, model theory is somewhat of a gold standard. If your language is based
on a model, then you can be confident that there is a single point of view from
which all the constructs make sense. In our discussions, we use the word model,
but I don't think we have attempted to construct one in the denotational sense.
We may not need to, if we can agree that our constructs are consistent, but if
we can't agree, then to progress, we may need to construct at least a
substantial subset of the model.
3b.
Aside from this methodological challenge, we also have a procedural one. We have
an increasingly-detailed proposal for how to offer new attributes using a
specialization-like mechanism. We don't have the equivalent for an alternative,
so we cannot compare.
If we
are concerned with making a correct decision, we may need to explore "the road
not taken" a little bit to see where it leads. That is what Michael has been
raising as the high-cost (in terms of architecture time)
alternative.
For
example, suppose we had a syntax for declaring new attributes, but the values of
the new attributes were never lumped together into a single generalized
attribute. This would have the appealing effect of completely separating the
value spaces for the new attributes from one another and from the existing value
spaces. If the theory (in paragraphs 2 and 3 from the top above) is correct, we
don't need to achieve this separation. A second appealing effect would be to
reduce the overhead on the language customizer of specializing attributes. This
reduction may come anyway if Erik Hennum's program is adopted. A third effect, a
disadvantage, would be that there wouldn't be a systematic approach that
supports specialization for those who want it.
Looking down this road just that far, I don't see a
reason to pursue it. This despite my expectation going in that it would be a
productive road that we should pursue.
4. We
do have an early adopter effect that we should also acknowledge. We have wished
to make it the case that the DITA toolkit is not the sole authoritative
implementation of DITA. However, for those who are pursuing independent
implementations, the power of community development of ideas, together with the
rapid translation of those ideas into an implementation in the DITA toolkit,
makes it hard to maintain an alternative.
Best
wishes,
Bruce
Okay, until the
teleconference then Michael - since apparently you still misunderstand my main
point: I don't want to call it attribute specialization, but I do want to call
it element specialization.
Oh, but in the meantime, in looking back
over the email history of this issue I did find the voice of one real-world
potential user - Hedley Finger - to whom you replied 3/14/05 assuring him that
while customization was required for now, "attribute specialization" would
support the use case he outlines when it became available (my tendentious
emphases added):
My concern with the DITA %select-atts;
model is that only the attributes platform and audience are direct
equivalents to the conditions we currently use. The other eight or so will
all have to be shovelled into otherprops in an ugly and writer unfriendly
manner incompatible with Arbortext Epic or any other attrib-based
conditioning scheme.
What would be nice is for some way to be able to
specialize otherprops or some method of supporting arbitrary numbers of
conditional attributes. Interchangeability with other
processors is probably meaningless unless those processors could have
some data-driven way of interpreting specializations of otherprops or the
alternative mechanisms for supporting unlimited conditional
attributes.
In fact, considering the
needs of future external processors, you might consider a general
solution that would allow a processor to understand arbitrary conditional
attributes, perhaps with some standardized XML structure to describe
conditions and the truth values of the attribute values, e.g. "us;au;uk;nz".
Then otherprops would simply be a reference to this standard XML
structure, perhaps as an external file available to applications, e.g. help
browsers, accounting programs, log-on routines,
etc.
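For illustration only, the data-driven scheme Hedley sketches might look something like this (Python; the condition names and data structure are invented here, not part of any proposal):

```python
# Hypothetical, data-driven condition definitions of the kind Hedley
# describes: a standalone structure naming each conditional attribute
# and the values that are currently "true" for this build. Every name
# here is invented for illustration.
conditions = {
    "market": {"us", "au"},          # eg derived from "us;au;uk;nz"
    "audience": {"programmer"},
}

def element_included(attrs):
    """Keep an element unless one of its conditional attributes has no
    token in the 'true' set; attributes the condition data does not
    describe are ignored."""
    for name, value in attrs.items():
        if name in conditions and not set(value.split()) & conditions[name]:
            return False
    return True

assert element_included({"market": "us uk"})        # "us" is true
assert not element_included({"market": "uk nz"})    # nothing matches
```

The point of the sketch is that a processor needs nothing DITA-specific to evaluate arbitrary conditional attributes, so long as the condition data travels with the content.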
I hope the voices of such users
don't get lost among those of all the IBM DITA architects on the call. At
least as I hear them, they seem quite cavalier about whether this mechanism be
considered specialization or not, so long as it supports the addition of
unlimited "arbitrary conditional
attributes."
--Dana
Michael Priestley wrote:
If you don't want to call it
specialization, I finally get to legitimately use that reply to Chris that
you questioned, where I explained all the issues of creating a new mechanism
that has to implement all the same virtues as specialization without being
specialization.
I also question
the suggestion that there is no real-world demand for multiple-level
specialization. IBM does use DITA. I do represent IBM. My positions are
grounded in that reality.
I
think this discussion probably needs to be continued in the context of the
dedicated telecon, where hopefully we will hear a range of voices beyond the
two of ours. I do agree we are at loggerheads, and I don't see a way out of
the logjam without other contributing voices (hopefully an odd number of
them :-).
Michael
Priestley IBM DITA Architect and Classification Schema PDT Lead mpriestl@ca.ibm.com http://dita.xml.org/blog/25
Okay, thanks for clarifying that Michael: you do reject
Chris's compromise - or maybe everyone at IBM does.
Then I guess
we're at loggerheads - particularly because I see no need to call this
"specialization" in 1.1.
The first I can trace coming across this
was in an Indi Liepa email of 4/5/05: "Same issue as that raised by others
relating to how to extend DITA attribute set."
And as I recall until
very recently the proposal referred to "extensible attributes."
And
given the lack of real-world user demand for anything more, I see no danger
in adopting Chris's compromise solution in 1.1 - and much more danger adding
a notion of "attribute specialization" to the architecture before we fully
understand what we're getting ourselves into.
--Dana
Michael Priestley wrote:
>So far nothing you have said has contradicted that. If it
has, then maybe I haven't understood you. Let me know.
My issue with Chris's proposal
is that I'm not convinced we can call something specialization if it is
limited to one level, has no inheritance structure, and no generalized
processing support. And that if we allow this, and call it specialization,
we will be compromising the value of our general statements about
specialization. Sorry if that wasn't clear.
In addition, Chris's proposal
depends on the props attribute not being directly authorable, which makes me
concerned: it would mean that a base topic without domains could not be
conditionally processed, and I would regard conditional processing as
something that should be enabled even in the simplest case.
In terms of post-1.1
work, you already know what I consider to be essential aspects of
specialization (including multiple levels of inheritance, generalization
round-tripping, comparability of doctype differences eg for conref, and
processability of the generalized form). If you want to call it marsupial
inheritance, go ahead - the point is it works and is part of the current
DITA value proposition, so I would hope that any proposal you come up with
would respect those traits.
I'll defer to Erik to provide URLs for his
speculative work on potential futures for DITA specialization, it sounds
like you have a lot to talk about :-)
For 1.1, though, I'm hoping I've made my concerns
clear with respect to the dangers of a too-limited solution. Speaking at
least for IBM, we do see use cases for specializing more than one level, and
see some value in being able to identify the semantic relationship between
e.g. jobrole and audience, or operatingsystem and platform; and I do have
concerns about the impact on the architecture of not allowing that
expressiveness.
Yes, I do think I
understand your position, Michael - I guess I just disagree with it, as you
apparently disagree with mine.
But beyond these now hardened
positions, I think I'm trying to propose a compromise - or rather, second a
compromise Chris Wong has proposed - that would be a win-win for both of us:
you could continue to think of this as severely restricted attribute
specialization, I could consider it as severely limited element
specialization, and users would get what they wanted when they requested
this enhancement - nothing more - in a 1.1 timeframe.
So far nothing
you have said has contradicted that. If it has, then maybe I haven't
understood you. Let me know.
As for adding new attributes to elements
as a species of element specialization, I've been thinking about that a lot
the last few days, and maybe I should share with you more of where I'm
coming from architecturally - though please don't allow this to take the
limited discussion of what to do right now in 1.1 off track.
I've
lately been thinking that we should consider adding new attributes to
elements without changing the element's name or invoking a specialization
hierarchy as a kind of genetic mutation: highly adaptive in a particular
environment, but alas - incapable of being inherited by the element's
specialized children.
If you want to add attributes that will be
inherited by the element's progeny, then you need to change the name and
insert the element in its specialization ancestry the usual way.
The
fallback behaviour of an element that has new attributes is the behaviour of
that element before the attributes were added, in all cases. What else would
it be?
On the other hand, since DITA parents are so solicitous of
progeny that they allow, like some strange kind of uber marsupial, all their
specialized children - even the mutants - to crawl back into the pouch for
protection so they can undergo processing in a generalized form before being
respecialized back out again - we provide at least one - maybe only one -
universal attribute to preserve them inside the body of their
parents.
That at least is the most rigorous way I can combine the
metaphors that structure DITA and XML and have them make
sense.
--Dana
Michael Priestley wrote:
Our alternatives are:
- call it specialization, but change the meaning of specialization (ie for extension of universal attributes, specialization will be one-level only, and will require a virtual base attribute, and the base attribute cannot be used on its own - no fallthrough, no inheritance, no simultaneous existence of child and ancestor elements)
- don't call it specialization, and change the meaning of DITA (ie have a new extension mechanism, and recreate all of the specialization infrastructure around it)
- call it specialization, and enable multi-level inheritance and generalized value processing
I believe my stance has been completely
consistent - targeting a narrow scope for 1.1 that meets the architectural
definition of specialization for the specific kind of attribute extension we
know is highest priority. Identifying that narrow scope did require a lot of
hard thinking, and I'm definitely feeling some pressure from you to discard
that thinking, which I'm resisting because, as I've said repeatedly, I
believe the alternatives are worse.
Do you understand my position? I feel like
I'm repeating myself, and that probably means we're talking past each
other.
But specialization of
what...element or attributes?
If done as specialization of
attributes, I disagree: it *does* have major architectural implications -
not only in DITA, but in XML itself - since in the XML spec, as I've
previously said, attributes are always subordinate to a particular element,
and don't float free as things in their own right.
And at least as
I've been hearing the discussion, this enhancement has been pitched lately
as adding a new and unprecedented specialization method for attributes -
requiring you, as you've several times admitted, to think long and hard
about how that would make the most sense.
--Dana
Michael Priestley wrote:
1) I have never claimed this proposal is a general solution
for attribute specialization. The title has changed over time to try to
remove ambiguity ("metadata attributes" was the original title, but that
caused some confusion) but the core requirement has always been
extensibility of conditional processing attributes.
2) It has no major architectural
implications IF it is done as specialization, which at least in other
contexts has always implied unlimited levels and generalized
processability.
Fine. Then why
try to claim this proposal amounts to a full-fledged proposal for something
unprecedented in DITA - attribute specialization, something appropriate to
be unveiled in a major release - when it's really just a narrowly targeted
fix to allow for additional universal attributes to be added to elements -
in particular, conditional processing attributes - that has no major
architectural implications?
That's the kind of proposal that would be
appropriate for a point release like this.
Now when our users
requested this enhancement, did any of their requests include anything that
couldn't be handled by the compromise scope Chris Wong proposed? Was there
anything in these requests that would require the more extensive scope added
in the last couple of weeks?
If not, then why are we engineering
stuff into this feature that our users aren't even demanding
yet?
Perhaps a better approach would be to limit the enhancement to
Chris's scope for now - but provide a "possible future direction" note in
the spec that would explain and preserve the expanded scope - and ask any
users out there who need this further enhancement to contact us before the
next point release so we can further consider it?
If no one contacts
us, then we don't need to engineer this feature any further.
And yes
please, could you send me urls to Erik's presentations on general attribute
specialization? I'm also interested in seeing the most elegant solution
possible adopted for the general case, without undue
complication.
--Dana
Michael Priestley
wrote:
We actually did do
exploratory work on full attribute specialization, and there have been some
thought experiments undertaken by Erik Hennum on what a full solution might
look like - I'll defer to Erik to provide URLs to some of his
presentations.
But the very minimum that would be required to allow
per-element new attributes would be per-element tracking of new attributes,
which would mean a new universal attribute for tracking attribute ancestry -
effectively, adding something like the domains attribute to every element.
This would affect generalization, conref, and domain integration rules
substantially, in a way that the current much more limited proposal
avoids.
Also, keep in mind that the number one requirement for the
next release of DITA is not the ability to add arbitrary new attributes on a
per-element basis: it is the ability to define new conditional-processing
attributes. So I think we are addressing requirements in the order
prioritized for us by our users, as well as in the order that requires the
least architectural rework in a point-release of the standard.
True, Michael
- but currently in DITA specialization is something that applies to elements
alone, not attributes.
And I guess what I'm really resisting is the
attempt to use this feature to define a new kind of specialization for
attributes alone, before we really understand what we're doing.
As
Paul has repeatedly pointed out, in XML attributes are properties of
elements - they have no independent existence of their own.
An
attribute of the same name can, in XML, have a different datatype, and
different optionality, even a different list of enumerated values, depending
on the element with which it is defined.
The fact that we can use a
parameter entity to define a collection of universal attributes and put that
entity in the attlist of every element has, I think, started to blind us to
the fundamental architectural dependence of every attribute on the element
whose attlist defines it.
Now I admit that we've gone too far down this
road to get specialization of elements through new attributes by any other
method than the one we're pursuing here in a 1.1 timeframe - and as such I'm
happy to go along with the compromise scope Chris has proposed.
But
if we had it to do over, I think we would have been better off to enhance
element specialization by adding new per-element attributes first, before we
defined enhanced element specialization by adding new universal
attributes, as we are attempting to do now - in my most charitable
construction of the proposal.
--Dana
Michael Priestley wrote:
You're right, I'm shying at shadows. Chris is not proposing
to ditch specialization. But he is proposing to limit specialization in ways
that make me question whether it's still specialization:
- currently in DITA, specialization means any number of levels
- currently in DITA, generalization means creating a version of the content that conforms to the ancestor type declarations while preserving the processable semantics of the descendant declarations

If our conditional processing support doesn't meet these definitions, can we
still call it specialization?
We do have a design currently proposed that
allows any number of levels, and describes how to process the attributes in
their generalized form. And I don't think the argument that it compromises
WYSIWYGness is a strong one, given the edge-case status of someone directly
editing generalized content.
So I'm resisting increasing the scope,
because I think we're already stretched to the limit in what we can cover in
this feature for 1.1, but I'm also resisting decreasing the scope, inasmuch
as that compromises the existing published statements about specialization
and generalization.
I don't
follow Michael.
How does limiting the scope as Chris suggests amount
to "ditching specialization"?
It still provides a mechanism for new
conditional attributes through the props
attribute.
--Dana
Michael Priestley wrote:
If we introduce a new extension
mechanism that is not specialization, we will need to consider, among other
questions:
- how are the extended values preserved during generalization? are they even affected by generalization? if yes, isn't it specialization? if no, haven't we just broken our entire extensibility/interchange model?
- how is the use of these attributes signaled to processes that care about doctype differences, eg conref? or are they ignored? if ignored, how can we tell whether two topic types are truly content-model compatible? if not ignored, do we add the info to the domains attribute? if yes, isn't it specialization? if no, do we need another architectural attribute?
Specialization is designed to solve a whole
range of processing implications to reconcile customized doctypes that need
to be interchanged. If we ditch specialization for this case, those problems
get bigger, not smaller. If we ditch both specialization and stop caring
about the problems it solves, then we break most of the promises that have
been made about DITA in its charter, spec, etc.
As it says in the
proposal, this is for conditional processing attributes (which are
universal), and for arbitrary tokenized universal attributes. Our
requirement for 1.1, as ranked by both the TC and by public input, is to
provide a mechanism that allows new conditional attributes. We allowed in
the arbitrary tokenized universal attributes as an "if we enable it for
conditional attributes, the same logic will apply to other attributes that
have the same occurrence pattern and syntax, so it's free for that
case".
I am
honestly trying to solve as small a problem as possible, without breaking
DITA's basic architectural promises. That's why it's limited to only two
cases, that's why we ditched the scope and negative value use cases, that's
why I'm continuing to focus on attribute type specialization and not
attribute value specialization, but I don't think making the problem so
small that it excludes specialization is possible without the entire
solution becoming something other than DITA.
I agree with Chris's take on the
appropriate scope for 1.1.
While I admire Michael's desire to realize
the larger promise of specialization in this feature immediately, I think
that would be more appropriate in a 1.2 or even 2.0 timeframe, when we've
all had a chance to consider the implications fully.
We're already
limiting general attribute extensibility to NAME values so it can be
accommodated by the simplified syntax originally proposed for conditional
attribute extensibility. Yet now we're busy complicating the conditional
case considerably, raising the question of why a dedicated syntax for the
general case was judged out of scope originally.
Also the model
proposed for full-fledged attribute specialization here is appropriate only
to conditional attributes. If we are going to include a coherent and
consistent approach to specialization for attributes as part of this
proposal, it should apply to all kinds of attributes, not just conditional
processing ones.
--Dana
Chris Wong wrote: I wouldn't be so quick to dismiss
authoring requirements, Michael. Authors do like to see a reasonable preview
of their conditional text. This implies reconciling @props and the
specialized attributes and all the complexity in @props. Even if the
authoring tool implements this, writers themselves will not be isolated from
the complexity of trying to understand why certain text is hidden/shown. If
the authoring tool only implements conditional processing or profiling on
the actual attributes, then you have the divergence between
preview/authoring output and final output.
Chris
From: Michael Priestley [mailto:mpriestl@ca.ibm.com]
Sent: Tuesday, April 25, 2006 10:11 AM
To: Chris Wong
Cc: dita@lists.oasis-open.org
Subject: RE: [dita] attribute extensibility - summary
Chris, in a separate
reply I've added my own concerns about scope creep for 1.1, but it does
differ from yours. I do still think we need conditional processing logic
that will match against the generalized form as well as the specialized
form. I posted two scenarios to the list earlier that described cases where
this could be necessary, and it is an existing promise of specialization
that I am reluctant to break in the context of attribute specialization, for
numerous reasons (eg it's actually useful functionality; it's consistent
with other behaviors; it makes it difficult to talk about specialization's
general capabilities if we have exceptions and caveats all over the
place).
In
terms of the specific processing for props, Rob A's proposal has a
reasonably clear discussion of the implications I believe, and I'm hoping
you've had a chance to read it. His proposal reduces the generalization
nesting to just one level, which is sufficient to distinguish different
dimensions/axes of attributes (which affect processing logic) without
necessitating recursion.
If this is too complex for your applications, perhaps we could distinguish between required behaviors for different kinds of application:
- the generalized syntax is not intended to be directly authorable, and need not be supported by authoring applications
- the generalized syntax is intended to be a way to resolve differences between applications that share common processing pipelines, and so processing pipelines/final output processes should respect/interpret the generalized syntax
Would that
help?
In
specific response to your suggestion below that props be a virtual
attribute, I do think there are cases where props will have values authored
in it directly (eg when a DITA specializer has only one set of conditions to
worry about), but I don't think that should complicate the logic beyond
hope. Here's what I believe the logic would be, for a generalization-aware
conditional processing app (Robert, correct me if I'm wrong):
- processing app checks ditaval to get list of attributes and values to check (eg audience, "programmer")
- processing app opens a document, and checks the domains att to get the list of ancestors and children of the given attribute (eg props=ancestor of audience, jobrole=child of audience), and revises the list of attributes to be checked (eg props, audience, jobrole)
- processing app checks each attribute for the values given (eg "programmer")
- if an ancestor attribute has a subtree labelled with the given attribute (eg props="audience(programmer)") then evaluate that subtree as if it were an attribute
- if the given attribute or any of its children have either directly contained values or subtree values that match the given one (eg "programmer"), evaluate the attribute or attribute subtree in question.
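A minimal sketch of that logic in Python; the shape of the domains bookkeeping and the `attr(value)` subtree notation are assumptions drawn from this thread, not settled DITA syntax:

```python
import re

def expand_attributes(target, domains):
    """From the (assumed) domains info, derive which attributes to
    check for a target attribute: itself, its ancestors, and its
    children. 'domains' maps each attribute to its ancestor chain,
    eg {'jobrole': ['audience', 'props'], 'audience': ['props']}."""
    attrs = {target}
    attrs.update(domains.get(target, []))                            # ancestors
    attrs.update(a for a, anc in domains.items() if target in anc)   # children
    return attrs

def matches(elem_attrs, target, value, domains):
    """Check each relevant attribute, treating a labelled subtree such
    as props="audience(programmer)" as if it were the target attribute
    itself."""
    relevant = expand_attributes(target, domains)
    for attr in relevant:
        raw = elem_attrs.get(attr, "")
        # labelled subtrees inside an ancestor attribute
        for label, sub in re.findall(r"([\w-]+)\(([^)]*)\)", raw):
            if label in relevant and value in sub.split():
                return True
        # directly contained values (with subtrees stripped out first)
        if value in re.sub(r"[\w-]+\([^)]*\)", "", raw).split():
            return True
    return False

domains = {"audience": ["props"], "jobrole": ["audience", "props"]}
assert matches({"props": "audience(programmer)"}, "audience", "programmer", domains)
assert matches({"jobrole": "programmer"}, "audience", "programmer", domains)
```

The second assertion shows the fallthrough Michael describes: a ditaval that asks about audience also matches content labelled with the specialized child jobrole.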
This is complex, I agree, but I don't think beyond hope - and
it only needs to happen for the pipeline case, and never affects authoring,
and provides specialization-aware interoperability which is consistent with
our existing behaviors and messages about DITA and
specialization.
I was catching up on this discussion
(thanks for this summary, Bruce) and as I waded through the emails I got
a sense of dread and panic. Guys, have you considered how scary and
complex this is becoming? When you start to see something resembling LISP
code in your attributes, maybe there is some overengineering going
on.
The main motivation behind this feature is to simplify
conditional processing. We already have a mechanism in DITA 1.0 to extend
metadata axes by stuffing everything into @otherprops. Nobody uses it.
People only want to work with attributes. Michael, you did distinguish
between authoring complexity and processing complexity, but the two are not
easily separable the moment anything goes into @props. Conditional content
can be expressed in both @props and its specializations, meaning the two
attributes can complement or conflict. Authors/editors/publishers
have to reconcile or debug the specialization chain, even if they are
working at a generalized level.
What should specialized metadata axes
mean in a generalized form? If I am working with -- and understand -- only a
generalization of some specialization, I would not know what to do with all
those strange things in @props.
May I suggest the following to simplify common usage?
- @props shall be the magic specialization bucket. It is used only to facilitate specialization/generalization transforms, and shall be ignored otherwise.
- @props shall not at any time contain metadata of interest to the current level of specialization/generalization. Any relevant metadata shall be in specialized metadata attributes.
- Apart from @props, metadata attributes shall not contain complex expressions needing parentheses.
- Conditional processing -- whether authoring or processing -- shall use only real metadata attributes and ignore anything in the magic @props bucket.
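Under these rules the conditional-processing evaluator that mere mortals would implement shrinks considerably; a rough Python sketch, purely illustrative:

```python
def simple_evaluate(elem_attrs, wanted):
    """The simplified rule as Chris states it: conditional processing
    looks only at real metadata attributes; whatever sits in the magic
    @props bucket is ignored entirely (it exists only for the
    specialization/generalization transforms)."""
    for attr, value in wanted.items():
        if value in elem_attrs.get(attr, "").split():
            return True
    return False

# @props content never influences the decision:
assert not simple_evaluate({"props": "audience(programmer)"},
                           {"audience": "programmer"})
# Only the actual specialized attributes are consulted:
assert simple_evaluate({"audience": "programmer admin"},
                       {"audience": "programmer"})
```

The trade-off, relative to the generalization-aware logic Michael describes earlier in the thread, is that content carrying its conditions only in generalized @props form would be invisible to this evaluator.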
Under this scenario, it no longer matters how
complex @props becomes. The only time we worry about its content is during
specialization or generalization, where specialization-aware transforms
should understand its complexity anyway. The rest of us mere mortals who
want to implement, author or publish DITA with conditional processing will
only have to work with the actual attributes. Existing tools for conditional
processing -- even non-DITA tools -- that work off the attributes will be
right at home.
My apologies for jumping in like this. I have not had
the time to participate in your discussions, and I have no intention of
derailing your current thread of discussion. But I hope you will consider
the need to simplify usage in the common case.
Chris
From: Esrig, Bruce (Bruce) [mailto:esrig@lucent.com]
Sent: Tuesday, April 25, 2006 8:44 AM
To: 'Michael Priestley'; Paul Prescod
Cc: dita@lists.oasis-open.org
Subject: RE: [dita] attribute extensibility - summary
Here's an attempt
to summarize what's open on attribute extensibility.
Names just indicate a primary
contact for the issue, not necessarily someone who signed up to resolve
it.
Bruce Esrig
====================
Issues:
(1) Four kinds of extension:
(1a) Simple extension with a new attribute
(1b) Pure specialization where values are pooled among certain attributes
(1c) Structural specialization where values are treated as separate for a newly specialized attribute
(1d) Special DITA sense of specialization, where the rules are adapted for the needs of the specializer
(2) How to implement an evaluator for specialized attributes (Rob A.)
(3) Whether to allow values to specify the mode of specialization that they intend (Paul P.)
(4) Logic, such as "not", but also re-explaining "and"/"or" behaviors for the extended feature (Michael P.)
This is clearly a very rich space of
issues. In our discussion on Thursday, we made a lot of progress in defining
what we need to consider. As a team, we haven't yet formed a time estimate
of how long it would take to resolve enough of these issues to have a
definite proposal for DITA 1.1.
Here's a possible approach (Bruce's own thoughts)
to resolving the issues.
1. Agree that all attributes can be conditional.
2. Agree on which extension mechanisms are supported and, in the language and architecture, where they appear.
3. Establish a preliminary agreement on how to indicate which kind of extension mechanism applies to an attribute.
4a. Clearly describe the current logic based on the new understanding.
4b. Determine what the evaluator would do to implement the resulting suite of mechanisms, assuming it could recognize them.
5. Establish a complete syntax description for the extension mechanisms sufficient to support the needs of the evaluator, both in the specialized form and the generalized form.
6. Agree on what additional logic to allow.
7. Determine impacts of the additional logic on the syntax and the evaluator.