Okay, I understand (theoretically) the ambition to
make specialization something more than just an easy way to customize.
But what's wrong with providing both? Why shouldn't DITA be easy to
customize, where customization is application specific and willing to
be ignored everywhere else?
And I think making DITA easier to customize with respect to attributes
is a big deal to the community of potential users. The following is from
another list I'm on, where someone asked people to summarize their take
on DITA:
* What is the worst thing about using DITA?
-- You have to break DITA (or add to it) to do anything useful with
attributes. The DITA committee is developing a solution to this right now.
The problem is that you can't add arbitrary attributes like you can with
Michael Priestley wrote:
It would be accomplished using an entity
redefinition, same as with domains.
I think the theoretical advantages are
substantial. Under the current model, specialization modules are plug and
play: we can determine by inspecting the class and domains attributes which
modules are needed for a document type, compare constraints/modules across
document types, automatically determine lowest common denominators for
different document types, and generally make information exchange work
on an automatic level across document type boundaries.
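As a rough illustration of the kind of inspection this enables, here is a minimal sketch in Python, assuming the standard DITA domains-attribute syntax (the function names are mine, and the tokens `hi-d`/`ut-d` are just the familiar highlight and utilities domains):

```python
import re

def modules_in_use(domains: str) -> set[str]:
    """Extract the module names declared in a DITA-style domains
    attribute, e.g. "(topic hi-d) (topic ut-d)" -> {"hi-d", "ut-d"}.
    Each parenthesized group ends with the module it declares."""
    return {group.split()[-1] for group in re.findall(r"\(([^)]*)\)", domains)}

def common_modules(domains_a: str, domains_b: str) -> set[str]:
    """Lowest common denominator: the modules two document types share,
    computed purely from their declared domains values."""
    return modules_in_use(domains_a) & modules_in_use(domains_b)
```

The point is that no hand inspection of the DTDs is needed: the declared values alone are enough to compare two document types.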
If we allow attributes to be part of
a specific custom DTD, not part of a specialization, then we lose the ability
to move that information up the framework or identify which attributes
have been added. Instead of having a document type that follows
rules that can be automatically compared and ultimately automatically migrated,
we have a custom DTD that must be built by hand, customized by hand, and
migrated by hand. In other words it no longer operates as part of a framework,
because something outside the framework has its hooks in it.
If you view specialization as just a
way to make customization easier, what you're saying makes sense. But it
is a lot more than that. The advantages of specialization, as described
above, are based on thought experiments rather than experience, simply because
DITA is in its early days and there are only a few dozen specializations
floating around, at different levels of completeness and maturity.
But if you follow the thought experiment forward, and think of what happens
when we have hundreds of specializations across industries and
communities and want to manage the differences and commonalities in a consistent
and scalable way, specialization delivers what customization cannot: a
way to automatically inspect, compare, and reconcile those differences
without loss of information or loss of processing capability.
That works for me too - though given the
only theoretical utility of roundtripping, I'd prefer a more easily customizable
option as well.
Again, why not an empty parameter entity (DTD) and attribute group (Schema)
that you could put anything at all in, and which would be discarded on
generalization?
Attributes should actually be easier to specialize than elements, not harder:
there's no content model to enforce. So why not throw them wide open?
Processing can simply ignore the additions.
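A minimal sketch of what such an empty hook might look like in the DTD case - the entity and element names here are illustrative, not actual DITA declarations:

```dtd
<!-- Empty by default; a document-type shell can redefine this
     entity before including the module, to add local attributes. -->
<!ENTITY % local-atts "">

<!-- Hypothetical attribute list combining the standard universal
     attributes with the customization hook. Unless the shell
     redefines local-atts, this declares nothing extra. -->
<!ATTLIST p
  %univ-atts;
  %local-atts;
>
```

Since an unredefined parameter entity expands to nothing, existing document types and processors are unaffected.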
Michael Priestley wrote:
I was thinking roughly the same thing, although perhaps with "meta"
as the generic ancestor, parallel with "props". If we are
willing to restrict the normal content of "meta" to be simple
tokens (ie simply don't allow parentheses except in the generalized form),
then we could use the exact same model for generalizing/roundtripping
attributes. Effectively we'd have one generic ancestor attribute for
conditional-processing attributes, and one for anything else. They could also share
the same XSLT library for unpacking the conditions if processing is done
in the generalized form (any process that can't handle the generalized
form would be considered specialization-unaware).
I propose the following:
a) We make a new attribute called "otherattrs" (parallel to otherprops,
which is just for selection/filtering)
b) We make a new issue for specializing the "otherattrs" attribute
c) We synchronize the generalization/specialization mechanism for
"otherattrs" and "props"
In thinking about this, it seems not too difficult at a first
approximation. The main two issues are:
1. Escaping paren characters that would otherwise be confused for delimiters.
* This can be solved by having an escaping mechanism like "two
paren characters resolve to one, three resolve to two, etc. A paren
character alone represents an end-of-attribute marker".
2. Keeping track of which attribute values have ALREADY been generalized
so that we don't end up escaping the value over and over again (or
unescaping it wrongly).
* This can be solved with an architectural attribute that lists
the attributes that are already generalized.
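A hedged sketch of that escaping rule in Python - the function names and the name(value) token shape are my own illustration of the mechanism, not part of the proposal:

```python
def escape_parens(value: str) -> str:
    """Escape a literal attribute value for the generalized form:
    every paren is doubled, so that an undoubled paren can safely
    act as a delimiter (an end-of-attribute marker)."""
    return value.replace("(", "((").replace(")", "))")

def unescape_parens(value: str) -> str:
    """Reverse the escaping: two parens resolve to one,
    three resolve to two, and so on."""
    return value.replace("((", "(").replace("))", ")")

def generalize(name: str, value: str) -> str:
    """Pack one specialized attribute into a name(value) token,
    as it might appear inside a generalized ancestor attribute."""
    return f"{name}({escape_parens(value)})"
```

Because escaping only doubles parens, running it twice would corrupt the value - which is exactly why issue 2 (tracking which attributes are already generalized) matters.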
So, for example, I could specialize "otherattrs" with an attribute
that represents the last-changed-date for an element.
Generalized, that might look like:
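As a hedged sketch, assuming the name(value) packing described above (the attribute name, value, and host element are all hypothetical):

```xml
<!-- Specialized form: a hypothetical "lastchanged" attribute
     specialized from "otherattrs" -->
<p lastchanged="2005-03-15">Some content.</p>

<!-- Generalized form: the same value packed back into the
     generic ancestor attribute -->
<p otherattrs="lastchanged(2005-03-15)">Some content.</p>
```

A specialization-aware process could unpack the token back into the specialized attribute; an unaware process would simply see an ordinary otherattrs value.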
Still to think through:
a) does this handle multiple levels of specialization well?
b) is there a requirement to handle multiple levels of specialization?
c) what does the processing (e.g. XSLT or CSS) look like to handle the
generalized form?
d) is there a more elegant solution than "generalizedprops" - for example,
looking at the domains in scope after generalization?
For me, the answers to questions a-c are also not clear yet for
Michael's current proposal.