I was supposed to start this thread last week but fell down on that. Better late than never.
We need to discuss how we want to handle testing of the RNG -- what policy we want to have in place, who is doing the work, and (if needed) how that work can be repeated by others. I think it's important that whatever our Official Process becomes, anybody should be able to set it up and repeat it with minimal work. That was not the case with DITA 1.2, where everything relied on a long series of tools and scripts on my own system.
High points -- here are the things I did while testing with 1.2:
* Kept an XML rendering of each doctype
* For each new feature:
** Integrate the new change
** Verify that the DTD still parsed (run it through my generally very picky Omnimark parser, open it in a validating editor)
** Verify that the desired new markup was there
** Regenerate the XML rendering of the DTD and diff it against the previous version to ensure no unintended changes (see the sketch after this list)
* Repeat for each new feature
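
For anyone who wants to repeat that regenerate-and-diff step without my Omnimark setup, here is a minimal sketch of what it could look like in Python with lxml. The file names and the choice of what goes into the rendering (element names plus their attribute names) are placeholder assumptions on my part; a real rendering would need to cover content models and parameter entities as well.

    import difflib
    import sys
    from lxml import etree

    def render_dtd(dtd_path):
        """Produce a stable text rendering of a DTD: one line per
        element declaration with its attribute names, sorted so
        that diffs stay meaningful across regenerations."""
        dtd = etree.DTD(dtd_path)
        lines = []
        for el in sorted(dtd.elements(), key=lambda e: e.name):
            attrs = " ".join(sorted(a.name for a in el.attributes()))
            lines.append(f"{el.name}: {attrs}")
        return lines

    def diff_against_baseline(dtd_path, baseline_path):
        """Diff the current rendering against a saved baseline."""
        with open(baseline_path) as f:
            baseline = f.read().splitlines()
        current = render_dtd(dtd_path)
        return list(difflib.unified_diff(baseline, current,
                                         "baseline", "current",
                                         lineterm=""))

    if __name__ == "__main__":
        for line in diff_against_baseline(sys.argv[1], sys.argv[2]):
            print(line)

The intended new markup should show up in the diff; anything else that shows up is an unintended change to investigate.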
For 1.3 I think Eliot has already been doing some of this -- validating with parsers, and ensuring that the new markup is available.
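
For reference, here is roughly what those two checks could look like if we standardize on something freely available like Python's lxml (libxml2 underneath) instead of my local scripts. The document and grammar file names are made up for illustration:

    from lxml import etree

    # A test document that exercises the new markup.
    doc = etree.parse("new-feature-test.dita")

    # DTD validation (libxml2 via lxml).
    dtd = etree.DTD("topic.dtd")
    if not dtd.validate(doc):
        print(dtd.error_log)

    # RELAX NG validation against the corresponding RNG grammar.
    rng = etree.RelaxNG(etree.parse("topic.rng"))
    if not rng.validate(doc):
        print(rng.error_log)

I believe libxml2's RELAX NG support has some gaps, which is one more argument for testing with more than one parser (see below).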
Do we want to keep up the "make sure no unintended consequences" test, and if so, how? I think this is much more difficult with doctypes that are already essentially complete (it's easiest when checking as each feature is added).
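One option: snapshot a baseline rendering of each doctype as it stands today (for example, run the rendering sketch above over the current grammars and save the output). The diff check would then at least catch unintended changes from this point forward, even if it can't tell us anything about changes that already went in.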
Do we have tools that can do other DITA-based validation -- ensure that the specialization is correct, maybe catch a Learning and Training element that has an incorrectly constructed class attribute, etc.?
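
I don't know of an off-the-shelf tool for that, but a basic checker is not hard to sketch. Assuming the same lxml DTD API as above (the attribute-declaration accessors are my assumption, and the file name is just an example), something like this could flag class attributes whose default value doesn't have the expected "- module/element ... " shape, or whose last token doesn't name the element itself:

    import re
    from lxml import etree

    # DITA @class values look like
    # "- topic/topic learningBase/learningBase " (or "+ ..." for
    # domain specializations): a - or + flag, one module/element
    # token per specialization level, and a mandatory trailing space.
    CLASS_PATTERN = re.compile(r"^[+-] ([^/\s]+/[^/\s]+ )+$")

    def check_class_defaults(dtd_path):
        """Flag element declarations whose default @class value is
        malformed or whose final token does not match the element."""
        dtd = etree.DTD(dtd_path)
        problems = []
        for el in dtd.elements():
            for attr in el.attributes():
                if attr.name != "class" or attr.default_value is None:
                    continue
                value = attr.default_value
                if not CLASS_PATTERN.match(value):
                    problems.append((el.name, value, "malformed value"))
                elif value.split()[-1].split("/")[-1] != el.name:
                    problems.append((el.name, value,
                                     "last token does not match element"))
        return problems

    for name, value, why in check_class_defaults("learningBase.dtd"):
        print(f"{name}: {why} ({value!r})")

The same idea should work over the RNG grammars with a bit more parsing, since the class defaults live in ordinary attribute declarations there too.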
Who here wants to sign up for testing of RNG, DTD, or XSD? We don't want to waste time duplicating effort, but it might be a good thing if we're doing some of this testing with different parsers, for example -- I've found things in the past that opened OK in Arbortext while Omnimark threw an error, or vice versa.
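
If it helps, a tiny harness could run the same test document through genuinely independent implementations -- say xmllint (libxml2) for the DTD and Jing (a separate Java implementation) for the RNG -- and report disagreements. File names here are examples only:

    import subprocess

    # Two independent validators over the same test document.
    checks = [
        ["xmllint", "--noout", "--dtdvalid", "topic.dtd", "test-doc.dita"],
        ["java", "-jar", "jing.jar", "topic.rng", "test-doc.dita"],
    ]

    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{status}: {' '.join(cmd)}")
        if result.returncode != 0:
            print(result.stdout or result.stderr)

A document that passes one validator and fails the other is exactly the kind of thing we'd want to catch early.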
Thanks,
Robert D Anderson
IBM Authoring Tools Development
Chief Architect, DITA Open Toolkit (http://dita-ot.sourceforge.net/)