Subject: DTD and XSD Generation Testing Assistance


Now that I seem to have the DTD and XSD generation working completely and
correctly, it's time to focus on completing the testing part of this
system. This is a call for assistance with that implementation.

The instructions for running the tests are in SVN here:

doctypes/test/how-to.dita

If the instructions are not clear or the process doesn't work on your
system, please let me know.

There are some programming tasks that I need to do or get help with, but
I'm still researching the base tools and approaches for those.

In the meantime, here are two tasks that anyone on the TC should be able
to contribute to, both of which involve expanding the existing valid and
invalid test documents:

1. Extend valid test documents

Right now I have, for each TC-provided shell, documents that should be
reported as valid against that shell and its integrated modules.
 
Each shell-specific document serves two purposes:

A. Verify that the shell is generally usable (that is, it can be parsed at
all). This serves as a "smoke test" for the generation process.

B. Verify that specific content models or attribute rules are correctly
validated. 
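
To make purpose A concrete, here is a minimal sketch in Java of the kind
of validating parse involved (my illustration only, not part of the actual
harness; the class name, command-line handling, and output format are all
made up): a DTD-validating SAX parse where getting through the parse at
all is the smoke test, and a zero error count covers purpose B.

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class ShellSmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical entry point; the real test docs live under
        // doctypes/test/1.3/basedocs/valid/
        File doc = new File(args[0]);

        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true); // turn on DTD validation

        final int[] errorCount = {0};
        SAXParser parser = factory.newSAXParser();
        parser.parse(doc, new DefaultHandler() {
            @Override
            public void error(SAXParseException e) {
                // Validity problems land here; fatal (well-formedness)
                // problems still abort the parse with an exception.
                errorCount[0]++;
                System.err.printf("%s line %d: %s%n",
                        e.getSystemId(), e.getLineNumber(), e.getMessage());
            }
        });

        // Purpose A: reaching this point means the shell could be loaded
        // and the document parsed at all (the smoke test).
        // Purpose B: zero validity errors means the content models and
        // attribute rules were satisfied.
        System.out.println(doc + ": " + errorCount[0] + " validity problem(s)");
        System.exit(errorCount[0] == 0 ? 0 : 1);
    }
}

You would run it as, e.g., java ShellSmokeTest topic.dita, with the
document's DOCTYPE pointing at the shell under test.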

The current documents are sufficient to test new 1.3 vocabulary and
important or problematic 1.2 rules (e.g., constructs that I had been
generating incorrectly and needed to verify fixes for).

However, these documents do not cover the whole of the DITA 1.3
vocabulary.

It would be ideal if we could expand the content of these documents to
include as many element types, and as many useful variants of those
elements, as possible, in order to provide deeper and wider coverage of
the content models.

NOTE: I don't need more *documents*, I need more *elements* within the
documents that are already there. The only time there needs to be more
than one test document for a given shell is when two markup options
cannot coexist in the same document, such as the choice between having
just a shortdesc or an abstract with a shortdesc, or testing different
topic nesting options for the root document.

Otherwise there is no value in having more documents: the correctness
test for a given shell is binary; either it all works or it all fails.
Because of how XML validation works, failure is reported for the document
as a whole, and that is sufficient. The validation messages will indicate
what the specific problem was, which is all that's needed to diagnose and
resolve the failure. The normal case should be that everything succeeds.
So more documents don't help: we're not testing individual elements,
we're testing the shells and modules as integrated units.

If you would like to contribute to the test documents, they are in SVN
under doctypes/test/1.3/basedocs/valid/

The file naming and organization scheme should be obvious. Note that the
base docs don't have any associated schemas, so the best way to create
test cases is to author the markup in your favorite editor and then cut
and paste it into the test document.

Feel free to commit updates to the existing test documents as long as
you've verified that they are valid (that is, that the markup you've added
should be reported as valid by correct 1.3 grammars).

2. Extend the set of invalid test documents

These documents are in doctypes/test/1.3/basedocs/invalid/

The purpose of these documents is to verify that things that should be
disallowed are in fact disallowed. The docs I have there now test specific
cases that were at one point generated incorrectly, so I created test
cases to verify my fixes. But there are a number of important constraints
imposed by the DITA spec that should also be checked, including disallowed
topic nesting, contexts in which <keyword> must not occur, constraints
actually doing what they should, attribute values that are allowed in one
context but not another, etc.

I currently don't have automated testing for the invalidity checks: that
requires a bit of code I haven't yet had a chance to write, code that
inverts the normal validity reporting so that documents reported as valid
count as failures and documents reported as invalid count as successes.
This shouldn't be hard to do in Java, or maybe even with existing Ant
features (I'm not sure), but my current test evaluation mechanism uses
log analysis, so any message containing "error" causes the overall test
suite to fail.
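
For what it's worth, here is a rough sketch of that inversion in Java
(again, just my illustration; the class name is made up, and a real
harness would batch whole directories rather than take one file): the
same validating parse as for the valid docs, but with the pass/fail sense
flipped, and with success messages worded so they don't contain the
string "error" and trip the log analysis.

import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class ExpectInvalid {
    public static void main(String[] args) throws Exception {
        // Hypothetical: a document from doctypes/test/1.3/basedocs/invalid/
        File doc = new File(args[0]);

        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true);

        final int[] reports = {0};
        factory.newSAXParser().parse(doc, new DefaultHandler() {
            @Override
            public void error(SAXParseException e) {
                // For these documents, validity reports are expected.
                reports[0]++;
            }
        });

        // Inverted reporting: validity reports are the success case,
        // and a clean parse is the failure case.
        if (reports[0] > 0) {
            System.out.println("EXPECTED-INVALID OK: " + doc
                    + " (" + reports[0] + " validity report(s))");
            System.exit(0);
        } else {
            System.out.println("EXPECTED-INVALID FAILED: " + doc
                    + " was reported as valid");
            System.exit(1);
        }
    }
}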

So if there are important constraints you want to verify are being
imposed, feel free to update the invalid documents in SVN as needed. Only
add new documents if existing invalid documents cannot be used for the
invalidity check you want.


Thanks,

E.
—————
Eliot Kimber, Owner
Contrext, LLC
http://contrext.com




