xslt-conformance message



Subject: Initial reaction to attempted test annotation


I just converted a group of test cases to carry annotations
compatible with the WoodenMan proposal. For the most part, I
used the pointers from our customized XSLT specs.

As a forward-looking design, the TinMan version of the test
cataloging will refer to the type of marker as "OASISptr1"
for "OASIS pointer type 1", allowing us to develop better
pointers in the future and have compatible naming. In the
last proposal, it was just "OASISptr", but it's a good idea
to anticipate change, just as the XSLT spec itself does.
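
To make the naming concrete, here is a sketch of what a catalog
entry might look like. The element and attribute names below are
illustrative guesses, not an agreed TinMan vocabulary; only the
type value "OASISptr1" is the part being proposed.

   <test-case id="number01">
     <!-- type names the pointer scheme, so a future "OASISptr2"
          could coexist with these entries in the same catalog -->
     <spec-citation spec="XSLT" version="1.0" type="OASISptr1"
         place="id(number)/ulist[2]/item[1]/p[1]/text()[1]"/>
   </test-case>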

The tests that I annotated all concerned xsl:number, so there
was nice locality of reference. I found that I had to scan
the customized spec thoroughly before beginning, and I had to
pick some pointers to represent certain concepts. So when I
have a test case that explicitly says level="single", I note
place="id(number)/ulist[2]/item[1]/p[1]/text()[1]" though
several text() portions could have been cited. If there is no
explicit mention of level, and it defaults to single, I note
place="id(number)/ulist[1]/item[1]/p[1]/text()[5]", the
beginning of the sentence that says single is the default.
My current thinking is that we can only compare and meld
different test suites if we generate maps of the specific
fragments that we want to stand for particular concepts. The
numbering tests typically had 3-5 different citations, mainly
because xsl:number always defines level, count, and from
behavior, even when defaulted. I didn't bother to cite the
default for format and others, except when that was the whole
point of the test. If I hadn't written all these tests, I
would have taken extra time to decide what was being tested.
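
For example, a minimal xsl:number test that takes all the
defaults might still carry three citations, one each for level,
count, and from. The markup below reuses the hypothetical
element names from the sketch above; the place values for the
count and from defaults are made up for illustration.

   <test-case id="number17">
     <!-- level, count, and from are all defaulted, but
          xsl:number defines their behavior anyway -->
     <spec-citation type="OASISptr1"
         place="id(number)/ulist[1]/item[1]/p[1]/text()[5]"/>
     <!-- illustrative pointers for the count and from defaults -->
     <spec-citation type="OASISptr1"
         place="id(number)/ulist[1]/item[2]/p[1]/text()[1]"/>
     <spec-citation type="OASISptr1"
         place="id(number)/ulist[1]/item[3]/p[1]/text()[1]"/>
   </test-case>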

Sometimes, one paragraph in the spec has many separate parts
that are testable, but no font changes and hence no internal
markers. I dread having to look at sets of tests that would be
gathered under such broad citations. Sections 7.7 and 7.7.1
aren't too bad for getting particular sentence fragments of
interest. It's a little hard to say what to cite when you're
testing the separator tokens, though.
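
For instance, a test aimed squarely at the separator tokens
might contain nothing more interesting than this (the count
pattern is arbitrary):

   <!-- the "." and ")" in the format string are separator
        tokens; level="multiple" makes them visible in output
        such as "2.1)" -->
   <xsl:number level="multiple" count="chapter|section"
       format="1.1)"/>

Everything interesting here happens in the format attribute, so
the test exercises prose that is easy to read but hard to point
at.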

The pointers are way inscrutable in the cataloging context.
They look very much alike. You can look at them yourself if
you want to do a CVS check-out from Apache's repository. We
may want to invent summary sentences for the ones that we
decide to use.

After this exercise, I wonder if this kind of annotation will
pay off. It certainly is more machine-friendly than
human-friendly, and I probably shouldn't make presumptions
about how the machines feel. A good next step would be to
discuss how we can use the spec markers to establish when a
particular paragraph is well covered in the consolidated
test suites.
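
As a strawman for that discussion, an XSLT 1.0 stylesheet along
these lines could count how many catalog entries cite each spec
fragment, assuming the hypothetical spec-citation markup
sketched earlier:

   <?xml version="1.0"?>
   <xsl:stylesheet version="1.0"
       xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:output method="text"/>
     <!-- index all citations by their place pointer -->
     <xsl:key name="by-place" match="spec-citation" use="@place"/>
     <xsl:template match="/">
       <!-- Muenchian grouping: visit each distinct place once -->
       <xsl:for-each select="//spec-citation[generate-id() =
           generate-id(key('by-place', @place)[1])]">
         <xsl:value-of select="@place"/>
         <xsl:text>: </xsl:text>
         <xsl:value-of select="count(key('by-place', @place))"/>
         <xsl:text> test(s)&#10;</xsl:text>
       </xsl:for-each>
     </xsl:template>
   </xsl:stylesheet>

Fragments that no test cites would not appear in that report at
all, so real coverage checking would also need the map of
fragments-to-concepts mentioned above.
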
.................David Marston


