xslt-conformance message



Subject: Ideas for XSLT/Xpath Test Suite


 
Hi all:
 
    NIST is interested in developing a test delivery system which may have an
execution component to it.  This execution component is intended to be a tool to
facilitate test execution, not an assessment of any processor.  Neither NIST nor
the XSLT/XPath committee is interested in assessing, evaluating, validating,
certifying, or otherwise measuring any implementation for the purpose of making
judgements.
 
Our past testing experience indicates that "if you build it, they
will come" (from that famous movie).  It is our belief that the easier
it is for users to customize, view, or execute the test suite, the more
likely it is that the test suite will actually be used.
 
Other matters that require our attention at this point:
------------------------------------------------------------------------------------------------------------------------
    Specifications file - I propose a file with the following format (pseudo-code):
 
            <TEST specifications = "XSLT or XPATH" category = "Test
                       category" set_switches = "YES or NO">
                 <TEST_NAME>submitterX/.../testName.xsl</TEST_NAME>
                 <TEST_INPUT>submitterX/.../inputFile.xml</TEST_INPUT>
                 <SPECS_SECTION>x.yy</SPECS_SECTION>
                 <DESCRIPTION>Test description</DESCRIPTION>
                 <EXPECTED_RESULTS>submitterX/.../results.html (or .out?)
                 </EXPECTED_RESULTS>
                 <REMARKS>Any special remarks, or "N/A"</REMARKS>
            </TEST>
 
        where
             submitterX - name of the test submitter (e.g. NIST, Lotus, etc.)
             specifications - XSLT or XPath, accordingly
             set_switches - a "YES" or "NO" value that indicates whether or not this test
                                   is prone to produce more than one behavior, i.e. this test may
                                   be removed during toggling.
 
             TEST_NAME - test name (including the file path) for the test file.
             TEST_INPUT - path indicating where (or which) ".xml" file to use.
             SPECS_SECTION - section of the specifications tested.
             DESCRIPTION - textual description of the test.
             EXPECTED_RESULTS - path of the file containing the vendor-supplied
                                                    expected results.
             REMARKS - any special remarks pertaining to this test, or "N/A" if none.
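 
     To make this concrete, here is one possible filled-in entry.  Every value below
     (submitter directory, file names, section number) is made up purely for
     illustration; the category value is taken from the test categorization proposed
     further below:
 
            <TEST specifications = "XSLT" category = "Result Tree Tests"
                       set_switches = "NO">
                 <TEST_NAME>NIST/ResultTree/attribute01.xsl</TEST_NAME>
                 <TEST_INPUT>NIST/ResultTree/attribute01.xml</TEST_INPUT>
                 <SPECS_SECTION>7.1.3</SPECS_SECTION>
                 <DESCRIPTION>xsl:attribute adds an attribute to a literal
                      result element</DESCRIPTION>
                 <EXPECTED_RESULTS>NIST/ResultTree/attribute01.out</EXPECTED_RESULTS>
                 <REMARKS>N/A</REMARKS>
            </TEST>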
 
---------------------------------------------------------------------------------------------------------------
     Normative (human-readable) HTML Test Report
 
                                                     Test Report
 
                      Test Name: name
                      Specs. Section: section
                      Description: test description
                      Input Source: clickable item to display the input data on demand
                      Expected Results: clickable item to display the expected output
                                      on demand
                      Remarks: any pertinent remarks for this test (present only if
                                      remarks are specified)
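 
     Since the specifications file is itself XML, one way to generate this report
     would be a small stylesheet run over it.  The following is only a rough sketch:
     it assumes the element and attribute names from the pseudo-code above plus a
     single wrapper element (called TESTS here) around the TEST entries, all of
     which are still open for discussion.
 
        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="html"/>
          <!-- Assumed catalog layout: /TESTS/TEST, as in the pseudo-code above -->
          <xsl:template match="/TESTS">
            <html><body>
              <h2>Test Report</h2>
              <!-- One report block per TEST entry in the catalog -->
              <xsl:for-each select="TEST">
                <p>
                  Test Name: <xsl:value-of select="TEST_NAME"/><br/>
                  Specs. Section: <xsl:value-of select="SPECS_SECTION"/><br/>
                  Description: <xsl:value-of select="DESCRIPTION"/><br/>
                  <!-- "Clickable items": plain links to the input and
                       expected-results files -->
                  Input Source: <a href="{TEST_INPUT}">input data</a><br/>
                  Expected Results: <a href="{EXPECTED_RESULTS}">expected output</a>
                  <!-- Remarks line appears only when remarks were supplied -->
                  <xsl:if test="REMARKS != 'N/A'">
                    <br/>Remarks: <xsl:value-of select="REMARKS"/>
                  </xsl:if>
                </p>
              </xsl:for-each>
            </body></html>
          </xsl:template>
        </xsl:stylesheet>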
 
-------------------------------------------------------------------------------------------------------------
 
  Execution Test Results (Harness results, execution component)
 
                                                Test Execution Results
                                                 Processor: IUT (implementation under test)
                                                 Tested Subset: subset tested (see the test
                                                                            categories below)
                                                   Date: The date

                      Test Name: name
                      Specs. Section: section
                      Description: test description
                      Input Source: clickable item to display the input data on demand
                      Execution Results: string containing the processor's results
                      Expected Results: clickable item to display the expected output
                                      on demand
                      Remarks: any pertinent remarks for this test (present only if
                                      remarks are specified)
---------------------------------------------------------------------------------------------------------------------
 
    Normative XML rendition report
 
                        should really look the same as the entire normative XML file
                        (only smaller)
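 
     One possible way to produce such a smaller rendition is to filter the full
     catalog down to a chosen category and copy the matching entries through
     unchanged.  Again, just a sketch, reusing the assumed TESTS wrapper and the
     category attribute from the pseudo-code above:
 
        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>
          <!-- Category to keep; the default here is hypothetical and can be
               overridden when the stylesheet is invoked -->
          <xsl:param name="subset" select="'Result Tree Tests'"/>
          <xsl:template match="/TESTS">
            <xsl:copy>
              <!-- Copy through only the catalog entries in the requested category -->
              <xsl:copy-of select="TEST[@category = $subset]"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>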
----------------------------------------------------------------------------------------------------------------
 
Test Categorization - As I suggested at the meeting last June 6, I think the following
system works for just about all possible tests (it pretty much follows the organization
of the specs).
 
                                                       XSLT Tests
 
                 Stylesheet Structure Tests
                     Namespace, Stylesheet element,  Literal result element,
                     Qualified names, Forwards-compatible processing,
                     Combining stylesheets
 
                 Data Model Tests
                      Root node children, Base URI,
                      Unparsed  entities,  Whitespace stripping
                 
                  Template Tests
                      Template rules, Named templates
 
                   Result Tree Tests
                       Creating elements and attributes,
                       Creating text, Creating processing instructions,
                       Creating comments, Copying, Computing generated text,
                       Numbering
 
                  Data Manipulation Tests
                        Repetition, Conditional processing,
                       Sorting, Variables and Parameters, Additional functions
    
                  Extension Tests
                       Extension elements, Extension functions
                  
                  Message-Fallback Tests
                       Message, Fallback
 
                  Output Tests
                         XML output, HTML output, Disabling output escaping
 
                    
                                                            XPath Tests
 
                    Location Path Tests
                           Location steps, Axes, Node tests, Predicates,
                           Abbreviated syntax
 
                   Expression Tests
                           Basics, Function calls, Node-sets, Booleans,
                           Numbers, Strings, Lexical structure
 
                   Core Function Tests
                          Node set functions, String functions, Boolean functions,
                          Number functions
 
                  Data Model Tests
                          Root node, Element nodes, Attribute nodes, Namespace
                          nodes, PI nodes, Comment Nodes, Text Nodes
---------------------------------------------------------------------------------------------------------------------
 
Any comments, suggestions, changes, etc., to any of the above items are
welcome and open for discussion.  I know there are a number of other details
that must also be addressed; hopefully by July 6 there will be none.
 
When I get back from Paris (week of June 19th), I will start developing some kind of
working prototype (which may include parts of the harness), open for discussion,
and hopefully we can reach agreement on these basic issues.
 
Greetings,
 
Carmelo Montanez                  
                            

