dita-adoption message



Subject: FYI: Whitepaper Proposal - Best Practices for Testing Content Converted into DITA


Hi all --
I am in the middle of a migration at work. I drafted this checklist for our internal testing. Do you believe that something along these lines would be useful as a whitepaper?
=======================================================
     OASIS DITA Adoption Technical Committee
     Whitepaper Proposal: Best Practices for
     Testing Content Converted to OASIS DITA 1.3
            PART I - Manual Testing
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
                  Stan Doherty
=======================================================

This whitepaper would focus on best practices for manual testing of converted DITA content. Manual testing is most effective as a diagnostic exercise early in the migration and conversion process. Manual testing should yield to automated acceptance testing (Schematron) before the converted content is staged for production deployment.

Manual testing takes two forms:

1. Positive testing: Teams evaluate whether the converted XML sources and transformed
   content conform to what they expect.

   The whitepaper would provide examples of what to look for and how to measure 
   success.

2. Negative testing: Teams deliberately inject errors into the XML sources and assess
   whether the system reports the errors and (ideally) identifies corrective actions.  

   The whitepaper would provide examples of what to inject and how to measure 
   the impact on an information set.

A. POSITIVE TESTING
-------------------------------------------------------------------------------------
   1. Validation in the DITA editor
      a. Are all the maps valid?
      b. Are all the topics referenced from maps valid?
      c. Are all key definition maps valid?
      d. Are all the library topics valid?
      e. Are all the @conref and @conkeyref references to keys and shared
         library content valid?
      f. Are glossary topics valid and available for referencing?
      g. Are all references from parent maps to subordinate maps valid?
      h. Are all references from maps to subordinate topics valid?
      i. Are all topic-to-topic cross-references valid?
      j. Are all web cross-references valid?
      k. Are there any empty maps or topics?
      l. Are there any empty elements in maps or topics?
      m. Are there any spurious control characters, extended 
         ASCII characters, or character entities in the XML?
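   Several of the checks above, e.g. empty maps or topics (k), empty
   elements (l), and spurious control characters (m), can be scripted as a
   first pass before anyone opens files in an editor. Here is a minimal
   sketch in Python using only the standard library; the sample topics and
   the working definition of "empty" (no text, no children, no attributes)
   are my own simplifications, not anything mandated by DITA:

   ```python
   import re
   import xml.etree.ElementTree as ET

   # Control and C1 "extended ASCII" characters that usually indicate
   # encoding damage (tab, LF, and CR are deliberately excluded).
   CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\x9f]")

   def check_topic(xml_text):
       """Return a list of problems found in one topic's XML source."""
       problems = []
       if CONTROL_CHARS.search(xml_text):
           problems.append("control characters present")
       try:
           root = ET.fromstring(xml_text)
       except ET.ParseError as err:
           problems.append(f"not well-formed: {err}")
           return problems
       # Flag elements with no text, no children, and no attributes.
       for elem in root.iter():
           if (elem.text is None or not elem.text.strip()) \
                   and len(elem) == 0 and not elem.attrib:
               problems.append(f"empty element: <{elem.tag}>")
       return problems

   good = '<topic id="t1"><title>OK</title><body><p>text</p></body></topic>'
   bad  = '<topic id="t2"><title>Bad</title><body><p></p></body></topic>'
   print(check_topic(good))   # []
   print(check_topic(bad))    # ['empty element: <p>']
   ```

   A script like this catches only well-formedness and hygiene problems;
   validity against the DITA grammar still requires the editor or a
   validating parser with the DTDs configured.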


   2. Validation in the DITA Open Toolkit (local builds)
      a. Does the DITA root map build successfully in:
         - Oxygen WebHelp
         - XMetal WebHelp
         - PDF (fop)
         - HTML5
      b. Did the PDF and HTML output include all the content
         referenced by the root map and subordinate maps?
      c. Did all the conditionally tagged content build 
         correctly?
      d. Did all the cross-references between DITA topics resolve
         correctly?
         > For HTML, running a link checker is required.
         > For PDF, a review of the DITA-OT build logs is a must.
      e. For manuals, did all the bookmap metadata generate the
         correct prelims, cover, lists, appendices, and chapter  
         breaks?
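   For the local-build checks above, the DITA-OT build logs are the primary
   evidence, and tallying them can be scripted. A small sketch follows; the
   "[code][severity]" line shape matches what recent DITA-OT releases emit,
   but the specific message codes in the sample log are illustrative, so
   treat the regex as a starting point, not a contract:

   ```python
   import re

   # DITA-OT log lines look roughly like "[DOTX032E][ERROR] message...";
   # codes and severities vary by release.
   LOG_LINE = re.compile(r"\[(?P<code>[A-Z0-9]+)\]\[(?P<sev>ERROR|WARN)\]")

   def summarize_log(log_text):
       """Count ERROR and WARN lines in a DITA-OT build log."""
       counts = {"ERROR": 0, "WARN": 0}
       for match in LOG_LINE.finditer(log_text):
           counts[match.group("sev")] += 1
       return counts

   sample = """\
   [DOTX032E][ERROR] Unable to retrieve linktext from target.
   [DOTJ037W][WARN] No rule for the specified condition was found.
   """
   print(summarize_log(sample))  # {'ERROR': 1, 'WARN': 1}
   ```

   Failing the build when the ERROR count is nonzero is a cheap gate on the
   way toward the automated acceptance testing mentioned above.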

   3. Content audit in DITA editor
      a. Are all the in-topic block elements tagged correctly?
      b. Are all the in-topic inline elements tagged correctly?
      c. Are the <xref>s and <link>s tagged appropriately, i.e.
         with no hard-wired link text?
      d. Are all the images scaled and placed correctly?
      e. Are vector diagrams tagged correctly?
      f. Are the tables structured correctly? 
      g. Are conditional tags applied correctly and consistently?
      h. Are there content gaps in running text?
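   Item (c) in the content audit, xrefs carrying hard-wired link text, also
   lends itself to scripting. A sketch with Python's standard ElementTree;
   the sample topic is invented, and a real audit would need to handle
   <link> elements and scoped keys as well:

   ```python
   import xml.etree.ElementTree as ET

   def hardwired_xrefs(xml_text):
       """Return @href values of <xref> elements that carry literal text.

       An empty <xref> lets the processor generate link text from the
       target's title, which survives retitling; hard-wired text goes stale.
       """
       root = ET.fromstring(xml_text)
       return [xref.get("href")
               for xref in root.iter("xref")
               if xref.text and xref.text.strip()]

   topic = ('<topic id="t"><body>'
            '<p>See <xref href="install.dita">Installing the product</xref> '
            'and <xref href="config.dita"/>.</p>'
            '</body></topic>')
   print(hardwired_xrefs(topic))  # ['install.dita']
   ```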

B. NEGATIVE TESTING
-------------------------------------------------------------------------------------
   1. Validation in the DITA editor
      a. Does the editor automatically report validation errors
         when maps and topics are deliberately broken?
      b. Does the editor automatically report validation errors
         when references between maps, topics, libraries, and 
         keymaps are deliberately broken?
      c. Does the editor automatically suggest fixes for broken
         references? 
      d. Does the editor report broken references across the 
         domain?
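   As a concrete negative test outside the editor, a team can inject a
   broken topicref on purpose and confirm that their tooling reports it.
   The throwaway checker below stands in for whatever the editor or DITA-OT
   would report; the file names are invented:

   ```python
   import pathlib
   import tempfile
   import xml.etree.ElementTree as ET

   def broken_hrefs(map_path):
       """Return @href values in a map whose target files do not exist."""
       map_path = pathlib.Path(map_path)
       root = ET.parse(map_path).getroot()
       return [ref.get("href")
               for ref in root.iter("topicref")
               if ref.get("href")
               and not (map_path.parent / ref.get("href")).exists()]

   with tempfile.TemporaryDirectory() as tmp:
       tmp = pathlib.Path(tmp)
       (tmp / "a.dita").write_text('<topic id="a"><title>A</title></topic>')
       (tmp / "root.ditamap").write_text(
           '<map><topicref href="a.dita"/><topicref href="b.dita"/></map>')
       # Inject the error: b.dita is referenced but never created.
       print(broken_hrefs(tmp / "root.ditamap"))  # ['b.dita']
   ```

   If the injected break sails through a build with no complaint, that is a
   finding in itself: the pipeline will not protect writers in production.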
  

