Subject: Re: [oiic-formation-discuss] What are we validating and why?
2008/6/6 Bart Hanssens <firstname.lastname@example.org>:
>> So - if part of what we're after here is developing that big picture of
>> requirements for a functional scripting that makes page content checking
>> a snap - hell yeah - we need that!
>
> I was considering jOpenDocument or OdfToolkit + Groovy, but I'll take a
> look at CAM :-)
>
> Anyway, checking *rendering* will be the most difficult part to automate
> (this is a major issue for end users) and some parts might be ambiguous
> / not defined in the spec.
> (like Rob said, this calls for an "ACID-test" for ODF)

Three issues there.

1. Should we attempt to compare the look and feel of (something) between
implementations? I'm not yet convinced it's required or feasible.

2. If it is required, it should be in the spec.

3. I object to this group talking about scripting 'glue'; that is very
clearly a 'how'. At most we might specify that tests shall seek visual
comparison for all paragraphs in 1.2 requiring it, and specify how that
comparison is to be made objectively.

If we start to state 'how' tests are linked, then we're doomed: you use
XYZ glue, I use Java, someone else uses Ant.

I think we can identify requirements (perhaps numbered) against which a
test specification can be mapped. So if Rob implements it, I can go from
this group's output (3.4) to a list of test requirements (3.4-1 to
3.4-59) through to absolute test numbers 567-776. That would complete
the test matrix. We just need to imagine the mapping.

What shall be tested.
What may be tested.
What output is expected (hopefully mostly objective!).

Any subjective tests might lie in the nice-to-have domain.

If I make time this weekend I'll try to start something from the draft
1.2 (I'm assuming that's the target) to see if my ideas hold together,
if that helps.

regards

--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk