[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: RE: [oic] Groups - svn-metadata-barth.zip uploaded
"Hanssens Bart" <Bart.Hanssens@fedict.be> wrote on 04/07/2009 03:48:55 AM: > > Personally, I'd rather write the test case first, isolate the assertion > and discuss them at the same time (at the risk of building a test case > on the wrong assertion, but it helps me to get a better overview of the > end result and gives me something to throw at real implementations) > > Others may prefer to get the assertion approved first, and then write up > the test case. > I think you implicitly, or at least mentally, create a test assertion before you write a test case. The ODF standard doesn't test itself. Somehow you need to read the text, decide what requirement is stated, and then design test cases that test that requirement. These are two different skills. For example, suppose we are working on spreadsheet functions and we come across a function that says: "INT(x) returns the nearest integer whose value is less than or equal to x". A test assertion would be: TA_id: spreadsheet-function-int Target: an ODF Spreadsheet Consumer Normative source: ODF 1.2, Part III, Section x.y.z Prerequisite: The ODF Consumer supports Level I OpenFormula Predicate: When present in a table:formula attribute, INT(x) returns the nearest integer whose value is less than or equal to x Prescription level: mandatory Producing this test assertion is straightforward and amounts to interpreting the text of the standard and determining the prescription level. In some cases, like spreadsheet functions, this will be simple. In other cases it will require more effort to figure out how to define the predicate. Developing test cases from a test assertion requires a different set of skills. It is more a QA specialization -- how do you design a test? The craft of test design is to enumerate a small set of test cases that will find the maximum number of errors. So you typically test the main case, an error case, limits and edge cases, etc. 
Of the infinite number of possible test cases, which ones will find the most bugs? So we might end up with several test cases:

assert(INT(1.0) == 1.0)   // a positive integer remains the same
assert(INT(-1.0) == -1.0) // a negative integer remains the same
assert(INT(1.5) == 1.0)   // positive numbers round down
assert(INT(-1.5) == -2.0) // negative numbers round down
assert(INT(0.0) == 0.0)   // zero remains the same
assert(INT(-0.0) == 0.0)  // negative zero is the same as zero

So I see these two steps as being different. To create the test assertions, someone reads the ODF standard very closely, noting all testable provisions in the form of test assertions. In the second step, someone crafts one or more test cases for each test assertion. The advantages of this approach are:

1) Different steps require different skill sets. ODF experts might be able to make fast progress on reading the standard and defining test assertions, whereas QA experts may work faster taking test assertions and crafting test cases for each one.

2) The ODF standard defines conformance for documents as well as for applications. How do we test the conformance of an ODF document? Obviously, creating document test cases does not help here. You can't test a document with another ODF document. But having a set of test assertions related to ODF document conformance would be very useful, since those could be automated by other means, such as an ODF Validator tool.

3) Test assertions fulfill the part of our charter that calls for us to "produce a comprehensive conformity assessment methodology specification which enumerates all collected provisions, as well as specific actions recommended to test each provision, including definition of preconditions, expected results, scoring and reporting".

-Rob