Subject: RE: [tag] Re: TAG Proposal on weak predicates
Patrick: inline <JD>
From: Patrick.Curran@Sun.COM [mailto:Patrick.Curran@Sun.COM]
Sent: Wednesday, July 09, 2008 11:07 AM
To: Durand, Jacques R.
Cc: OASIS TAG TC
Subject: [tag] Re: TAG Proposal on weak predicates

We discussed this briefly during today's teleconference. This allowed me to collect my thoughts on the matter.
As I understand the arguments, you're suggesting that because it may be difficult to test a particular assertion with the tools or test framework that's available, it is therefore appropriate to "weaken" the assertion such that it is testable. Having done so, you argue that it is therefore necessary to tag the assertion as "weak". I strongly disagree with this approach.

<JD> I am surprised, because in the June 25th meeting, where Lynne and I presented such TA examples, you agreed that it was acceptable to use predicates that do not match the normative statement exactly, provided that this "distance" is made explicit in the TA (so that the TA user is aware of it). (Note: I could not post the minutes of that meeting in time, due to vacation and travel, but will do so.)
Whether or not one is able (in the example you give, "willing" seems a more appropriate term) to test a normative requirement in the specification should not influence the derivation/identification of the appropriate assertion. Assertions should reflect the spec exactly.
<JD> I think the "should" is key in your statement! I note that you do not use MUST ;-)
My concern is a pragmatic one: often enough (I have seen this in three specifications I have written TAs for), TA writers are aware of what "testable" means in their context, and they want to - or are asked to - write TAs that comply with this "testability", which is after all an important property of TAs in the definition we use: "... statement of behavior, action or condition that can be measured or tested. Each is an independent, testable statement of a normative requirement ...".

If all normative statements of a spec were unquestionably testable or measurable, a TA definition would not even need to remind us that a TA [predicate] must be testable. Yet every TA definition I have seen mentions "testability". That tells me that many TA writers and TA users believe that some knowledge of the test environments is what adds value to TAs, compared to simply identifying the spec requirements.
(Of course, a spec that contains a significant number of requirements that are untestable or difficult to test is a poor specification, since in practice implementations will tend to differ from each other in these areas. The very process of developing test assertions can help to identify such cases, and if performed early enough in the development cycle, feedback can be provided to improve the spec.)
<JD> Maybe a "significant number", but how about "a few"? Our guideline must also address those few "hard to test" or "untestable" cases, which all specs I have seen so far exhibit. The mere fact that we argue about who should worry about "testability" (TA writers? Test suite writers? Spec writers?) tells me that this is a potentially major hidden issue that is more central to this TA guideline than we thought. I suggest we add a "Testability" section (~1 page) to our guideline.

This section would deal with all the fuzziness behind the notion of "testability", which is precisely what a guideline is about (i.e., best practices rather than exact science). Because testability is a relative concept, dependent on what test constraints are assumed, I suspect that our guideline might have to consider a few cases:
(a) The TA writer assumes that it is always possible to derive test case(s) from the TA, given the right test environment. In other words, there is assumed to be no restriction on the execution of test cases.

(b) The TA writer works under "testability" constraints, and writes TAs with these testing constraints in mind.

I suspect that neither (a) nor (b) is wrong...
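To make case (b) concrete, here is a hypothetical sketch in Python of what a "weakened" predicate and its explicit annotation might look like. All names, the invented normative statement, and the sample message set are illustrative assumptions, not taken from any actual spec or TA discussed in this thread: the exact predicate would quantify over an unbounded set of inputs and is not executable, so the TA tests a finite sample and records that distance in a qualifier.

```python
# Hypothetical sketch: a "weak" predicate for an untestable-as-stated
# requirement. All names and data here are invented for illustration.

def is_accepted(message: str) -> bool:
    """Stand-in for an implementation under test: accepts non-empty messages."""
    return len(message) > 0

# Exact predicate: "the receiver MUST accept ANY well-formed message" --
# not executable as a test, since the set of well-formed messages is unbounded.

# Weakened predicate: check a finite, representative sample instead.
SAMPLE_MESSAGES = ["ping", "hello", "x"]

def weak_predicate() -> bool:
    """Partial check of the normative statement, over a finite sample."""
    return all(is_accepted(m) for m in SAMPLE_MESSAGES)

# The TA records the weakening explicitly, so the TA user is aware of the
# distance between the predicate actually tested and the normative statement.
test_assertion = {
    "id": "TA-EXAMPLE-1",
    "normative_source": "The receiver MUST accept any well-formed message.",
    "predicate": weak_predicate,
    "qualifier": "weak: predicate checks a finite sample, not all messages",
}

print(test_assertion["predicate"]())  # -> True for this stand-in implementation
```

Under this sketch, a test suite writer can still choose how thoroughly to exercise the sample, while the "qualifier" field keeps the weakening visible rather than silently narrowing the spec requirement.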
As I said, the list of assertions should exactly match the normative requirements in the spec. When it comes to testing, judgment calls are always made. Some assertions are not tested at all. Others are partially tested. Some are "completely" tested. In the example you give, the assertion would be partially tested. There is clearly value in annotating an assertion list with information about what is and what is not tested, and about the thoroughness of the testing that is performed. Such annotations seem to me to be a perfect example of "test metadata", and therefore out of scope for our document. In conclusion, I see no need for the interpretation qualifier, at least in this case.

Durand, Jacques R. wrote: