Subject: RE: [tag-discuss] Re: a few questions to kick-off the discussion
David: Inline <JD>

-----Original Message-----
From: david_marston@us.ibm.com [mailto:david_marston@us.ibm.com]
Sent: Sunday, November 19, 2006 9:10 AM
To: tag-discuss@lists.oasis-open.org
Subject: [tag-discuss] Re: a few questions to kick-off the discussion

I think that a TC should only be formed if it intends to produce a design for machine-processable test assertions, which I presume means an XML vocabulary. This doesn't need to be a new vocabulary (or "tag set", if you prefer that term) if an existing one, or a combination of existing ones, can be made to work. I will soon be investigating SBVR, which probably has useful ideas.

<JD> Looking forward to hearing more about this investigation. My 2 cents: personally, I would like to produce/endorse a mark-up that matches whatever TA model the group comes up with. I am aware, though, that this places the "general applicability" bar a little higher: some users may like our TA model, but, because of very specific objectives for representing/processing their TAs, they may not like our mark-up. The objectives behind a mark-up should be clearly stated (a standard source representation for XSLT-driven displays? More than this?) </JD>

>Should we claim applicability to *all* OASIS specs?
>how to cooperate with other orgs (W3C, WS-I?) in order to produce a guide that is consensual beyond OASIS.

I was on the earlier OASIS Conformance TC, which was looking beyond OASIS. I think the scope should be: produce a system for expressing test assertions for any specification whose implementations can be tested by automated-testing software. This means it applies to most OASIS and W3C specs, but not necessarily all, and applies to specs from many other "standards" bodies. The specs do not have to specify software or data; they can apply to anything that can be manipulated by software.

<JD> That seems to be a cautious statement.
An interesting notion is that, indeed, a test assertion cannot be made entirely abstract from the way things will ultimately be tested (at least in my experience), even if it is far less procedural than a test case. It somehow assumes a particular test environment, or at least some architectural traits. Note that I am not happy to say this, as, if true, it would ask more of TA writers than they may be willing to think about. Should the TA guide require a [succinct] description of the test set-up? More to think about. </JD>

>Should the abstract model be "branched" into more concrete sub-models, more appropriate for different types of tests / types of specs?

I think that we should assume that branching will happen. Try to be universal, but when the model reaches a division point, branch.

<JD> Probably the kind of question that can be answered only after hitting a few tough cases. I too would strive for an overarching model, one that gives a single, unifying entry point to the notion of a TA. Having to later add instances/branches to it would not be as embarrassing as having to add a brand-new item to a disparate set of models... </JD>

>Can we identify cases where TAs have been automatically generated from the spec material?

Sure, but also cases where the spec material was automatically generated from the TAs, and cases where the two can be reconciled easily when placed side by side.

>the scope of the deliverable...

I think we need to discuss TA identifiers and how they would be used. In particular, existing work on test case metadata would probably want to intermix with TAs, but not necessarily in a structure as simple as one test case exercising one TA (and referencing just one in its metadata).

<JD> The relationship (and differences) between TA and test case need to be spelled out. We have often seen an N-1 relationship from test case to TA, less commonly the reverse. I'd be interested in looking at samples of test case metadata.
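To make that relationship concrete, a purely hypothetical metadata fragment might look like the following. All element and attribute names here are invented for illustration; this is a sketch of the idea, not a proposal for an actual vocabulary:

```xml
<!-- Hypothetical test case metadata; names invented for illustration only. -->

<!-- Common N-1 case: several test cases each exercise the same TA. -->
<testCase id="tc-041"><exercises taRef="ta-0007"/></testCase>
<testCase id="tc-042"><exercises taRef="ta-0007"/></testCase>

<!-- Less common reverse case: one test case references several TAs
     in its metadata, rather than just one. -->
<testCase id="tc-043">
  <exercises taRef="ta-0007"/>
  <exercises taRef="ta-0012"/>
</testCase>
```

Stable TA identifiers (the `taRef` values above) are what would let existing test case metadata intermix with TAs in either direction.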
Of course, we have to be careful about what we reuse here - test cases are even more averse to generalization... </JD>

.................David Marston, IBM Research

---------------------------------------------------------------------
To unsubscribe, e-mail: tag-discuss-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: tag-discuss-help@lists.oasis-open.org