

Subject: Re: [tag] AI: Grouping


Hi Stephen,

    Thanks for the rewrite of this section.  It addresses most of my concerns -- regarding the explicitness of list logic, the closed nature of lists with respect to adding TAs, and the Test Assertion Document.  This section is longer than before, but it is much more explicit, contains examples, and has good glue to flow between the subsections -- so all of this is goodness.

    I am hoping that a definition for Test Assertion Document will also be placed in the glossary.  

    Below is a discussion about 'workflow of Spec Analysis' here at Sun -- you were asking for a description of this last week.

Thanks, Regards,
Kevin L


Regarding Workflows (background purposes)

You were previously asking about the workflow for analysis (for Java specs at Sun).   There are a few different workflows, typically depending on a few variables:

1.) Whether a specification is original or in revision
2.) The size of a spec (sometimes groups break specs apart and analyze different sections at different times)
.... (a few other variables)
Paul and Victor can comment further about this, as their groups are actually doing this.


In the workflow for an Original Spec:

Typically, a portion of an original specification will carry some hierarchy (e.g. the 'VM portion of the Java Specification', or the 'Util portion of the Java Language Specification').  Within these portions there is also some hierarchy, typically aligned with the natural package organization of the Java language, then further by class/method organization.  TAs (as Sun sees them) are currently associated with this organization, so part of describing a TA is identifying the context where it belongs in the Spec.

The second step is typically to describe (Sun's version of) TAs by identifying all of the relevant fragments of text for each TA.

The third step is to annotate these TAs with appropriate descriptors (metadata).  The types of metadata may vary between different teams (associated with analyzing different portions of specs).   Typically, 'testable/nontestable', 'ambiguous', etc. are the metadata identified/attributed in this step.   Usually, this type of metadata describes whether a test can be written from the TA.    Other types of metadata can be identified later - these can be used for grouping tests (for data analysis) or for configuration/runtime control of tests.

If the workflow is for a Spec Revision:

The first step is typically to track differences in assertions between the original Spec and the Revision Spec:
    a.) Identify existing TAs (unchanged)
    b.) Identify TAs that have changed (significantly) - these become new TAs
    c.) Identify missing TAs (for tracking purposes)
    d.) Identify new TAs

For (a/b), metadata is typically inherited from the analysis of the original specification.  This may be updated, however.
For (d), new TAs, the process is the same as the workflow for an original spec.

This analysis is often fed back to the original specification writers, and specifications can improve based on the analysis.

Sun typically works further with these TAs/analysis when test suites are created.  Tests are associated with TAs, and coverage metrics are reported.  This gives Sun an indication of the 'depth' of testing in a technology area, as well as the 'completeness' of testing.   Metadata associated with TAs can help this reporting as well.



So, in relation to the discussion on Grouping, there is a natural grouping of TAs by location (hierarchy) that happens when assertions are identified.  There is also a minimal set of metadata that is typically assigned when assertions are identified (for purposes of test identification).  And there is some set of metadata that can be assigned either for control or analysis purposes.  Grouping is useful for all of these purposes.
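The two groupings described above -- by location in the spec hierarchy and by assigned metadata -- can be sketched as follows. This is a minimal illustration only; the class, field and tag names (e.g. `TestAssertion`, `location`, `testable`) are hypothetical, not part of any Sun tooling.

```python
# Hypothetical sketch: TAs grouped by spec hierarchy and by metadata.
from dataclasses import dataclass, field

@dataclass
class TestAssertion:
    ta_id: str
    location: str                      # hierarchy, e.g. "util.collections"
    testable: bool = True              # minimal metadata set at identification
    tags: set = field(default_factory=set)  # metadata added later

tas = [
    TestAssertion("TA001", "vm.classloading", tags={"ambiguous"}),
    TestAssertion("TA002", "util.collections", testable=False),
    TestAssertion("TA003", "util.collections", tags={"runtime-control"}),
]

# Natural grouping by location prefix (the spec hierarchy)
util_tas = [ta.ta_id for ta in tas if ta.location.startswith("util.")]

# Grouping by metadata, e.g. for coverage reporting
testable_tas = [ta.ta_id for ta in tas if ta.testable]
```

Both groupings are just filters over the same set of assertions, which is why a single TA can appear in several groups at once.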






Stephen Green wrote:
Here is some candidate text for the Grouping sections (see also in
wiki). It doesn't yet include the AI of providing text for a new
glossary item, 'Test Assertion Document', but I've added a subsection
by that title to the Lists section.

<start/>

3.5 Test Assertion Grouping

When writing test assertions, as the specification is being analysed,
it is usual to group certain test assertions together, either as
having a special status, such as all accredited test assertions for a
given specification, or as sharing a particular characteristic, such
as a common category of test assertion target.

A special kind of grouping is the container of all test assertions
which belong to a particular specification or profile. Here the
container may be the specification document itself if it includes
within it the test assertions to be associated with it.

Two ways to group test assertions are of special note: explicit
listing of test assertions by their identifiers (section 3.5.1), and a
more implicit grouping by a common but not unique property, such as
the tag names or tag values assigned to the test assertions (section
3.5.2).


3.5.1 Lists (Dimensions of Variability)

To explicitly identify a group of test assertions, they can be listed
explicitly by their unique test assertion identifiers. This makes it
clear once and for all which assertions belong to that particular
group and which do not. In addition to such a list, the logical
rationale that determines whether a test assertion is a member of the
list should be stated, at least to help with understanding and
maintenance of the list.

For example:

Test Assertion List Id: A001
List Description: all assertions describing 'Size' requirements
List members: TA001, TA002, ....., TA008

Note that although, in this example, we have avoided enumerating each
and every test assertion identifier by using an ellipsis ('...'), such
methods introduce a possible weakness. It might be overlooked, during
maintenance of the test assertions or of the test assertion list, that
a later test assertion given an ID of TA002a is implicitly rather than
explicitly made a member of this particular list. This may be a
mistake, since the new test assertion TA002a might not relate to the
list description of 'Size' requirements. A list is completely explicit
about every test assertion member only when every member test
assertion is listed explicitly by its test assertion identifier.

For example:

Test Assertion List Id: A001
List Description: all assertions describing 'Size' requirements
List members: TA001, TA002, TA003, TA004a, TA004b, TA005, TA006, TA007, TA008

Such a list may be regarded as 'fixed', 'frozen' or 'closed'. A test
assertion added later with identifier TA005a, if it is to be included
in this list, would require a change to the list (with possible
version or change control implications) or the creation of a new list
(with a new list identifier, if a list identifier is included with the
list).
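The closed nature of such a list can be sketched as follows. This is an illustrative assumption, not part of the guidelines themselves; the structure and function names are hypothetical.

```python
# Hypothetical sketch: a fully explicit ('closed') test assertion list.
SIZE_LIST = {
    "id": "A001",
    "description": "all assertions describing 'Size' requirements",
    "members": ["TA001", "TA002", "TA003", "TA004a", "TA004b",
                "TA005", "TA006", "TA007", "TA008"],
}

def is_member(ta_id: str) -> bool:
    # A test assertion added later (e.g. TA005a) is NOT implicitly
    # included; adding it requires a versioned change to the list itself.
    return ta_id in SIZE_LIST["members"]
```

Because membership is an exact lookup against the enumerated identifiers, a newly authored TA005a simply fails the check until the list is deliberately revised.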


Test Assertion Document

A list of test assertions related to either conformance or
interoperability testing will need special care with respect to
version control and change management; it will therefore need to be
clear what criteria are used to determine which test assertions are
members of the list and which are not. The container of all test
assertions related to a given specification or profile, say, is a
special example of a most explicit list, although here the method used
to define such a list may involve inclusion of the test assertion
itself (rather than just its identifier) within a special document or
package. One way to create such a list is to include all such related
test assertions within a document, which we might call a 'Test
Assertion Document'. Other synonymous terms might be 'Test Assertion
List', 'Specification Analysis' or 'Test Assertion Set'. Note that the
container of this complete set of test assertions might instead be the
specification or conformance profile document itself, when test
assertions are included, say, within the text of the actual
specification or profile.


3.5.2 Tags (Test Assertion Metadata)

Another way to define a group of test assertions is to use a
non-unique property of such assertions rather than just using their
unique identifiers in a list or containing the test assertions
themselves in a document. To this end, test assertions may be assigned
metadata in the form of non-unique 'tags' or 'labels'.

For example, test assertion 'widget-TA100-7' might be tagged as
'Size-Property-Descriptive':

TA id: widget-TA100-7
Target: widget
Normative Source: specification requirement 104
Predicate: [the widget] is from 5 to 15 centimeters long in its longer
dimension.
Prescription Level: medium-size:mandatory
Tag: Size-Property-Descriptive

Then it might be included in a list of test assertions related to
'Medium Size' requirements, along with other assertions tagged, say,
'Size-Related', but not with test assertions tagged, say,
'Small-Size-Related'.

Test Assertion List Id: A002
List Description: all assertions involved in describing 'Medium Size
Widget' requirements
List members: All test assertions with Tag 'Size-Property-Descriptive'
AND Tag 'Size-Related' AND NOT Tag 'Small-Size-Related'
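The tag logic in list A002 amounts to a boolean filter over each assertion's tag set. A minimal sketch, assuming hypothetical assertion identifiers and representing tags as Python sets:

```python
# Hypothetical sketch: list A002 membership from tag logic
# (Tag 'Size-Property-Descriptive' AND Tag 'Size-Related'
#  AND NOT Tag 'Small-Size-Related').
def matches_a002(tags: set) -> bool:
    return ("Size-Property-Descriptive" in tags
            and "Size-Related" in tags
            and "Small-Size-Related" not in tags)

assertions = {
    "widget-TA100-7": {"Size-Property-Descriptive", "Size-Related"},
    "widget-TA100-8": {"Size-Property-Descriptive", "Size-Related",
                       "Small-Size-Related"},
}

members = [ta for ta, tags in assertions.items() if matches_a002(tags)]
```

Note that membership here is computed from the tags at the moment the filter runs, which is exactly why this kind of list is more implicit than an enumerated one.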

This we have called a 'List' but it is in fact defined rather more
implicitly than if every member were listed by its identifier, as
described in the previous section (section 3.5.1). In fact a more
explicit and well-defined list might combine both tags and identifiers
to group the assertions:

For example:

Test Assertion List Id: A002
List Description: all assertions involved in describing 'Medium Size
Widget' requirements
List Criteria: All test assertions with Tag
'Size-Property-Descriptive' AND Tag 'Size-Related' AND NOT Tag
'Small-Size-Related'
List members: TA001, TA002, TA003, TA004a, TA004b, TA005, TA006, TA007, TA008

So a tag is a further, optional test assertion element useful in
grouping test assertions. It may sometimes be useful to create tags as
name-value pairs. For example, tagging a test assertion:

Tag: WidgetSize=Medium

This would allow all test assertions related to requirements for
medium sized widgets to be grouped to facilitate, say, testing of just
the medium sized widgets or a conformance profile relating just to
medium-sized widgets.
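Name-value tags can be filtered on both the name and the value. A small sketch, under the assumption that tags are stored as plain 'name=value' strings (the TA identifiers here are invented for illustration):

```python
# Hypothetical sketch: name-value tags such as 'WidgetSize=Medium'.
def parse_tag(tag: str):
    # Split 'WidgetSize=Medium' into ('WidgetSize', 'Medium');
    # a bare tag with no '=' yields a value of None.
    name, _, value = tag.partition("=")
    return name, value or None

tagged = {
    "TA010": ["WidgetSize=Medium"],
    "TA011": ["WidgetSize=Small"],
    "TA012": ["WidgetSize=Medium", "Color=Red"],
}

# Group the assertions relevant to medium-sized widgets.
medium = [ta for ta, tags in tagged.items()
          if any(parse_tag(t) == ("WidgetSize", "Medium") for t in tags)]
```

The same tagged set could equally be filtered on 'WidgetSize=Small' for a different conformance profile, without changing any assertion.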

Several such filters can be applied to the same set of assertions and
any given assertion can appear in more than one grouping.

Special consideration, when using tags for grouping, is to be given to
the stages in the workflow of test assertion authoring, maintenance
and subsequent use at which changes might be made to tags and their
values. Especially, there may be the addition of new tags, perhaps by
adding metadata which is separate from the documented test assertion.
If metadata for test assertions is defined and maintained separately
from the test assertions, it may be subject to an entirely different
set of version and change control rules and methodologies. In this
case, a distinction might need to be made between lists based on tags
which were part of the original test assertion and those whose list
membership might differ from that which was known or expected at the
time the list was defined.

For example, consider a list defined using tags but without explicitly
listing test assertion identifiers:

Test Assertion List Id: A002
List Description: all assertions involved in describing 'Medium Size
Widget' requirements
List Criteria: All test assertions with Tag
'Size-Property-Descriptive' AND Tag 'Size-Related' AND NOT Tag
'Small-Size-Related'

If TA004a is originally tagged 'Size-Related' but the workflow allows
it to be subsequently tagged 'Small-Size-Related', then there will
need to be rules defined which determine whether the test assertion is
still a member of List 'A002'.
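One such rule might be a drift check: re-evaluate the tag criteria against the current tags and flag any assertion that no longer qualifies, leaving the version-control policy to decide what happens next. A sketch under those assumptions (all names here are hypothetical):

```python
# Hypothetical sketch: detecting membership drift after a tag change.
def a002_criteria(tags: set) -> bool:
    return "Size-Related" in tags and "Small-Size-Related" not in tags

frozen_members = {"TA003", "TA004a"}   # list as originally materialised

current_tags = {                        # tags after later workflow steps
    "TA003": {"Size-Related"},
    "TA004a": {"Size-Related", "Small-Size-Related"},  # re-tagged later
}

# TAs whose current tags no longer satisfy the list criteria; a
# versioning rule can then decide whether they remain members.
drifted = {ta for ta in frozen_members
           if not a002_criteria(current_tags[ta])}
```

Here TA004a is flagged because its later 'Small-Size-Related' tag violates the criteria under which it originally joined the list.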



<end/>


  


