xslt-conformance message



Subject: Revised Review & Submission Policy docs


All comments on these two documents are incorporated.  Here's a final pass
for comments.  Look especially closely at the first four Guidelines in the
Submission doc; two are new, added based on David's comments.

I've included the text of both docs in the body of this email (now in
HTML format) and attached the .html files and the .css file.  If you place
the attachments in a single directory (they must be in the same directory,
or the .html files will not be able to find the .css style sheet) and open
the .html files in a browser, they will be formatted by the .css style
sheet.  Any last comments on content or style?

***********Submission doc****************
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

<html>
<head>
	<title>Submission Policy</title>
	<link rel="stylesheet" title="conformance-style"
href="conformance-style.css">

</head>

<body>
<p class="title">Submission Policy </p>

<p class="subtitle">Introduction</p>

Since the World Wide Web relies on technological interoperability, both
vendors and product users need to test product conformance to the W3C
specifications.  The objective of the OASIS XSLT/XPath Conformance
Committee ("Committee") is to develop a test suite for use in assessing
the conformance of XSLT processors to the technical specifications
contained in the Recommendations of the W3C (called the "Specification"
in this document).  The full text of this Submission Policy and its
companion, the Review Policy, is available online at <a
href="http://www.oasis-open.org/committees/xslt">www.oasis-open.org/committees/xslt</a>.
The Committee welcomes submissions of test cases from all vendors and
other interested parties.  Tests will be considered for inclusion in the
test suite (according to the Review Policy) on a case-by-case basis.
The Committee will work toward thorough coverage by accumulating
submitted tests.  The quality and comprehensiveness of these test
submissions will determine how robust the test suite will be.

The Committee encourages all test submissions.  The purpose of these
Guidelines is
to inform Submitters of what the test suite is meant to do and which tests
are more likely to be included in the test catalog, given these design
criteria.
The Committee also encourages Submitters to prepare follow-up submissions,
including repairs to individual tests and significant test expansions.

<p class="subtitle">Submission Guidelines</p>

The first seven Guidelines define the scope of the test suite.  <p></p>

<span class="section">1. Submitters' tests should test only a single simple
requirement in the
Specification.</span><br>

In a comprehensive test suite, each testable assertion in the Specification
should be tested
independently of every other assertion, to the extent possible. These
assertions are often the
result of taking one or two key sentences from the Specification, but
considering effects and
conditions described at various other places in the Specification.
Citations (see Guideline 3)
point to portions of text that may not be as precise as testable assertions,
while a
purpose statement for the test (see Guideline 2) can say exactly what is
being singled out in
that test case.

If a test follows the first Guideline above, one failure points out a
single instance where the processor does not conform to the
Specification (called "non-conformance" in this document).  Failures of
several test cases may point to a single non-conformance if one can
readily identify the common element of all the failing cases.  One
non-conformance may cause the failure of dozens of tests that involve
various invocations of the non-conforming situation.<p></p>

<span class="section">2. Submitters' tests must be accompanied by a simple
(not compound) statement
of the purpose of the test.  </span><br>
Each test must have just one purpose, and, ideally, that purpose should be
unique within the test suite.
The purpose statement should state the requirement in the Specification,
whether this requirement is found
in a single testable assertion, or as several statements within the
Specification.  The purpose
statement helps Reviewers and test labs to understand the Submitter's
intention in submitting a
particular test.
<p></p>


<span class="section">3. Submitters' tests must include at least one
citation pointer to the
Specification.</span><br>
Recommendation citations are in the form of XPath expressions to testable
statements in the XML
working group source documents producing the HTML W3C documents.  [shouldn't
this be: "... to testable
statements in the Specification." ?]  Many requirements are contained in a
single testable
statement in the Specification.  There should be one citation pointer for
each statement in the Specification
that make up a requirement.  Requirements that are found in two (or more)
sentences in the Specification
should include two (or more) citation pointers.

<p></p>
<span class="section">4. The tests should target specific XSLT/XPath
language issues.</span><br>
The tests should be aimed at the language features and versions that are
included in the Specification.  Issues that cause parser errors or that
involve other W3C specifications that are out of scope for the current
test suite should not be included.  Tests involving parser issues or
errata of the Specification may not be run or included even if they are
submitted.  Tests whose point is to reveal mistakes in parsing the input
or in serializing the output should be excluded.<p></p>

<span class="section">5. The tests should target "must" provisions of the
Specification, not
"should" provisions.</span><br>

The Specification contains some assertions (or requirements) that are
mandatory ("must") and
some that are optional ("should").  For version 1.0 of the test suite,
the Committee is concerned with "must" requirements.  "Should"
provisions are at the discretion of the implementer.  While the Committee
welcomes submissions of all kinds, those testing "should" provisions may
not be included in final test results.  <p></p>

<span class="section">6. A test should target only explicit processor
choices, not unspecified
areas of the Specification.  </span><br>

There are areas of the Specification that do not specify what a
processor needs to do, so it is impossible to test what a processor
actually does in those areas.  In other areas the processor is given a
choice regarding how it behaves.  The remaining areas are unconditional
required behaviors.

The suite will differentiate test cases based on choices made by the
Submitter.  The Reviewers need to know if a test corresponds to a
particular choice made available to the processor.  (These choices will
be enumerated in the information included with the catalogue document
model.)  The completed test suite will test that portion of the
Catalogue of Discretion that is deemed "testable" and where a question
or two can clearly elicit the choice made by the developer.<p></p>

<span class="section">7. Later versions of the test suite may allow a wider
range of tests.</span><br>

Although, as noted in Guidelines 1 and 4 above, Version 1.0 of the test
suite will include tests of single Specification assertions on in-scope
language issues, later versions of the test suite may broaden the scope of
issues covered and may include a wider range of tests. <p></p>

<span class="section">8. The Committee reserves the right to exclude any
test submitted.</span><br>  Tests
submitted to Version 1.0 of the test suite may be rejected if they do not
comply with these guidelines.

Please see the Review Policy for a full description of how the Committee
will judge the eligibility of a test (<a
href="http://www.oasis-open.org/committees/xslt">www.oasis-open.org/committees/xslt</a>).  <p></p>


<span class="section">9. In those instances where a Submitter has a test or
tests within its overall
submission whose creator(s) will be making a separate submission, the
Submitter should filter out those tests so they are not submitted twice.
</span><br>

The Submitter should send the tests it created, plus any tests others
created that are both 1) free and clear for such use and 2) not, to the
Submitter's knowledge, already in the Committee's possession.<p></p>


<span class="section">10. The tests will become public. No royalties will be
associated with their
use.</span><br>

The Committee intends to retain the personal names of
Submitters so they may get public credit for their work.

</body>
</html>

****************end***********
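
To make Guidelines 1 through 3 concrete, here is a rough sketch of what one
submitted test and its catalog entry might look like.  This is only an
illustration: the element and attribute names are invented, the Committee's
actual catalogue document model is still being defined, and the citation
XPath shown is a made-up pointer into the Specification's XML source, not a
real one.

<!-- Hypothetical catalog entry for a single submitted test -->
<test-case id="value-of-string-001" submitter="Example Corp">
  <!-- Guideline 2: one simple (not compound) purpose statement -->
  <purpose>xsl:value-of creates a text node containing the string value
  of the selected node.</purpose>
  <!-- Guideline 3: a citation pointer, expressed as an XPath expression
       into the XML source of the Specification (expression invented) -->
  <spec-citation spec="XSLT"
      xpath="/spec/body/div1[@id='section-Generating-Text']/p[1]"/>
  <input-file>value-of-string-001.xml</input-file>
  <stylesheet-file>value-of-string-001.xsl</stylesheet-file>
  <expected-output>value-of-string-001.out</expected-output>
</test-case>

Because the purpose names exactly one assertion (Guideline 1), a failure of
this test points to a single non-conformance rather than a tangle of them.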

***********Review doc********************
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

<html>
<head>
<title>Review Policy</title>
<link rel="stylesheet" title="conformance-style"
href="conformance-style.css">
</head>

<body>
<p class="title">Review Policy </p>

Reviewers should refer to the submission guidelines in the Submission
Policy, available online at <a
href="http://www.oasis-open.org/committees/xslt">www.oasis-open.org/committees/xslt</a>.
The tests in version 1.0 of the Conformance Test Suite should fail when
the processor is non-conformant with a single in-scope "must" provision
of the Specification (see Submission Policy Guideline 1).  All accepted
tests are intended to test conformance to the Specification on the
basis of output.  To the extent possible, Committee Reviewers should
remove tests whose reference result constitutes interpretation of the
Specification, unless the test is cataloged with a Committee-approved
gray-area designation.  This will result in equal application of the
Review Policy criteria by all involved, thus producing a consistent,
quality work product.

Differences between Submitter and Reviewer output will be examined by
the Committee, which will reach consensus to 1) accept the test, 2)
reject the test or 3) defer deciding on the test while the issue is
forwarded
to the W3C for clarification.  (See Review Procedure 6 for more details.)

<p class="subtitle">Review Procedures</p>

<span class="section">1. At least two Reviewers will check off on each test.
Only the assessment
of a single member is required for the test to be included in the draft
release.</span><p></p>

<span class="section">2. Ineligible tests (by definition) should be
rejected.</span><br>
Eligibility is the quality by which a candidate test is judged to
determine whether it ends up in the test suite as published by the
Committee.

<p></p>
<span class="section">3. Eligibility should be judged by the following:
</span><br>

<p class="subsection">3.1 The accuracy of the test. <br>
<span class="text">Accuracy of a test is determined by a judgement by the
Reviewer.  Accuracy is defined
as the extent to which the test case actually tests what the submitter
states the test
case tests.  Accuracy is measured against the baseline of the cited parts of
the
Specification.  If it does not match, or only partially matches, the test
should
be considered inaccurate.
This determination is made by the Reviewer's interpretation of the
Recommendation, and
if necessary, the opinion of the Committee as a whole, and if necessary, the
definitive
assessment by the W3C Working Group. </span></p>
<p class="subsection">3.2 The scope of the test. <br>
<span class="text">See the Submission Policy for a definition of the scope
of the test suite.  </span></p>
<p class="subsection">3.3 The clarity of the test. </span><br>
<span class="text">Clarity of a test is a determination of whether the
aspect being tested is clearly
described with the anticipated results acceptably explained.</span></p>
<p class="subsection">3.4 The clarity of aspect of the Specification being
tested.<br>
<span class="text">The Test Suite aims to test parts of the Specification
and errata that aren't vague.</span></p>
<p class="subsection">3.5 Should/shall use in the Specification.<br>
<span class="text">This is the same as "must" and "should", discussed in the
Submission Policy.  The test
must clearly address a requirement in the Specification that is a "shall"
requirement and
not a "should" requirement.</span></p>
<p class="subsection">3.6 Determination of whether a test testing a
discretionary item.<br>
<span class="text">The Committee has
developed a Catalogue of Discretion items, which includes a listing of all
options given to
developers of the technology in the Specification.  See the website for a
list of
discretionary items (www.oasis-open.org/committees/xslt).  Not all
discretionary items are
testable.</span></p>
<p class="subsection">3.7 The simple or compound nature of the test<br>
<span class="text">Simple and compound tests are described in the Submission
Policy.</span></p>

<span class="section">4. Judge each eligible test through a
process.</span><p></p>

<span class="section">5. Run each test through multiple processors.
</span><br>
<span class="text">Although there is no reference implementation, the
Committee will form consensus on which of the
prominent processors to use.  The baseline is unanimity of their results, as
reduced to infoset-equivalence.</span><p></p>

<span class="section">6. Differences between infoset-equivalence of
Submitter and Reviewer output will trigger examination by
the Committee. </span><p></p>

<span class="section">7. The Committee will then reach consensus opinion to
accept the test, reject the test, or defer deciding
on the test while the issue is forwarded to the W3C for clarification.
</span><br>

A test can be rejected by Reviewers, even if all prominent processors give
the same result, when the test is not a true conformance test.  If
Reviewers think a test is good but different reliable XSLT processors give
different results, the issue may be the wording of the Specification,
processor bugs, or may simply remain unclear.

There are several possible (non-exclusive) actions:

<p class="subsection">7.1 Reject the test and update errata and errata
exclusion<br>
<span class="text">The test would then be excluded from the published
collection.  The Test Suite control file
dictating which submitted tests are excluded from the published collection
is updated.
Furthermore, issuance of an erratum actually gives us a way to include the
test case, subject
to filtering out at the final stage of "rendering" a suite for a particular
processor.
</span></p>
<p class="subsection">7.2 Reject the test with advice to go to W3C.<br>
<span class="text">In this case, the Submitter thinks the test is accurate
and the Committee agrees the test
is not accurate and the Recommendation is clear enough that we needn't
bother the W3C with
an interpretation issue.  Rejection requires consensus of the Committee.

This scenario begins when the Submitter looks at the Committee report and
sees that a particular
case submitted was excluded and writes to ask why.  The Reviewer will
respond to explain. The
response includes reference to the W3C's mail alias for questions about the
Specification.</span></p>
<p class="subsection">7.3 The test case is forwarded to W3C for
clarification<br>
<span class="text">If the above options do not avail, the Committee can
forward the test to the W3C for clarification.  </span></p>
<p class="subsection">7.4 Additionally, the Committee may wish to
accommodate external comment from the community at large.</span></p>
<p class="subsection">7.5 The Committee will publish a consensus opinion of
response to comment with justification
from Recommendation (not just precedence of how a processor has acted).
</span></p>

<span class="section">8. During the testing process, Reviewers will do the
following: </span><p></p>

<p class="subsection">8.1 A Reviewer will report to the list the hierarchy
of tests undertaken for comparison with
multiple processors.  </span></p>
<p class="subsection">8.2 A tally of tests will be tracked on a visible web
page for the Committee.</span></p>
<p class="subsection">8.3 Reviewers report that a batch of tests in a given
hierarchy has been examined, including a summary
of findings of tests not to be included in the resulting suite. </span></p>
<p class="subsection">8.4 A given hierarchy is not considered complete for a
final
release until reports from at least two members have been
submitted.<br>  <span class="text">A given hierarchy may be included in a
draft only
after at least one member's report is submitted.</span></p>

<span class="section">9. During the testing process, the Committee will
invite public review: </span><p></p>

<p class="subsection">9.1 An initial suite of a very small set of files will
be used to test procedures and scripts
and stylesheets. </span></p>
<p class="subsection">9.2 The Committee will publish draft work
periodically, starting with very small set. </span></p>
<p class="subsection">9.3 The Committee will solicit comments on usability
of the product. </span></p>
<p class="subsection">9.4 The Committee will publish a disposition of
comments. </span></p>
<p class="subsection">9.5 The Reviewers will continue reviewing test cases
until all categories are covered.</span></p>

</body>
</html>

*************end******************
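
Regarding Review Procedures 5 and 6: "reduced to infoset-equivalence" means
that processor outputs are compared on their information content rather
than byte-for-byte.  As a small illustrative sketch (the result content
below is invented), these two serializations differ lexically but carry the
same infoset, so they would count as the same result; only a difference
that survives this reduction would trigger examination by the Committee.

<!-- Result from processor A (hypothetical test output) -->
<out a="1" b="2"></out>

<!-- Result from processor B: different attribute order, attribute quoting,
     and empty-element form, but the same infoset -->
<out b='2' a='1'/>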

***********conformance-style.css****************
body   		{background-color: #FFFFFF;
			font-family: "Times New Roman";
			font-size: 12pt;
			color: #000000;
			margin: 0px;  	}
a:link 		{color: #0000FF	}
a:visited	{color: #3333FF	}
a:hover		{color: #FFCC00	}
a:active	{color: #FF0000	}
p.title		{color: #000000; 
			 padding-top: 50px;
			 padding-bottom: 10px;
			 font-family: "Times New Roman";
			 font-size: 18pt; 
			 text-align: center	}
p.subtitle	{color: #000000; 
			 padding-top: 18px;
			 padding-bottom: 0px;
			 font-family: "Times New Roman";
			 font-size: 14pt; 
			 text-align: left	}
span.section	{color: #000000; 
			 font-family: "Times New Roman";
			 font-size: 12pt;
			 font-weight: bold; 
			 text-align: left	}
p.subsection	{color: #000000; 
			 padding-top: 0px;
			 padding-bottom: 0px;
			 padding-left: 20px;
			 font-family: "Times New Roman";
			 font-size: 12pt;
			 font-weight: normal;
			 font-style: italic; 
			 text-align: left	}			 
span.text		{color: #000000; 
			 font-family: "Times New Roman";
			 font-size: 12pt;
			 font-weight: normal;
			 font-style: normal; 
			 text-align: left	}

