Subject: Re: Requirements
> I'm not sure what you mean here. I thought that the configuration instance
> would pull out of the normative collection that subset wherein all tests
> would be positive and that a negative result in any test would indicate
> non-compliance with the configuration's claim for compliance for a
> particular area.

Yes, it will. Still, a user may want to know which tests are negative or implementation-defined; a user may just want to run them separately, or something like that. And of course this information is provided to the user under our approach.

> > 6) The expected output to this test (if any).
>
> Do you mean prose, here, or actual markup?
>
> If we had transformation tools that supported XInclude we could point to
> the result markup files as if they were text files and have their content
> displayable.

Yup, that will work.

> > 7) How do I run the tests?
>
> Is this David's scenario concept?

Yup. Also, for the future (and I can't think of a scenario just yet), there could be tests that may not fit that scenario.

> > 7) Description of the tests.
>
> Oh, is this the prose?

The one-liner in the test (if any) and the description in the catalog file. Of course, this is also already provided under our approach.

> > 8) Any secondary files needed by the tests.
> > 9) Any known problems with any of the tests.
>
> Do you mean collectively or specifically with the given test. Would a test
> that had a problem be included in the suite?

Both. Some problems (perhaps a missing character, or something simple that may not keep a test from running or render it non-conformant) may be a useful thing for a user to know. Let's say that after we release the suite we discover that the expected output should be "Test Passed" but the actual output is "Test pased". Will we remove that test altogether? We can just report that as a problem instead of removing the test. Again, that's just an example (perhaps a bad one).

> > 10) How do I interpret the results? (are they self-explanatory?).
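The XInclude idea mentioned above could be sketched roughly as follows: a wrapper document that pulls an expected-output markup file in as plain text, so its content is displayable rather than merged into the document tree. The file names and the wrapper element are invented here purely for illustration; only the `xi:include` mechanics (`parse="text"`, `xi:fallback`) come from the XInclude spec.

```xml
<!-- Hypothetical wrapper document; element and file names are invented. -->
<expected-results xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- parse="text" makes the included markup show up as literal text,
       so the result file can be displayed "as if it were a text file" -->
  <xi:include href="results/test-042-expected.xml" parse="text">
    <!-- Fallback content if the result file is unavailable -->
    <xi:fallback>Expected output not available.</xi:fallback>
  </xi:include>
</expected-results>
```

Any XInclude-aware transformation tool should be able to expand this without treating the included markup as part of the suite's own structure.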
> > 11) How do I report problems with any of the tests?
> > 12) Are there any holes in the test suite? (any areas that
> > are not addressed by any test).
> > 13) When was this test written?
> > 14) Who wrote this test?
>
> Could this be private?

Absolutely, if the author so wishes. I for one don't mind my name being associated with a test; after all, I wrote it.

> > 15) Any test that depends on another test.
>
> I don't think we will have this condition ... each test should (I think) be
> independent.

Me too, but if the situation arises and can't be avoided, then the user may want to know that. I can think back to the COBOL, SQL and Fortran test suites, where that situation happened.

> > 16) Any test output that may be an input to another test.
>
> I think we should can the input to all tests and not have any input be
> dependent on the result of another test ... another test's incorrect result
> might render the given test's result meaningless. If all inputs were
> canned, all results should be definitive.

I agree, but then again, can we guarantee that this will always be the case? If by some unavoidable chance the situation comes up, the user should know about it.

> > 17) Is there a test harness available?
>
> In the preamble, perhaps, but not the collection.

OK

> > 18) Which information do I need to provide to any of the tests?
> > 19) Any optional tests? Non-applicable tests?
>
> > I realize that some of these items (such as 11) may not quite belong
> > in this list, but it is still an issue for the committee.
>
> Yes, and would belong in the preamble or prose associated with the suite as
> a whole.

Yup.

Will comment on the rest later .....
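Several of the items discussed in this thread (description, author, date, known problems, dependencies, canned inputs) are per-test metadata that could live in the catalog file. A rough sketch of what one catalog entry might look like follows; every element, attribute, and value here is invented for illustration and is not the committee's actual schema.

```xml
<!-- Hypothetical catalog entry; names and values are invented. -->
<test id="string-001" author="J. Doe" date="2001-06-15" scenario="standard">
  <description>One-line description of what this test exercises.</description>
  <!-- Canned input: never the output of another test (item 16) -->
  <input file="inputs/string-001.xml"/>
  <expected file="results/string-001-out.xml"/>
  <!-- Known problems are reported here rather than the test being removed (item 9) -->
  <known-problem>Expected output reads "Test pased" instead of "Test Passed".</known-problem>
  <!-- Empty by convention, since tests should be independent (item 15) -->
  <depends-on/>
</test>
```

The author and date attributes could simply be omitted for a test whose author prefers to stay private, per item 14.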