

Subject: Re: [oiic-formation-discuss] TC formation proposal.


On Tue, Jul 22, 2008 at 10:55 PM, Dave Pawson <dave.pawson@gmail.com> wrote:
> 2008/7/22 Peter Dolding <oiaohm@gmail.com>:
>> The added 12 deals with my major issue.  Back talk is required.  I
>> would change the "A report" bit.  We really need to upstream them as
>> soon as we have them proven to be processed by the TC at the next meeting
>> or main TC mailing list in case they are really quick to resolve.
>> Reports instead of a report.  With formal meeting reports ordered
>> from most critical to least.
>
> Don't disagree. What I don't want to do is tell the TC how to communicate
> with the main TC.
>
> Another option would be to require some sort of regular liaison between
> the two committees, "to resolve conformance and compliance issues"
>
> Is that too vague? Any suggested improvements please?
>
> None of which would resolve the issues the TC were dealing with
> until the next release of the main standard.
>
Some cases could be resolved before the next main standard.  For example, if
we find two or more parties using extensions we don't yet understand, and
they turn out to be the same thing just given different names, alignment
between them might happen just because the spotlight is put on them.

Other cases could be resolved by sending clearer documentation out to implementers.

I am coming to the nasty feeling that we are going to have to ask the main
TC which things we can ask them to deal with ASAP, and which things have to
wait until the next standard release to be fixed, i.e. where they are
prepared to sit down with implementers and sort it out.  So this is going to
need a really nasty clause.

>
>
>>> The charter says the TC will not produce software.
>>> As a test implementer, any TC member is quite free to use
>>> the output of the TC to write software?
>>> Just that software will not be a main deliverable of the TC!
>>> The TC needs to produce good test specifications so that
>>> a test writer can use them.
>>
>> That is the point that is not clear.  Under the current wording, any TC member
>> could risk being picked on over creating automated or other forms of
>> test suites.
>
>
> Not as I see it Peter. You are part of this group, we have no right
> to shout at you for what you do in the evening?
> I don't see that as an issue.

I don't want to see good coders locked out because we got the rules
wrong.  If a member of the TC wants to provide a test case they must
not be blocked by the rules.  Their test case could be really handy to
us.
>
>>
>> The test implementers bit needs to be made clear: as part of their job of
>> testing applications they might be coding automated tools.
>
> That is later, could be the TC will specify that such
> tools are needed. Not something for the charter, which only
> gets the TC started.
>
>
>>
>> Now down to implementers tools and end user testing(acid test).
>>
>> The goal of all implementer tools is a complete test of all parts of the current
>> standard, seeking to provide a detailed display of pass, failure or
>> otherwise.   Clear report output with references to the standard for
>> all failures.
>
> Looks like we have a difference there. My view on terms:
> A test tests some part of ODF.
> A tool helps me run that test (or a group of tests).
> Test results are produced by running a test or group of tests.
> The test identity should link it back to the part of ODF being tested.
>
> Do you have different definitions? Or was that just to separate
> these types of test from user tests (acid tests)
>
>
>>
>> end user testing(acid test) Normally something with percentage or a
>> graphic displayed to user with the means to access deeper to get the
>> developer information.  This is also for speed targeted as the worst
>> causes of incompatibility existing.  Less detailed coverage equals a lot
>> shorter time to complete the test.  Due to acid tests being simple to
>> use we could ask everyone to like visit the TC site download the test
>> and post a report back.  As a form of random sampling.  Even doing it
>> as like a slashdot thing.  Where a full test suite does not work.
>
> Testing my understanding.
> 1. Run by an end user.
> 2. Must be fully automatic.
> 3. Targeted at known interop weaknesses (we don't have such a list yet).
> 4. Must run in less than .... ten minutes?
> 5. Presents a summary of passes and fails, possibly as a percentage.
> (No graphics please, or at least graphics plus an accessible alternative)
>
> I'm missing the purpose of the tests. Is it just interoperability?
>
The acid test is more about surveying the current state of play with as many
people as possible: making sure that what implementers and we do in
controlled testing matches up with the real world, as well as showing users
the existing problems.  Nice graphics are kind of the bait to get people to
run it and report back.  That way we learn whether a broad number of client
machines, with real-world combinations of software and so on, are causing
any breaches of the standard we were not aware of.  It could be something
like a user has installed extension X in OpenOffice and it is breaking
compatibility at Y.  The implementer will be saying the program works fine,
and we might be too because it is passing our tests, yet the broad
real-world test reports back that there is an issue hiding there to be found.
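
To make that concrete, here is a rough sketch of the kind of runner I have
in mind (Python; every check name and ODF clause below is invented purely
for illustration, not a real test): fully automatic, quick, a percentage
score the user can post back, and every failure pointing at the part of the
standard it covers.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class InteropCheck:
    test_id: str        # identity linking the test back to the part of ODF tested
    odf_clause: str     # clause reference into the ODF specification (made up here)
    run: Callable[[], bool]

# Placeholder checks -- real ones would target known interoperability weak spots.
CHECKS: List[InteropCheck] = [
    InteropCheck("table-cell-span", "ODF 1.1 clause 8.1.3", lambda: True),
    InteropCheck("list-style-inherit", "ODF 1.1 clause 14.10", lambda: False),
]

def run_acid_suite() -> None:
    results = [(check, check.run()) for check in CHECKS]
    passed = sum(1 for _, ok in results if ok)
    print("Score: %d%% (%d of %d checks passed)"
          % (100 * passed // len(results), passed, len(results)))
    for check, ok in results:
        if not ok:
            # each failure is reported with its reference back to the standard
            print("  FAIL %s -- see %s" % (check.test_id, check.odf_clause))

if __name__ == "__main__":
    run_acid_suite()

The score line is what the user would paste back to us, so the survey
reports still arrive in a form we can collate.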

The implementer one, by contrast, is set up to test everything and to be
operated by a person you don't have to bait into using it.  That is the
price of baiting users: they will not tolerate a test that takes too long to
run.  Also, implementers will want to set up targeted test runs that acid
test users will never be allowed to do.
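
The difference could come down to how the one shared registry of tests is
filtered; something like this sketch (tag and test names are again only
illustrative): the acid build ships a small fixed subset, while the
implementer tool lets them select targeted runs.

# One registry serving both audiences (all names and tags invented).
ALL_TESTS = {
    "table-cell-span":    {"tags": {"acid", "tables"}},
    "list-style-inherit": {"tags": {"acid", "lists"}},
    "full-style-matrix":  {"tags": {"styles"}},  # too slow to put in front of end users
}

def select(tags=None):
    """Acid users always get the fixed 'acid' subset; implementers may pass any tags."""
    wanted = tags if tags else {"acid"}
    return [name for name, info in ALL_TESTS.items() if wanted & info["tags"]]

print(select())             # acid run: the quick fixed subset
print(select({"styles"}))   # an implementer's targeted run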

Your definitions and mine are close.  I am looking at it more from the
motive and purpose side.

> So the deliverable might be
>
> xx. A specification of a test suite testing interoperability areas
> (developed from *1) which requires no user intervention,
> runs in a relatively short time and presents a results summary to the
> user. This class of test is sometimes known as
> acid testing.
>
> Does that get it?
>
That gets it close.  What is not covered is the usage and purpose of the
test, i.e. surveying the largest possible number of end users to make sure
that what we think is true about compatibility really is so.
>
>
Oops on the collection; I was reading through the deliverables expecting it
to be there.

>> OK, the wording of 13 might need another rework.  We don't need any
>> developed test cases disappearing into the mailing list or anywhere else
>> to be lost forever.  Building the complete test system is going to be
>> enough work once.
> Your 13.* Implementer usable testing systems.(Complete Test of Standard)?
> If so, I *think* that is what I'm talking about with *3.
> The specification of a full set of tests for ODF. Do you think it sounds
> like something different? I'm not going to say who writes the test. That
> belongs after the TC IMHO.
>
The way I read *3, you are talking about the quality tests themselves, not
how they will be presented to implementers and users to use.

We need to cover the quality rules for both kinds of tests; both the
implementer and the acid tests could be defined in *3, as two sub-sections,
one for implementer tests and one for acid tests, producing a document that
describes how they will be created, laid out and otherwise handled.  It is
possible for an implementer test and an acid test to overlap.  Hopefully, in
that way, one giant test suite can be developed from all the different
parties and used to test things effectively.
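
One way *3 could tie that together is by standardising the metadata every
contributed test case carries, so cases from different parties can be merged
into one suite and marked as implementer, acid or both.  Purely a sketch;
the field names are guesses on my part:

TEST_CASE = {
    "id": "tables/cell-span-001",
    "odf_clause": "8.1.3",                   # part of ODF the case exercises
    "audience": ["implementer", "acid"],     # a single case may serve both suites
    "input": "cell-span-001.odt",            # document fed to the application under test
    "expected": "cell-span-001-expected.xml",
    "contributor": "example-member",
}

With something like that agreed up front, no contributed case gets lost in
the mailing list, and the overlap between the two suites falls out naturally.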

OK, you might end up adding some sub-points to *3 as well.

Peter Dolding

