[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [List Home]
Subject: Suitability for automation (Was: a few questions to kick-off the discussion)
Dave Pawson writes:
>Not all wanted testing is automated. Manual intervention is often
>necessary in order to fully test an object, be this physical or a
>software application. The only real caveat should perhaps address
>whether objective assessment is the boundary, i.e. subjective
>assessment is in/out of scope.

I should elaborate a little on what I said before. We should scope this to apply to any test regime where the **management** of test cases is amenable to automation, even if some or all of the test cases require a manual step.

The XSLT test suite I worked on is a good example: 99.9% of the test cases can be set up, run, have their results evaluated, and be cleaned up afterward by an automated test harness. A rare few need manual action for one or more phases, but even those cases are managed in the same framework. By proper filtering of the test case metadata, we can produce the agenda for the manual tester to follow.

Test cases in the "inspect" scenario can be set up, run, and cleaned up afterward by automation, but need manual intervention to evaluate their results, which I think gets at your point about "subjective assessment." Test cases in the "manual" scenario need manual intervention end-to-end. The test assertions might make a claim about the results that can only be evaluated manually with current technology, e.g., that a drawn line "connects" point A to point B. Improvements in technology might later allow software to discern the "connects" relationship, but the test assertion could remain unchanged.

As Jacques wrote, this makes assumptions about the architecture of the test environment. I think some assumptions about architecture are justifiable, because writing a set of TAs can be arduous, and that arduous work will only pay off if the TAs enable a significant advance in applying tests.
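To make the idea concrete, here is a minimal sketch of filtering test-case metadata to produce the manual tester's agenda. The field names, scenario labels, and test IDs are hypothetical illustrations, not the actual XSLT suite's schema; only the "inspect" and "manual" scenario names come from the discussion above.

```python
# Hypothetical test-case metadata: each record carries a "scenario"
# field saying how much of the case the harness can automate.
#   "standard" - fully automated: setup, run, evaluation, cleanup
#   "inspect"  - automated except result evaluation (human judges)
#   "manual"   - human intervention end-to-end
cases = [
    {"id": "xslt-001", "scenario": "standard"},
    {"id": "xslt-042", "scenario": "inspect"},
    {"id": "xslt-107", "scenario": "manual"},
]

def manual_agenda(cases):
    """Filter the metadata to the cases needing any manual step."""
    return [c["id"] for c in cases if c["scenario"] in ("inspect", "manual")]

print(manual_agenda(cases))  # prints ['xslt-042', 'xslt-107']
```

The point is that the same metadata that drives the automated harness also drives the human workflow: one filter yields the agenda.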
The advancements could be:
- faster application of tests
- application of a more complete set of tests
- more frequent testing (i.e., after every change event)
or some combination thereof. Test automation supports these goals, too, which is why I tied the TAs to test automation.

.................David Marston