Monday, 4 February 2008

Book Review: Systematic Software Testing by Rick Craig & Stefan Jaskiel

Anytime I approach a book now, I try to get my initial prejudices and preconceptions sorted and out of my head so I can approach the book more clearly. My initial preconceptions of Systematic Software Testing have left it sitting on my shelf for a long time. I've seen Rick lecture, and he does it very well; a little more metric-focused than my general approach, but presumably that has worked for him and his clients in the past.
The title suggests a very formal, heavily IEEE-template and structure-driven test methodology. But I also know that Rick has a military background, and that demands structure, heavy doses of pragmatism, decision making at different levels, setting objectives, and responding to needs on the ground. So I expect that practicality to shine through. I wonder what I'll really find inside...
[amazon.com][amazon.co.uk]


We start the book by learning that the authors intended to write a contextual guide and, upon reading what they had written, discovered they had written a set of guidelines, which they encourage people to build from.
Chapter 1 provides an introduction to ambiguity and the 'methodology' "Systematic Test and Evaluation Process" (STEP), where the tester conducts their test thinking across a number of 'levels' - examples of levels provided include 'program' (unit), 'acceptance' etc.
Test Approach =
  • *Level
    • *Phase
      • Plan
      • Acquire
        • Analyse
        • Design
        • Implement
          • *Task
      • Measure
The process reads: as soon as some input becomes available to work from, start to plan at a high level; work out what you have to do in more detail, creating test designs as quickly as possible to highlight ambiguity and map them to their derivation source; do the testing; and measure how well you did. Obviously that is a very high-level summary, but you can guess that this approach spools out a lot of cross-reference coverage information and documentation.
The authors promote the IEEE standard document templates as a basis for the test plans. Most readers will use the outlines provided in the book rather than buying the expensive 'real' thing. STEP also provides a description of the roles involved in testing.
The Risk Analysis chapter promotes the categorisation of risk into "Software Risk Analysis" and "Planning Risk Analysis". The process listed results in a very structured approach to weighting and evaluating the risk associated with features. I suspect that testers reading it may end up missing some risks: the description concentrates on features, so architectural risks related to the interaction of components, or environment risks, could slip by. The description here focuses more on managing, evaluating and weighting risks than on identifying them.
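The weighting-and-ranking idea can be sketched as a simple likelihood-times-impact score per feature. This is my illustration of the general approach, not the book's worked example; the feature names and scores are invented.

```python
# Hypothetical sketch of feature-based risk weighting (likelihood x impact).
# The features and their 1 (low) .. 5 (high) ratings are invented.
features = {
    "login":         {"likelihood": 3, "impact": 5},
    "report-export": {"likelihood": 2, "impact": 2},
    "payment":       {"likelihood": 4, "impact": 5},
}

def risk_score(feature):
    """Combine likelihood and impact into a single ranking weight."""
    return feature["likelihood"] * feature["impact"]

# Rank features so the riskiest get test attention first.
ranked = sorted(features, key=lambda name: risk_score(features[name]),
                reverse=True)
print(ranked)  # ['payment', 'login', 'report-export']
```

Note the sketch ranks only features, which is exactly the blind spot mentioned above: nothing in the list represents component interactions or the environment.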
The Artech House book page graciously hosts chapter 3, so you can view it for yourself.
The chapter contains some advice I give to testers myself: consider the audience, and highlight, in the plan itself, the areas you feel uncertain about.
The chapter focuses a lot on the 'test plan' as a 'thing' rather than on the process of test planning, so I cannot recommend this book in isolation. Read it in conjunction with a book that describes the test planning process, like Testing Computer Software or Patton's Software Testing. Also read James Bach's Test Plan Building Process and Test Plan Evaluation Model. Since many testers early in their career don't know how to communicate the results of their test planning, this chapter will serve them better than reading some notes on the IEEE template in isolation.
Detailed Test Planning contains a discussion of acceptance testing, and how and when to involve users, which should prove useful to testers early in their career or those approaching acceptance testing for the first time. The chapter also provides an overview of integration testing and system testing, but it feels very high level.
The unit testing discussion could probably shrink and have more effect; the general advice of "Developers should be responsible for creating and documenting unit testing processes" could stand alone as effective advice when the tester does not develop software themselves.
The unit testing section could also do with a little updating in light of all the writing now available on Test Driven Development - admittedly the reader is pointed at Kent Beck's white XP book "Extreme Programming Explained: Embrace Change" (although the title listed in the text of Systematic Software Testing, at least in my copy, misses out the 'Explained' part).
The Analysis and Design chapter starts with a useful discussion of how to turn the documents provided on a development project into a coverage outline, or inventory (to use the book's terminology rather than mine), then goes on to expand it into a test coverage matrix, or Inventory Tracking Matrix (again the book's terminology). This approach can result in a lot of time spent on documentation rather than on testing, but if your organisation views that as important then the discussion here may well help you. I suggest that, where possible, you use a test tool to help you maintain these links and avoid repeating work. High-level overviews of Equivalence Partitioning and other techniques then follow.
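At its core an Inventory Tracking Matrix is just a cross-reference between inventory items and the tests that claim to cover them, which is why a tool helps: the useful query is "what has no test mapped to it?". A minimal sketch, with all item and test names invented for illustration:

```python
# Minimal sketch of an Inventory Tracking Matrix: inventory items (derived
# from the project documents) cross-referenced against tests. All names
# here are invented, not taken from the book.
inventory = [
    "INV-1 login accepts a valid user",
    "INV-2 login rejects a bad password",
    "INV-3 account lockout after three failures",
]

# Which inventory items each test case claims to cover.
coverage = {
    "TC-01": ["INV-1"],
    "TC-02": ["INV-2"],
}

covered = {item for items in coverage.values() for item in items}

# The question the matrix exists to answer: what has no test against it?
uncovered = [entry for entry in inventory if entry.split()[0] not in covered]
print(uncovered)  # ['INV-3 account lockout after three failures']
```

Maintaining these mappings by hand is exactly the repeated work the chapter's approach generates, which is why I suggest tool support above.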
Some of the advice presented in this chapter - "It's a good idea to add the test cases created during ad-hoc testing to your repository" - reads as too absolute. Perhaps some ad-hoc tests should end up in the repository, but you should consider why: what did the test cover that you hadn't covered already? Did the data take a path you had already exercised? Perhaps you should automate it as a data-driven test. Did it find a bug? So I had some problems with this chapter, and I recommend you treat it as a very high-level overview and read a software testing techniques book instead [Copeland][Beizer][Binder].
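To illustrate what I mean by a data-driven test: rather than archiving one interesting ad-hoc case, fold its input and expected result into a table that one script runs through. The function under test here is invented purely for the sketch.

```python
# Sketch of promoting an ad-hoc check into a data-driven test: one loop,
# many (input, expected) rows. discount() is an invented example function.
def discount(total):
    """Apply a 10% discount to orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

cases = [
    (50, 50),     # below the threshold: no discount
    (100, 90.0),  # the boundary case the ad-hoc test stumbled on
    (200, 180.0),
]

for amount, expected in cases:
    assert discount(amount) == expected, (amount, expected)
print("all cases passed")
```

The point is that the one-off ad-hoc case becomes a row in `cases`, so keeping it costs a line of data rather than another archived script.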
The Test Implementation chapter focuses on the environment issues related to testing. This section does provide useful advice, describing the importance of getting the right people involved, along with some useful information on data, but again it provides an overview rather than a detailed analysis. The book then moves on to test automation, so the chapter tries to cover a lot of ground.
The metrics section came in shorter than I expected and provides various defect-based metrics and coverage metrics. I hoped to find a better discussion of metrics here, but again it felt fairly superficial, and unfortunately the metrics used didn't seem to justify all the 'other' work the tester did. Filling in all those forms and documents and writing the test cases didn't seem to contribute to the overall test effectiveness metrics - I expected to see that 'formalism' contributing to the metrics to help justify it. You can find defects without all that formalism, and you can track coverage achieved without all that formalism. So I found myself a little dissatisfied, but the chapter presents the typical metrics reported on many projects, so you can see the formulas and make up your own mind whether they actually measure 'test effectiveness'.
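For readers who haven't met these, two metrics typical of the kind such chapters present, sketched from common industry usage rather than quoted from the book; the numbers are invented:

```python
# Two typical defect/coverage metrics, sketched from general industry
# usage (not quoted from the book); all figures below are invented.
def defect_detection_percentage(found_in_test, found_after_release):
    """Share of all known defects that testing caught before release."""
    return 100.0 * found_in_test / (found_in_test + found_after_release)

def requirements_coverage(requirements_tested, requirements_total):
    """Share of requirements with at least one test executed."""
    return 100.0 * requirements_tested / requirements_total

print(defect_detection_percentage(90, 10))  # 90.0
print(requirements_coverage(45, 60))        # 75.0
```

Notice that neither formula has a term for plans written or matrices maintained, which is the mismatch I describe above: the formalism's cost never appears in the effectiveness it is supposed to buy.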
The short, and yet useful, Test Organisation chapter discusses different ways to approach the organisation of test teams.
Chapter 9 - 'The software tester' discusses interviewing techniques and then, unconvincingly for me, promotes certification.
The Chapter on 'The Test Manager' covers leadership of a test team in a very practical way.
I expected Test Process Improvement to be presented exclusively in terms of CMM, TPI, TMM and ISO 9000, but fortunately the chapter starts with a general model of improvement before moving on to those topics. I hope the reader focuses their improvement effort on the first half of the chapter rather than the second, and uses the second half as a general overview of 'what could we do'.
For me, this book works best when explaining how you can approach the documentation of testing in a very formal, structured testing environment. The actual practice of testing gets a very high-level treatment, and the reader will need to look elsewhere for that information. I would not recommend this as an "if you only read one testing book" book, as I don't feel it gives you enough information to fully explore the contextual situation you will find yourself in. Had I had this book as a junior tester working in a very formal environment, documenting and tracking my testing in the way this book describes, I would have shortcut my learning.
So it isn't a book for everyone. But if you don't work in the type of 'formal' environment described above then you may find this a useful book to see how the 'other half live'.
[amazon.com][amazon.co.uk]
