Monday, 13 October 2008

Book Review: Next Generation Java Testing by Cedric Beust and Hani Suleiman

Subtitled "TestNG and Advanced Concepts" and written by the people behind TestNG, this book looked like a definitive and encyclopedic work on TestNG, and that's what I expected to read when I picked it up. However, the authors decry this view in the preface. The book takes 'testing' as its focus and uses TestNG to illustrate the examples (although it does really start off as "a book about TestNG").

So prior to reading the book my experience of TestNG amounted to the following:


  • read some of the tests people had written using TestNG at work
  • amended some of the tests
  • hacked about with the testng.xml file
  • fixed some tests
  • gone to the website to learn a little more about some of the annotations and the xml file
  • read, and used, the examples on the home page
  • run tests and suites within Eclipse
  • skimmed the documentation

So, as a beginner, I felt like I could already 'use' TestNG, but I didn't really understand some of the concepts 'properly', like data providers - sure, I could write one, but I didn't really 'get' all the nuances. Hence the reason for reading this book.

( amazon.co.uk | amazon.com )



And I had learned the hard way that TestNG does not view the method that you annotate with @Test as a 'test'. TestNG views a test as the thing in the testng.xml file with <test></test> around it.
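As a minimal illustrative sketch (my own, with a hypothetical class name - not an example from the book), the <test> element in testng.xml defines the 'test', and it can wrap any number of annotated classes:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="MySuite">
  <!-- TestNG calls this whole block the 'test', not the @Test methods inside -->
  <test name="SmokeTest">
    <classes>
      <class name="com.example.LoginTests"/>
    </classes>
  </test>
</suite>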

So we start by learning a little about the deficiencies in JUnit that led to the creation of TestNG - to support different styles of testing and more 'functional' testing. Then a few examples of the annotations and the testng.xml file.

Then into unit testing with TestNG and testing exceptions.
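As a quick hedged sketch of what exception testing looks like (my own example, not the book's):

import org.testng.annotations.Test;

public class ExceptionTests {

    // the test passes only if the named exception gets thrown
    @Test(expectedExceptions = NumberFormatException.class)
    public void parsingGarbageThrows() {
        Integer.parseInt("not a number");
    }
}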

I learned that TestNG creates a testng-failed.xml file containing a suite of just the failing tests - which I hadn't noticed when skimming the docs online, but I can quite clearly see it in section 5.10 online now.

Then we jump straight into Factories. Whoa! Sub-optimal organisation for the beginner. I think the early parts of the book help argue the case for 'another unit test framework' and make you ask the question 'can your unit test framework do this?'

I say this because we have a test suite running quite well at work using just <parameters>, some @Before and @After annotations, and a few data providers. I read the Factory example in the text a few times and I confess I still didn't get it. Yep - you can call me stoopid, but I don't class this book as a beginner book.
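After more head-scratching I eventually boiled the Factory idea down to the following sketch - my own code with made-up URLs, so treat it as my best guess at the technique rather than the book's example. A @Factory method returns test instances, and TestNG then runs every @Test method on each instance:

import org.testng.annotations.Factory;
import org.testng.annotations.Test;

// each instance of this class tests one URL
public class WebPageTest {
    private final String url;

    public WebPageTest(String url) { this.url = url; }

    @Test
    public void pageResponds() {
        // a real test would fetch this.url and assert on the response
        System.out.println("testing " + url);
    }
}

// the factory tells TestNG to create several parameterised test instances
class WebPageTestFactory {
    @Factory
    public Object[] create() {
        return new Object[] {
            new WebPageTest("http://example.com/home"),
            new WebPageTest("http://example.com/search"),
        };
    }
}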

So, with Factory taking only 4 pages and me still a bit bemused, we move on to the official section on Data-Driven Testing, and this does seem written more with the beginner in mind - why try and scare me off with a Factory example first?

The data providers in TestNG really have a lot of power.

In some ways TestNG has too much power: you move from simple example to complex example within the same breath, and having just figured out why you might use a data provider, you have to wrap your head around how you might use the test context functionality.
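To make that concrete, here's a hedged sketch of a data provider that also receives the test context - my own illustrative code; TestNG injects the ITestContext when the provider method declares it as a parameter:

import org.testng.ITestContext;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SearchTests {

    // the provider can inspect the context, so the data can vary
    // according to the suite or test currently running
    @DataProvider(name = "searchTerms")
    public Object[][] searchTerms(ITestContext context) {
        System.out.println("providing data for: " + context.getName());
        return new Object[][] {
            { "testng", 10 },
            { "junit", 5 },
        };
    }

    @Test(dataProvider = "searchTerms")
    public void searchReturnsResults(String term, int expectedMinimum) {
        // a real test would run the search and assert on the result count
        System.out.println(term + " -> expect at least " + expectedMinimum);
    }
}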

OK, so again: definitely not structured with the beginner in mind. As a beginner you end up skipping chunks to get what you need, and when you finally try it in your own code it starts to make sense. But the authors aimed this book at the Java programmer - typically a more technically proficient beast than the tester.

I won't try and summarise the book contents in this review as you can read the contents online. Really the book tries to cover lots of different 'typical' Java testing situations and show ways of using TestNG to test your code. Some of them I found really interesting - I particularly liked the conciseness of some of the Enterprise testing examples.

For testers - HtmlUnit and Selenium get cursory mentions and probably won't help you very much. Check out the Selenium Grid examples, which use TestNG, for more.

A really important book. But not quite what I hoped for. I wanted a more tutorial approach to learning TestNG and its capabilities. I think I could find most of the information I want in here, but I'll have to read and re-read this book a few times, spread over a few months to really understand the flexibility of TestNG. And I'll have to look at all the example code and figure out if/how I could use the technique to aid my System testing. So I'll have to stick with the basics at the moment and hope that the more I use it, the more I'll understand it - no easy routes into this tool it seems.

This book will probably work really well for the Java programmer and it has lots of really focused examples for common Java testing approaches.

For the tester wanting to use it for functional testing: you'll get the basics here and understand grouping, but you'll still have to struggle to learn the optimum ways of using the tool. The enormous flexibility of TestNG still feels just out of reach for me at the moment, but the future seems too tempting to stop trying.


Sunday, 28 September 2008

Book Review: Apache JMeter by Emily H. Halili

This book only has 120 or so pages and has the purpose of introducing the reader to JMeter. I haven't found the online documentation for JMeter an easy read - mainly because I could not find a nice, easy-to-print-or-flip-through PDF version. The online documentation serves a reference rather than a hand-holding purpose. Hence the need for this book.
amazon.co.uk | amazon.com


Sadly the first chapter does nothing to help the book. At only 7 pages it doesn't take up much space, but... the chapter covers why to automate testing, how much to do, how to justify the cost - all the standard automation justification stuff. Given that this book does not have the title "Automated Testing with JMeter", I assume that if you pick up this book then you probably already know the stuff in this chapter. In fact the author recommends you skip this chapter if you already have some understanding of test automation - I recommend you skip it regardless.
In Chapter 2 I found JMeter sold to me as a more general testing tool than I expected - I hadn't revisited the JMeter feature list for a while so I did not expect to see such an attractive set of features, and I didn't know that JMeter has an API that I can embed in my Java apps and run tests as normal Java tests. (Yeah, call me ignorant). The text has plenty of nice screenshots which make the tool seem easy to use. And at the end of Chapter 2 I quickly sought to learn more.
As expected we start the meat of the book by installing and running JMeter.
Then in Chapter 4 we jump into learning JMeter's concept of a Test Plan:
  • Threads - the groups of users and actions
  • Samplers - controllers that let you do stuff
  • Logic Controllers - for control flow
  • Listeners - which let you report on stuff
  • Timers - for pausing and adjusting flow
  • Assertions - for validation
  • Configuration Elements - for changing the way samplers work
  • Various pre-processor and post-processor elements
We also see an example of putting a Test Plan together for a functional (rather than a performance) test of a web site. This seemed like an interesting example and I left Chapter 4 with a good basic understanding of how to write an automated script in JMeter.
I found the list of samplers surprisingly large, and I could guess most of their purposes from the names, but I had to go away and do some research to find out what the JUnit Request sampler did.
Rather than a weakness, I found that a strength of this book. It piques your interest very quickly and you then go away and research that point of interest rather than getting bogged down in the whole JMeter documentation.
For the uninitiated, the JUnit Request sampler seems to act as a mechanism for performing load tests using JUnit tests - someone must have found a use for this. It doesn't seem to support the annotation style of JUnit, and your tests have to extend the JUnit TestCase class. Interesting to know... possibly.
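From my reading around, the sampler expects old-style JUnit 3 tests, something like this sketch - my guess at a minimal example, not code from the book:

import junit.framework.TestCase;

// JMeter's JUnit Request sampler instantiates a TestCase subclass
// (via a String-name constructor) and times each testXxx() method as a sample
public class LoginLoadTest extends TestCase {

    public LoginLoadTest(String name) {
        super(name);
    }

    public void testLogin() {
        // a real test would exercise the code under load here
        assertTrue("login should succeed", true);
    }
}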
In Chapter 5 we jump into an intro to load/performance testing of websites, and we learn how to use the HTTP proxy recorder to record scripts, amend them into a test plan, add listeners and timers, and then run it and examine the results. We also get teaser hints at remote testing with JMeter, with targeted links into the JMeter documentation.
With Chapter 6 we go back to functional testing, which I assume the author included to provide an easy example for changing the HTTP headers and user-defined variables. I certainly hadn't expected to learn how to use JMeter for functional testing, but I found it interesting to learn that people use it in this way - although I still don't believe that I would fire up JMeter for functional tests. (You can sample this chapter on the Packt page for the book.)
Chapter 7 provides a very quick overview of some "advanced features" but not "advanced usage". This really means some of the logic controllers, different ways of getting user-defined data into the tests, regular expressions, testing databases, and testing FTP. Again this acts as more of a 'teaser' chapter to get you to go and research more on your own.
And with "JMeter and Beyond" (chapter 8) we get a short summary of JMeter and a list of its Samplers, Listeners, Assertions and Timers. And then the book ends all too quickly. In fact just at the point that I got really interested, the book stopped. So I really want another volume which leads me into the depths of the tool in more detail. Either that or I just have to wait till I need to use the tool in earnest and start reading the documentation.
Although this very short book only provides an introduction to JMeter, I would love all test tools to have a book like this. If I had to use a performance test tool tomorrow I would pick JMeter, purely because of this book - I wouldn't even think of trying OpenSTA or WebLoad, because then I'd have to scramble around for documentation and introductory tutorials.
This book really helps "demystify" automated testing, and particularly the 'serious and technical' JMeter, making it seem easy. Hopefully this kind of book will help testers who consider themselves 'non-techy' to start doing techy things.
And for those of you, like me, that do count as techy - this just makes the first few steps even easier and faster.
As a negative - the book seems a little expensive for a basic introduction, but I can't think of an easier way of starting to use JMeter.

Wednesday, 18 June 2008

Clicking the buttons in QUnit functional testing with JQuery

I avoided using JQuery in my test pack for as long as I could, to try and learn a little about JavaScript the hard way. But I just could not get my button-clicking test working cross-browser. Clever JavaScript ninjas invented libraries like JQuery to help with exactly that type of problem, so...


I wrote the previous posts in this series over on EvilTester:
  1. Test Driven JavaScript using QUnit
  2. Test Driven JavaScript Code Coverage using JSCoverage
  3. Functional testing JavaScript with QUnit - initial steps
In this post I shall use JQuery to 'click' on a button, so that I can 'functionally test' the button-click side-effects. I have a button on my GUI which I want to click, and I want to make sure that the side-effects of calling the function attached to that button display the results I expect on screen:
The following code generates a click event to trigger my button, but it only works in Firefox, which supports the W3C createEvent/dispatchEvent model:
test("process eprime text by clicking on button",function(){
expect(3);
document.getElementById("inputtext").value=
              "Surely and \nIsn't this being" +
      " some bad text or rather Bob's WASn't he's being good";
        
var clickevent=document.createEvent('MouseEvents')
clickevent.initEvent('click', true, true)
document.getElementById("CheckForEPrimeButton").dispatchEvent(clickevent)
 
equals(document.getElementById("wordCount").innerHTML,
       15,"displayed wordCount = 15");
equals(document.getElementById("discouragedWordCount").innerHTML,
       4,"displayed discouragedWordCount = 4");
equals(document.getElementById("possibleViolationCount").innerHTML,
       2,"displayed possibleViolationCount = 2"); 
 
});      
Sadly it fails in Internet Explorer. But when I introduce JQuery:
  
test("process eprime text by clicking on button using JQuery",function(){
expect(3);
document.getElementById("inputtext").value=
         "Surely and \nIsn't this being" +
 " some bad text or rather Bob's WASn't he's being good";
 
$("#CheckForEPrimeButton").click();
         
equals(document.getElementById("wordCount").innerHTML,
       15,"displayed wordCount = 15");
equals(document.getElementById("discouragedWordCount").innerHTML,
       4,"displayed discouragedWordCount = 4");
equals(document.getElementById("possibleViolationCount").innerHTML,
       2,"displayed possibleViolationCount = 2"); 
 
});


Voila, cross-browser functional testing and less code to boot.

Sunday, 8 June 2008

Book Review: Pragmatic Ajax - A Web 2.0 Primer by Gehtland, Galbraith and Almaer

The Ajax world moves really quickly, and has moved on a lot since the publication of this book, so much so that it could really do with a new edition. Fortunately, with the sub title "A Web 2.0 Primer", we should expect an overview, and in some ways it doesn't matter that we don't get the most up to date information.
[amazon.co.uk]
[amazon.com]


As a novice JavaScript programmer I did learn a lot from this book, but after having read it once, I doubt I'll really refer to it again.
The authors start by explaining a little about Ajax architecture, followed by an example of creating a very simplified Google Maps clone. I think I would have preferred that example as an appendix, as it didn't really deliver an Ajax app, but I did learn a little about PNG that I didn't know.
"Chapter 3 - Ajax in Action" and we really start to learn some Ajax as we get to see the XMLHttpRequest in action. A quick overview of cross-browser compatibility opens our eyes to the fact that we should probably find a decent Ajax library and use that.
Chapter 4 provides an 'essentials' guide to JavaScript, which I thought presented the basics of JavaScript very quickly and concisely. I learned a bit more about HTML events and DOM manipulation - although I also learned that I really need to find a good library and use that. The book's age means that it does not cover JQuery.
The next few chapters then cover a few JavaScript frameworks: Dojo Toolkit, JSON-RPC, Prototype. Then a few client-server type frameworks DWR (Java), Ajax.Net, SAJAX. Then a few GUI libraries: Script.aculo.us, Backbase, SmartClient.
Unfortunately for the book, more frameworks exist now than did at the time of publication, and the ones mentioned have advanced, so you can read the coverage in this book as 'libraries do things like this' and then go hunt the web and consult ajaxian.com to find out which new libraries 'everyone' uses.
The chapter on debugging helped but, again, the debuggers mentioned have all advanced and new ones have appeared, e.g. GreaseMonkey techniques and FireBug. I did not know about Venkman, MochiKit, or View Source Chart, so I did learn something.
Sadly the book has no mention of Test Driven Development techniques or JavaScript unit testing e.g. JSUnit or QUnit.
Degradable Ajax then has a chapter to itself - how to make your apps work with or without JavaScript - and a short discussion of whether you should even bother to degrade.
Then an examination of some server-side integration frameworks: JSON-RPC, SAJAX (PHP), Ruby on Rails, DWR (Java), Ajax.Net, and Atlas (.Net).
The book ends with a quick overview of the (then) new Canvas, and SVG.
A pretty quick book to read, and one which you immediately want to supplement with some good web searches - but the book sadly contains few follow-on links. The book's web page discussion area has very little information, but downloading the book's code might prove interesting.
Just in case the above seems overly negative: at the same time as I read Pragmatic Ajax, I also read "Ajax: Your visual blueprint for creating rich internet applications", and I much preferred Pragmatic Ajax.


Friday, 18 April 2008

Book Review: Working Effectively with Legacy Code by Michael C. Feathers

In the foreword Robert Martin tells us that other patterns exist for preventing bad code, and that this book helps us reverse the rot, to "...turn systems that gradually degrade into systems that gradually improve."
Since the provided definition of "Legacy code" describes "code without tests", you can apply the approaches presented at any point in a project where you discover that the code does not have tests. And depending on the level of 'rot' you can pick and choose from the various techniques presented.
[amazon.com][amazon.co.uk]


It goes far beyond other 'unit testing' books, aiming to provide ways of allowing the coder to "confidently make changes to any code base", and the book certainly does a fantastic job of removing objections of the form "well, that would be fine in theory, but not on my code base". A 3-word summary of the presented approach reads "Cover and Modify": cover code with tests, and then modify it.
I have written this book review late: I first read the book about 2 years ago, re-read it a year ago, and have dipped into it as required since then. A book like this needs to have code, it needs to have a technical focus, and this one does. That does not mean it suddenly becomes out of reach of the non-technical - the code gets shown in context using small snippets surrounded by explanation. The book does not present pages and pages of code; the code acts as further explanation, and the supporting box-outs and diagrams all help. I found this book very well presented and sectioned.
The 'algorithm' for code change that Michael Feathers presents early in the book has 5 points:
  1. Identify Change Points
  2. Find Test Points
  3. Break Dependencies
  4. Write Tests
  5. Make Changes and Refactor
The chapters in the book either provide background on the thinking or general approach behind these points or, in the later chapters, specific mechanisms e.g. for 'breaking dependencies'.
The 'seam' chapter - chapter 4 (freely available from InformIT - see link below)- describes one of the fundamental approaches that Michael uses; the identification of places "where you can alter behaviour in your program without editing in that place".
And every 'seam' "has an enabling point, a place where you can make the decision to use one behaviour or another".
The small chapter describing this should seem fairly natural to testers who have experience of thinking through environments: working out what to split out, mock at different levels, or replace with alternatives, or how to inject a monitoring mechanism into an existing app. Seeing the same thought processes applied to code helped me understand mocking and TDD better when I first read this chapter.
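To illustrate with my own minimal sketch of an 'object seam' (not an example taken from the book): the call to sendMessage acts as the seam, and the place where we choose which class to instantiate acts as the enabling point.

// the call to sendMessage() acts as a seam: we can alter its
// behaviour without editing process() at all
public class OrderProcessor {
    public void process(String order) {
        // ... business logic ...
        sendMessage(order);
    }

    protected void sendMessage(String order) {
        // talks to a real messaging system in production
    }
}

// a testing subclass substitutes the behaviour; the enabling point
// sits wherever we decide which of the two classes to construct
class RecordingOrderProcessor extends OrderProcessor {
    String lastMessage;

    @Override
    protected void sendMessage(String order) {
        lastMessage = order; // record instead of sending
    }
}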
I learned a name for an approach to testing that I had adopted before but hadn't identified as a special case - 'characterisation tests': tests which represent the behaviour of the system "as is". The concept has served me well when doing 'exploratory testing' on applications I don't know: first 'learn' the application through 'characterisation testing', then perform specific 'question'-oriented testing.
I generally treat legacy regression tests as 'characterisation tests' rather than as 'real' tests, meaning that they may tell me something about an 'as is' state of the application at some point in time, but they probably don't 'test' the system in terms of asking any 'questions' about it. This provides me with a sense of doubt that I value.
But I digress. I get value from this book each time I read it. So it remains a computing book that I refer back to.
You can get a really good idea of the book's contents online. I won't attempt to summarise the book here because it does go into a lot of valuable detail and you need the book on your shelf. As simple as that really. Buy it.
 

Friday, 11 April 2008

Book Review: JUnit Recipes by J. B. Rainsberger, Scott Stirling

[amazon.com][amazon.co.uk]
' "Stop Debugging. Write a test instead" and here's how'. That seems to sum up the book. Wether you use TDD or not, JUnit Recipes helps you get more out of JUnit - perhaps it will help you stave off a move to TestNG?
Contents include 130+ 'solutions' for common tasks. If you check out the contents page then you can see what the authors cover.


People often want assurance from 'authorities' that they are doing the right thing so the book has discussions about 'how much' testing to do and 'how low' to go. 
So guidelines include:
  • "don't test it if it is too simple to break",
  • "don't test the platform"
  • "try the different techniques out and see which you prefer"
  • "test anything in which you do not already have confidence"
Hence guidelines rather than prescriptive practices. The book heavily encourages the reader to try it out and see how it works for you, and provides some alternative approaches.
You do have to work through some of the examples to understand them (or at least I did) and since it acts as a recipe book, you mainly consult it when you need to.
Unlike many cookbooks, JUnit Recipes seems to move through in a logical order, building on previous recipes, so although it doesn't sell itself as a tutorial book you can work through it in that way.
At least you can probably read up to Chapter 6 sequentially, but thereafter the book worked best for me as a dip in and out when required recipe book (presumably the intended usage).
The book contains many tidbits based on experience, of which I have selected 4 that stood out for me on initial reading:
  • testing floating point values with tolerance levels
  • abstract test cases - http://c2.com/cgi/wiki?AbstractTestCases
  • have JUnit automatically build test suites "return new TestSuite(MyNewTest.class);" (and other ways of automatically building suites)
  • suite or higher level setups rather than just at testCase level
I didn't know JUnit supported so many ways to build test suites. I found the discussions in Chapter 3 and 4 on how to organise your tests very useful.
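As a hedged sketch of two of those tidbits (my own illustrative code, not the book's): JUnit 3.x can build a suite reflectively, and assertEquals takes a delta for floating point comparisons.

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class MoneyTest extends TestCase {

    public void testInterestCalculation() {
        double interest = 1000.0 * 0.05;
        // floating point assertion with a tolerance (delta) of 0.001
        assertEquals(50.0, interest, 0.001);
    }

    // build a suite automatically from all the testXxx() methods in this class
    public static Test suite() {
        return new TestSuite(MoneyTest.class);
    }
}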
Chapter 4 contains a discussion of data-driven tests, a technique common to 'system testing' but one I don't see used very often at the unit level - and looking at the code provided I can see why. I had to go through it several times to get a handle on it and would still want the book in front of me if I tried to do it live. Coverage of the use of test data then increases in Chapter 5.
At Chapter 14 you can start reading sequentially again. So for a first read I would recommend chapters 1-6, then 14 to the end.
If you use JUnit as your test tool and if you do any of the 'things' covered in the contents list of recipes then I recommend getting hold of this book.

Thursday, 27 March 2008

Book Review: The Craft of Software Testing by Brian Marick

When the author, Brian Marick, describes his own book as "somewhat dated...written in a less spritely manner than I'd use today... this is not how I do things today", that doesn't really add up to a particularly motivating sales pitch for the book.
My copy has the appearance of "printed on demand" - which has resulted in a slightly wonky copy, but at least the book remains in print.
I think that anyone reading the book will see how Brian ended up as one of the signatories of the Agile Manifesto. The approach that Brian outlines for Test Requirements mirrors the approach I use when writing test ideas and analysing Agile stories. I believe that much of the text could find ready application for people working with Agile stories - but I think the reader will get bogged down by the examples on first reading.
[amazon.com][amazon.co.uk]


As I read the book I found myself wishing that it had undergone a re-edit in the years since its publication. I found it very heavy on the examples, and the important advice that the example text contains struggled to stick out. The very few 'summary' sections make it a hard book to skim or revisit quickly to remind yourself of the main points. So sadly the good advice remains buried in the text. I found all the asides, hints and tips, and 'thinking aloud' the most useful content in the book, but these sections do not form the 'meat' of the book - the examples form the meat, and they seemed a slog to read.
I had a quick look around Brian's website to see if the important elements from the book had made it there, and I found a few papers that mirror the book, all available at http://testing.com/writings.html.
Also read the "Testing For Programmers" course notes on the web page - visit the bottom of the 'writings' page for the links.
Unfortunately none of these papers deal with the important concepts in the book of "clues" and "testing requirements".
You can download Appendix B, the Generic Test Requirements Catalog, from Brian's web site but unfortunately I could not find the more useful Appendix F available for download. Appendix F summarised the basic elements of the book and some of the thoughts I found useful, but I will still have to go back and give the book a 2nd read to extract full value from the text.
Interestingly the 'tester' came number 2 on the list of targeted audience members, which certainly in 1995 made the book unusual. I suspect that the presence of code in the book will put many testers off. I found that all the code made it unnecessarily 'hard' to pull out the really useful information; I think I would have preferred a 'box out' approach to the discussion text.
The approach of the book seemed well summarized on page 12:
"...I build the test requirements checklist alongside the design, and I constantly check the design against those requirements. When the design is finished, I build at least some of the test specifications. I expect that combining the test requirements and choosing specific inputs will job my imagination into noticing problems with the subsystem's design. After they are fixed, and the test requirements and specification are updated, I think in some detail about how the design would handle each test. That sometimes finds bugs; at the least, it's good preparation for writing the code. Then I implement the subsystem and the tests, trying always to write tests as soon as possible."
This quote highlights the notion of a "test requirement", which represents a 'test idea' for 'how to try and find a fault'. Brian describes "clues" as the derivation source for test requirements: a "clue" describes something that 'needs to be tested', which the tester picked up from reading specifications, studying the application, general communication, etc.
A test requirements catalogue reads as an outline of 'test ideas'.
I think that this relates really well to the approach I adopt when working on Agile stories and identifying the acceptance criteria and tests required. I try and teach this approach to testers and analysts on the project. I don't really want to send them to this book just for that information, since I'd also have to create a reading plan to target them into the 'right' bits of information, but I haven't seen a simple write up of 'test requirements' on Brian's web site to point them at instead.
A few hints and tips that I pulled out of the text when reading the book:
  • gain an awareness of the types of problems that your test approach will not find
  • Review path coverage after the test analysis, rather than driving test analysis from path coverage
  • Create tests by predicting faults - using general rules abstracted from common errors
  • Errors with test requirements: overly general, too small (i.e. missing requirements)
  • Use missing code coverage as a pointer to missing clues
You can extract value from reading the worked examples but you may have to slog through them a little - they do have little 'asides' about the thought processes in use, and the final write up showing the requirements catalogue against the specification and thought process provides useful examples of how to build up 'test ideas' quickly.
I find myself tempted not to recommend the book because it does feel like a slog to read. But I want it on my bookshelf, and I want to read it again and actually slog through the examples in more detail, because I think I missed some nuggets of useful information. I think that all means, despite its faults, I end up recommending it with caveats...
  • If you work on Agile projects and you are prepared to slog through the examples then I think you'll find value
  • For most general testers, or testers on waterfall projects, or testers without an interest in code and working through the examples in a book - I recommend that you steer clear, this book will probably not work for you.
The examples do feel dated and I would not call the text 'spritely', and you probably wouldn't do testing like this, but if you can mine the text then you will find value here.
Related Links
amazon.com
amazon.co.uk
Brian Marick's summary review

Monday, 18 February 2008

Book Review: Software Testing Fundamentals by Marnie L. Hutcheson

Driven to give her customers better information, Marnie Hutcheson has identified techniques for identifying and structuring test scope that allow her to provide estimates, negotiate and agree a prioritised scope, and report progress against that. All of which sounds like the makings of a great book.
[amazon.com][amazon.co.uk]



But I have to say that it ended up as a strange little book. Unfortunately a lot of the book read like padding so I ended up skipping useful information and backtracking, and I did get confused by the book at times.
I think you can safely skip chapter 1 and just read the summary at the end of the chapter.
If you skip Chapter 2 you will miss some useful information so I suggest skipping to the middle of chapter 2 where Marnie discusses Quality as "getting the right balance between timeliness, price, features, reliability, and support to achieve customer satisfaction". While she relates this to the product under test, I think you can relate it to the test process itself and if you read the rest of the chapter in this light it becomes quite interesting.
The section in Chapter 2 relating to "picking the correct quality control tools for your environment" provides encouragement and advice on:
  1. automate your record keeping,
  2. improve your documentation techniques
  3. use pictures to describe systems and processes
  4. choose appropriate methods and metrics that help you and/or the client
Chapter 3 starts slowly but explains some useful rules:
  1. state the methods you will follow, and why
  2. state assumptions
then goes on to examine some methods of organising test teams. But I think you can probably skip the chapter and just read the summary.
The book starts to add value in Chapter 4 where it discusses the "Most Important Tests (MITs) Method".
MITs, as I understood Marnie's explanation of it:
  1. Build a test 'inventory' of all the stuff you know: assumptions, features, requirements, specs, etc.
  2. Expand the inventory into 'tests'.
  3. Prioritise the inventory and related tests
  4. Estimate effort
  5. Cost the effort and negotiate the budget - as this dictates the scope of the inventory you can cover
  6. Define the scope - an ongoing activity
  7. Measure progress
  8. Reflect on what happened to allow you to improve
I've paraphrased the above, as Marnie does not use those exact words; the italicised words represent my summary keywords for the approach.
An exploration then follows of the MITs method in a plan driven, and in an Agile environment. The Agile environment does not match the Agile environments I have worked in so I found it difficult to relate exactly to what I do. Despite the useful thoughts presented here, I would have concerns if any tester in my Agile environment explained what they do in terms of the actual presentation in this book. I would have fewer concerns if they explained it in the 'spirit' of this book, or the generalised approach - perhaps using MITs lite.
The metrics chapter examines: time, cost, bug counts (per attribute: component, severity, found date, etc.), some test coverage metrics, a Defect Detection Percentage, etc. If you get stuck for metrics then you'll find some in here that might work for you.
Chapter 6, 7 and 8 discuss the test inventory in more detail.
  • How to construct one through analysis of requirements, interviews, usage, system structure, data, inspiration.
  • What they can look like - spreadsheets, powerpoint, documents, tables
I found a generally useful approach, and real experience, documented here.
The 2 chapters on risk result in a heavily analysed inventory to identify scope and priority. I think you should view this as a fairly typical presentation of risk and priority. The depth of coverage does highlight the importance that the MITs method places on analysis, agreement of importance, and negotiation of contract, and I think you will gain some value from reading it.
Two chapters cover structural path analysis. They cover one of my favourite techniques of drawing the diagrams to explain my understanding, and briefly mention using them in a dynamic way to build up a model as you 'do' something. Unfortunately, my main takeaway point was the use of path analysis as an estimation tool.
Data analysis - through paths, on boundaries, combinations - receives a quick overview and has some experience embedded within it.
So I come to the end of my reading, and I found this a difficult book to read. I did not find its structure conducive to my understanding. The coverage of some of the topics that I use a lot - path analysis, data analysis - didn't lead me to believe that, at the end of reading, testers would use those techniques effectively.
I do recommend the basic principles of scoping and negotiation, and if you haven't done that type of work before, and can get into the same rhythm as the book, then it can probably help you with those tasks.
The basic notion of inventories and outlines seems perfectly sensible to me, but as a whole the method seemed too heavy.
I have used approaches similar to those listed in the book because I thought they were necessary for the project. But in hindsight I think they were necessary for me, at that time in my development as a tester, on those projects.
I think Marnie knows when to use her methods deeply and when to use a lite version, and how to tailor it. But I don't think her full experiences of using the approach really get communicated to the reader to allow them to do that.
I think that this book aims at the right audience of beginner/intermediate testers. But I don't think the book communicates its underlying principles as well as it could; I think you will need to work the book to dig them out. But if you haven't used some of the techniques I've listed in this review then I think you will gain experience by reading the book and trying them.
[amazon.com][amazon.co.uk]
Related Links

Sunday, 17 February 2008

5 acronyms that software testers should learn from

I count Google video as one of, if not the, best self-training resources currently available to me. So, from Google video, here are 5 acronyms that you can use for your self-education as a tester: AAFTT, BBST, GTAC, SHMOOCON, OWASP.



  1. AAFTT Agile Alliance Functional Testing Tools visioning workshop

  2. BBST Black Box Software Testing Course

  3. GTAC Google Test Automation Conference

  4. SHMOOCON The Shmoo Group's Software Hacking Conference

  5. OWASP Open Web Application Security Project

Monday, 4 February 2008

Book Review: Testing Computer Software by Kaner, Falk, Nguyen

I thought I'd read this again for review purposes. I didn't expect it to surprise me, but it did, massively.
One of the most realistic testing books available, starting almost immediately with the preface discussion "It's not done by the book". The book sets out its target audience as simply "the person doing the testing".
[amazon.com][amazon.co.uk]


"...find and flag problems in a product, in the service of improving its quality. your reports of unreliability in the human-computer system are appropriate and important...You are one of few who will examine the full product in detail before it is shipped."
This 'definition' works better for me than Ron Patton's definition, but Ron's book reads more gently and easily. 'Testing Computer Software' contains a lot of very direct opinions from the authors, presented as authoritative is'ms ("this is X"), which may distance readers who currently adopt a very different mindset - which I think happened to me on first reading. So if it happens to you, don't switch off, don't skim. Analyse your response. Read this book in a better way than I did.
Ron's book only targets beginners, whereas Testing Computer Software works for both beginners and more experienced testers - if the 'more experienced' mind doesn't rebel too quickly.
I don't think I read this book properly the last time I read it. Certainly I wasn't doing explicit exploratory testing at the time and I think I dismissed the text as a little too ad-hoc. But just a few pages in I can now see that the book outlines some lessons that I then had to learn through experience e.g. "Always write down what you do and what happens when you run exploratory tests." Sigh, if only I had read the book properly first time round. 
Chapter 1 starts with an overview of 'exploratory' testing and a possible strategy that an experienced tester might adopt. A 'show' don't 'tell' approach to explaining software testing.
1st cycle of testing
  1. Start with an obvious and simple test
  2. Make some notes about what else needs testing
  3. Check the valid cases and see what happens
  4. Do some testing "on the fly"
  5. Summarize what you know about the program and its problems
2nd cycle of testing
  1. Review responses to problem reports and see what needs doing and what doesn't
  2. Review comments on problems that won't be fixed. They may suggest further tests.
  3. Use your notes from last time, add your new notes to them, and start testing
I found some notes that I made when I read the book first time through lodged inside the cover. At the time I took umbrage at the notion that "the best tester is the one who gets the most bugs fixed." I now read that as "the best tester finds the bugs that matter most". But I still find myself hesitant about using the phrase "the best tester is...", as that suggests an 'always' to me, and I really can't say that that statement would 'always' apply.
Chapter 2 sets out the various ground rule axioms so the reader doesn't have to learn them the hard way e.g. "you can't test a program completely" "you can't test it works correctly" etc.
Chapter 3 seems like a general reference section on test types but even here we find good old fashioned, hard won experience, box outs which challenge your thinking.
Chapter 5 - reporting and analysing bugs works well on repeated reading and everyone involved in testing would benefit from re-reading it occasionally.
Problem tracking (chapter 6) pulls no punches in its description of the 'real' world that I have encountered and you may well encounter on some projects:
  • "Don't expect any programmer or project manager to report any bugs"
  • "Plan to spend days arguing whether reports point to true bugs or just to design errors."
  • ...
Fortunately the chapter contains a lot of advice as well:
  • Each problem report results from an exercise in judgement where you reached the conclusion that a "change is worth considering"
  • Hints on dealing with 'similar' or 'duplicate' reports (and how to tell them apart)
The Test Design chapter (7) speeds through a whole set of useful 'stuff' and again has plenty of experience behind it to learn from.
Most people will not test printers, so chapter 8 presents the opportunity for the reader to deconstruct it and learn some generalised 'lessons' - otherwise the obvious temptation, skipping it, results in learning nothing.
Skipping across to Chapter 12, I see that from this book I learned the very important lesson that the test plan can act as a tool as well as a product. For me that alone was worth the initial time with the book: it clarified a lot of thoughts in my head and helped me approach the particular project I worked on at the time in a different way - incrementally building up my thoughts on the testing, making my concerns and knowledge gaps visible.
I did not find this an easy book to read, even on a second reading. I frequently felt mentally bludgeoned by the authoritative sentence phrasing - which, for a book that embraces exploratory testing and contextual thinking, strikes me as a strange dichotomy.
But don't let this stop you reading the book. This book deserves its best-selling status, and still deserves to sell in vast quantities. The writers have crammed so much practical advice in here that I heartily recommend it.
I can see that my thinking has changed since I last read the book. Which sadly suggests to me that the book wasn't a 'persuasive' argument for this 'experienced' tester at the time when I really needed it to help me. So please, gentle reader, if you consider yourself an 'experienced' tester try and read it with a clear mind.
If you consider yourself a beginner then you will probably get a lot out of the book immediately - Chapter One alone repays the price of admission.
[amazon.com][amazon.co.uk]

Book Review: Systematic Software Testing by Rick Craig & Stefan Jaskiel

Anytime I approach a book now I try to sort out my initial prejudices and preconceptions and get them out of my head, to let me approach the book more clearly. My initial preconceptions of Systematic Software Testing have led to it sitting on my shelf for a long time. I've seen Rick lecture, and he does that very well - a little overly metric-focused compared to my general approach, but presumably that has worked for him and his clients in the past.
The title suggests a very formal, heavily IEEE-template-and-structure-driven test methodology. But I also know that Rick has a military background, and that background demands structure, heavy doses of pragmatism, different levels of decision making, setting objectives, and responding to the needs on the ground. So I expect that practicality to shine through. I wonder what I'll really find inside...
[amazon.com][amazon.co.uk]


We start the book by learning that the authors intended to write a contextual guide, and upon reading what they had written discovered they had written a set of guidelines, which they encourage people to build from.
Chapter 1 provides an introduction to ambiguity and the 'methodology' "Systematic Test and Evaluation Process" (STEP), where the tester conducts their test thinking across a number of 'levels' - examples of levels provided include 'program' (unit), 'acceptance', etc.
Test Approach =
  • *Level
    • *Phase
      • Plan
      • Acquire
        • Analyse
        • Design
        • Implement
      • Measure
    • *Task (within each phase)
The process reads: as soon as some input becomes available to work from, start to plan at a high level; work out what you have to do in more detail, creating test designs as quickly as possible to highlight ambiguity, and map these to their derivation source; do the testing; and measure how well you did. Obviously that's a very high level summary, but you can guess that this approach spools out a lot of cross-reference coverage information and documentation.
The authors promote the IEEE standard document templates as a basis for the test plans. Most readers will use the outlines provided in the book, rather than buying the expensive 'real' thing. STEP also provides a description of the roles involved in testing.
The Risk Analysis chapter promotes the categorisation of "Software Risk Analysis" and "Planning Risk Analysis". The process listed results in a very structured approach to weighting and evaluating the risk associated with features. I suspect that testers reading it may end up missing some risks: the description concentrates on features, so the tester will likely miss architectural risks related to the interaction of components, or environment risks. The description here focuses more on managing, evaluating, and weighting the risks than on identifying them.
The Artech House book page graciously hosts chapter 3, so you can view it for yourself.
Some advice in the chapter matches advice that I give to testers myself: consider the audience; highlight areas of the plan that you have uncertainties about in the plan itself.
The chapter focuses a lot on the 'test plan' as a 'thing' rather than on the process of test planning, so I cannot recommend this book in isolation. Read it in conjunction with a book that describes the test planning process, like Testing Computer Software or Patton's Software Testing. Also read James Bach's Test Plan Building Process and Test Plan Evaluation Model. Since many testers early in their career don't know how to communicate the results of their test planning, this chapter will serve them better than reading some notes on the IEEE template in isolation.
Detailed Test Planning contains a discussion on acceptance testing - how and when to involve users - that should prove useful to testers early in their career or those approaching acceptance testing for the first time. The chapter provides an overview of integration testing and system testing, but it feels very high level.
The unit testing discussion could shrink and have more effect, as I think the general advice of "Developers should be responsible for creating and documenting unit testing processes" could probably stand alone as effective advice for the tester when the tester does not develop themselves.
The unit testing section here could do with a little updating in light of all the writing currently available on Test Driven Development - admittedly the reader gets pointed at Kent Beck's white XP book "Extreme Programming Explained: Embrace Change" (although the title listed in the text of Systematic Software Testing - at least in my copy - misses out the 'Explained' part).
The Analysis and Design chapter starts with a useful discussion of how to turn the documents provided on a development project into a coverage outline, or inventory (to use the book's terminology rather than mine), then goes on to expand it into a test coverage matrix, or Inventory Tracking Matrix (again the book's terminology). This approach can result in a lot of time spent doing the documentation rather than the testing, but if your organisation views that as important then the discussion here may very well help you. I suggest that where possible (if you do this), you use a test tool to help you maintain these links to avoid repeating work. High level overviews of Equivalence Partitioning and other techniques then follow.
Some of the advice presented in this chapter - "It's a good idea to add the test cases created during ad-hoc testing to your repository" - reads as too absolute. Perhaps some ad-hoc tests should end up in the repository, but you should consider why: What did the test cover that you hadn't covered? Did all the data go down the same path you covered before? Perhaps you should automate it in a data-driven test. Did it find a bug? Etc. So I had some problems with this chapter, and I recommend you treat it as a very high level overview and read a software testing techniques book instead [Copeland][Beizer][Binder].
The Test Implementation chapter focuses on the environment issues related to testing, and this section does provide useful advice, describing the importance of getting the right people involved and giving some useful information on the data - but again it provides an overview rather than a detailed analysis. The book then moves on to test automation, so the chapter tries to cover a lot of ground.
The metrics section came in shorter than I expected and provides various defect-based metrics and coverage metrics. I hoped to find a better discussion of metrics here, but again it felt fairly superficial, and unfortunately the metrics used didn't seem to justify all the 'other' work that the tester did. Filling in all those forms and documents and writing the test cases didn't seem to contribute to the overall test effectiveness metrics - I did expect to see that 'formalism' contributing to the metrics to help justify it. You can find defects without all that formalism, and you can track coverage achieved without all that formalism. So I found myself a little dissatisfied with this, but the chapter presents the typical metrics reported on many projects, so you can see the formulas and make up your own mind whether they actually measure 'test effectiveness'.
The short, and yet useful, Test Organisation chapter discusses different ways to approach the organisation of test teams.
Chapter 9 - 'The software tester' discusses interviewing techniques and then, unconvincingly for me, promotes certification.
The Chapter on 'The Test Manager' covers leadership of a test team in a very practical way.
I expected to see Test Process Improvement presented exclusively using CMM, TPI, TMM and ISO 9000, but fortunately the chapter starts with a general model of improvement before moving on to cover those topics. I hope the reader focuses their improvement effort on the first half of the chapter rather than the second half, and instead uses the second half as a general overview of the possibilities of 'what could we do'.
For me, this book works best when explaining how you can approach the documentation of testing in a very formal, structured testing environment. The actual practice of testing gets a very high-level treatment, and the reader will need to look elsewhere for that information. I would not recommend this as an "if you only read one testing book" book, as I don't feel it gives you enough information to fully explore the contextual situation that you will find yourself in. Had I had this book as a junior tester, working in very formal environments and documenting and tracking my testing in the way that this book describes, I would have managed to shortcut my learning.
So it isn't a book for everyone. But if you don't work in the type of 'formal' environment described above then you may find this a useful book to see how the 'other half live'.
[amazon.com][amazon.co.uk]

Monday, 28 January 2008

Book Review: Software Testing by Ron Patton

This review actually covers the 1st edition, and not the current 2nd edition.
I read this a long time ago - made my notes and have subsequently lost them. So I start again.
My basic memory from the last time I read it amounts to: "a good book for beginners". So I'll see what a second reading does for me.
[amazon.com][amazon.co.uk]


The first section of the book gives a basic introduction to software testing based on pragmatism.
While part 1 does have a section on the "Realities of Software Testing", the section "What exactly does a software tester do?" gives out what I consider unwise advice in the form of:
"The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed."
I would not state a definition like this to beginner testers, as I know from experience what kind of behaviours it will drive into overly zealous young testers. Certainly our goals include the finding of bugs, yes, preferably as early as possible, but "making sure they get fixed"? When a junior tester takes a 'making sure they get fixed' rather than a 'bug advocacy' or 'information providing' role, they end up acting as a 'gatekeeper of quality'.
So I hope the beginner tester will not take this definition fully to heart. After making this statement Ron then goes on to temper it slightly with one of his tester attributes stating '(mellowed) perfectionists' (not quite the 'make sure they get fixed' attitude) and then we learn that "Not all bugs you find will be fixed". But a slight tempering does not fully mitigate the repeating of this phrase throughout the book.
I find it entertaining, and yet saddening, that one of my biggest problems with the book stems from the six words "...but making sure they get fixed" - but then testing does attract pedantic monsters (not one of Ron Patton's attributes of a good tester).
And so...moving on... I do hope the beginner testers learn from the 'Realities of Software testing section' as Ron has built this up from experience.
Testing Fundamentals provides useful approaches to testing from, and without, specifications. Including:
  • Perform high level reviews
  • Build your own 'spec' which tells people what you plan to test
  • Use an ambiguity checklist
  • Don't forget to test error messages
The fairly short dynamic black box section packs a lot into its 26 pages. For example, the data boundary discussion reads as more pragmatic and thorough than those normally outlined in beginning-tester books, and it provides a small set of classes of other boundaries to look for: slowest/fastest, shortest/longest, empty/full, etc.
I will have no hesitation in recommending this section to a beginner tester as it provides a broad coverage very quickly and I wish I had had it available to me when I started out.
I found Part III - applying your testing skills - the weakest section. Hopefully testers can generalise from the section and apply some of the lists as heuristics. The configuration chapter goes into more detail than most testers will use, but I hope testers can generalise from it and apply the configuration lessons presented here to more than just the hardware. The usability chapter will hopefully encourage testers to keep their eyes open as they test for items beyond the specification and requirements. I don't think that the overly basic web testing chapter will encourage testers to go off and read something like "How to Break Web Software" [google vid] [amazon.com][amazon.co.uk]
Part 4 consists of two chapters, one on automated testing and the other on beta testing. The automated testing chapter hints at tools that I don't see many testers using (monitoring software), so I gave it positive marks for that. Ron also tempts the reader with a tool called "Koko the smart monkey" - a little unfairly, as the tool doesn't appear downloadable or available anywhere online that I could find. So while the chapter offers possibilities, it provides only a very basic overview of automation. Similarly, I found the beta testing chapter too sparse.
Chapter 16 starts with the important distinction between 'planning your testing' and 'writing your test plan'.
"The ultimate goal of the test planning process is communicating (not recording) the software test team's intent, its expectations, and its understanding of the testing that's to be performed... Of course, the result of the test planning process will be a document of some sort."
This chapter combined with James Bach's Test Plan Building Process and Test Plan Evaluation Model should put a junior tester in a good position to construct a good test plan that communicates their intents and concerns.
The Test Case Chapter (17) provides the splendid advice of
"[test case] level [of detail] is determined by your industry, your company, your project, and your team."
In other words, your context.
"It's unlikely that you'll need to document your test cases down to the greatest level of detail..."
I consider this good advice for testers to take, that you treat the level of detail required for your documentation as a negotiation with the project to meet its needs.
Chapter 21 embraces the unfortunate decision to embed a lot of information and web links in the "Career as a Software Tester" chapter, when an up-to-date online web page with this information would support the reader better - but the book (informIT) and author do not seem to have a major up-to-date web presence.
So... I come to the end of the book and have mixed feelings. I like parts 1, 2, and 5, but felt less keen on parts 3, 4 and 6. Fortunately I consider parts 1, 2 and 5 the most important. I did not see enough in the book that would irreparably damage a beginner tester, so I can recommend Software Testing by Ron Patton to the beginning tester - mainly because of its down-to-earth tone and hints-and-tips approach.
Test the book yourself by reading excerpts on the informIT site.
[amazon.com][amazon.co.uk]

Monday, 21 January 2008

Agile Acceptance Testing

Some notes and links on Agile Acceptance Testing. First the links.
Read the online Acceptance Testing chapter from "Test Driven - Practical TDD and Acceptance TDD for Java Developers" by Lasse Koskela.
Read Brian Marick's article series Agile Testing Directions.
And now the notes:


I haven't yet read "Test Driven - Practical TDD and Acceptance TDD for Java Developers" [amazon.com][amazon.co.uk]. I have, however, read the free online chapters, and I have already sent the URLs of those chapters to people I work with to introduce them to TDD and acceptance testing principles. I will eventually buy the book to study. (I don't want to overload myself by reading it when I haven't yet finished working my way through Agile Java [amazon.com][amazon.co.uk].)
This chapter presents a very readable and detailed guide to the practice of acceptance testing on an Agile project and where and how to do it within the iteration.
Brian Marick wrote an article series called Agile Testing Directions where he refers to 'acceptance tests' as 'checked examples'.
To me, the phrase 'checked example' implies that acceptance tests represent an agreed, 'good enough', 'broad enough' base set of tests that we can reuse. We need further investigation to determine our confidence in their 'checkedness' and 'comprehensiveness', so we supplement these checked examples with additional user testing/exploratory testing/functional automation/unit tests etc. I think Scott Ambler alludes to this in this Dr Dobbs article.
Acceptance tests represent examples from the customer, so we want the customer to understand the test, and we have to write the test in a way that supports us communicating it back to the customer: preferably documented so that the customer can read and understand the test without help and, in their most idealised form, written using a tool that lets the customer amend and create the tests.
But ignore any fancy abstraction approaches until you get a set of tests that the customer can understand. This allows the customer to agree that the tests implement their basic coverage of acceptance examples. Over time you can amend the tests to write them in a more customer-friendly way, as you learn more about the domain and the customer's preferred representation style.
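To illustrate what I mean, here is a minimal sketch - the domain, the class names and the business rule are all invented for illustration, and I assume TestNG on the classpath. The same agreed example appears twice: once in implementation-speak, once phrased closer to the customer's own words.

import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

// An invented domain (orders and discounts) purely for illustration:
// the aim is that the second test reads back to the customer as the
// example they gave us.
public class DiscountAcceptanceTest {

    // Implementation-speak: correct, but hard for a customer to
    // confirm against the example they gave.
    @Test
    public void discountTest() {
        Order order = new Order();
        order.add(5, 10.00);
        assertEquals(order.total(), 45.00, 0.001);
    }

    // The same check phrased in the customer's language.
    @Test
    public void buyingFiveBooksEarnsATenPercentDiscount() {
        Order order = anOrderOf(5, booksAt(10.00));
        assertEquals(order.total(), 45.00, 0.001);
    }

    // --- minimal supporting code so the sketch stands alone ---
    private Order anOrderOf(int quantity, double unitPrice) {
        Order order = new Order();
        order.add(quantity, unitPrice);
        return order;
    }

    private double booksAt(double unitPrice) {
        return unitPrice;
    }

    static class Order {
        private double total;

        void add(int quantity, double unitPrice) {
            double gross = quantity * unitPrice;
            // invented business rule: 10% off when buying 5 or more
            total += quantity >= 5 ? gross * 0.9 : gross;
        }

        double total() {
            return total;
        }
    }
}

Neither version proves anything on its own; the point is that the customer can read the second test and say "yes, that matches my example" - which the first version never lets them do.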
Then comes the fun part. In addition to examples, identify any risks: things that concern you but that you don't yet see the value in formalising into an automated acceptance test. Think through the 'other' questions that you have about the stories, because they can help you build your starting charters for exploring the implementation.

Saturday, 19 January 2008

5 Exploratory Test Documentation Lessons

[image: EuroSTAR notebook page]
While in Stockholm for the EuroSTAR 2007 conference I managed to conduct some testing on a public booth, and I have collated some simple lessons on Exploratory Test Documentation.


I read James Bach's post on Amateur Penetration Testing a few weeks before going off to Stockholm for the EuroSTAR 2007 conference. Once there, I managed to recall some of his techniques while using a few of the free 'test training' booths provided by the Stockholm authorities in their fair city.
Michael Bolton gave a talk about his Tester's Notebook, from which I gleaned a few tips on effective notebook usage.
Lessons from both Michael and James led to the production of this post.
While reviewing my notebook pages covering my time in Stockholm, I found my notes on some booth exploration during which I discovered a vulnerability.
I include those notes here to try and illustrate a few lessons about exploratory test documentation.
Lesson one: Develop better handwriting than I have so you can read your notes at a later date.
[image: log notes]
Note: I made these notes at EuroSTAR, after I conducted the testing. The title "Eurostar" on the page does not mean that I conducted the testing at EuroSTAR itself; it tells me where I sat when I wrote the information down. I have not included the name of the venue hosting the booth, just in case the owner hasn't fixed the problem. I did raise a defect report - I left it in their suggestion box.
Lesson two: Write down what you did.
This scrawl tells me the order in which I tried things:
I tried to get hold of a PDF and use the download dialog, the save dialog, or some other dialog on screen to access the file system. No luck - the PDF links were unresponsive and I could not find a way to access them (so many unresponsive file types - zip, doc, EVERYTHING seemed locked down - so I stopped trying that attack).
I tried a few shortcut keys that I know, but none of them caused any visible effect that I could figure out how to exploit.
I tried the Shift+Alt+PrintScreen key combination that James mentioned in his blog post (which I didn't know about until I read it there), and that created an interesting display, but again nothing that I could figure out how to exploit.
And then.... "E"... well I didn't even finish writing it as a word because a diagram seemed more appropriate.
Lesson three: Use diagrams, and don't worry about formality.
[image: graphic log notes]
This booth had a little icon in the top right which took me to the manufacturer's site - great. I found support forums and manuals there, so I had a quick browse around for any info that could help me. I read a whole bunch of useful hacking info about config files and keyboard shortcuts I could enable, but first I had to get to the file system, and I had not yet figured out how to do that.
But wait a minute... the manufacturer has a .exe download link, and when I click on that I get a file save dialog. As soon as a file browse dialog gets displayed, I can access the file system, and the opportunity to exploit becomes available. At that point I reported the vulnerability.
So much for the self-promotion of a secure booth manufacturer.
Lesson four: Make notes during the session.
Lesson five: If you don't make notes during the session - make them as soon as you can afterwards.
Fortunately I had a very short testing session and could retain it in memory until I managed to write it down.

Friday, 18 January 2008

Resource Hacking for Beginners

I'm going to introduce you to the Win32 testing tool that I use for looking at application resources, and which I used to find Mercury Screen Recorder secrets.


My first tool of choice for viewing the resources in applications is called...Resource Hacker.
Resource Hacker (TM) is a freeware tool for viewing the resources embedded in a Win32 executable.
I find this to be a handy tool for checking what hidden secrets await me if I test the application hard enough:
  • What error messages have I not triggered?
  • What icons and pictures have I not seen?
  • What general strings have I not encountered?
A handy little tool to have on your USB stick.
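If you want a quick, scriptable approximation of the "what strings await me?" check without a GUI tool, a crude scan for runs of printable characters gets you part of the way. Here is a minimal sketch of that idea in Java - the class name and minimum run length are arbitrary choices of mine, and note that Win32 string resources are often stored as UTF-16, which a plain ASCII scan like this will largely miss:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// A rough "strings"-style scanner: prints runs of printable ASCII bytes
// found in a binary so you can skim for error messages and labels you
// have never triggered. It reads raw bytes rather than parsing the
// Win32 resource table, so expect noise alongside the real strings.
public class StringSpotter {
    private static final int MIN_RUN = 6; // ignore short accidental runs

    public static void main(String[] args) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(args[0])); // e.g. app.exe
        StringBuilder run = new StringBuilder();
        for (byte b : bytes) {
            if (b >= 0x20 && b < 0x7F) {   // printable ASCII range
                run.append((char) b);
            } else {
                flush(run);                // non-printable byte ends the run
            }
        }
        flush(run);                        // emit any trailing run
    }

    private static void flush(StringBuilder run) {
        if (run.length() >= MIN_RUN) {
            System.out.println(run);
        }
        run.setLength(0);
    }
}

Resource Hacker itself decodes the string tables, dialogs and icons properly - the sketch just shows the underlying idea of mining a binary for text you haven't seen on screen yet.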
I also have the slightly more dangerous XN Resource Editor which, as its name suggests, allows you to edit the information. I haven't found a good reason to do that when testing, though, so I haven't used it as much.

Wednesday, 16 January 2008

Mercury Quality Center in "It Supports Exploratory Testing" Shock

I can't believe I'm about to promote Mercury Quality Center, particularly in regards to exploratory testing, but here goes...


Mercury appear to have licensed Blueberry Consultants Ltd technology into Mercury Quality Center as part of their Mercury Screen Recorder.
This is great news for any tester who has Mercury Quality Center installed on their machine at work, as the Screen Recorder is an add-in that you can download from the Add-ins page (a submenu off the Help menu).
I had a quick look at the resource information in the MSR Recorder.exe file and saw the mention of BB FlashBack, so I'm not sure which functional suite has been licensed from Blueberry Consultants. BB FlashBack might just be the default name from which all components are built; certainly it looks like BB TestAssistant to me.
Here is the resource information that I found:

<description>BB FlashBack</description>
<assemblyIdentity
    version="1.0.0.0"
    processorArchitecture="X86"
    name="BB FlashBack"
    type="win32"
/>
The icon sets seem to suggest that it is BB TestAssistant.

So how can you use this for exploratory testing?

The default use of the software is to create a defect in Quality Center from the movie.

But if you choose to 'Edit the movie before submiting the defect to quality Center' (yes, I know there is a spelling error there, but the error is present in the dialog itself), you get access to a more familiar BB FlashBack/TestAssistant view.

And from here you can save, export, edit, annotate, etc.

I'm happy to have found this, but I'm kicking myself for only discovering it now. So the main reason for blogging this? You have no time to waste if you have Quality Center in-house - start using the Screen Recorder now.
Obviously this is a more expensive way of getting the BB recording technology onto your computer but, as I say, if you already have TD installed (I found this using version 9 - I'm not sure whether the version 8 recorder is based on this technology or not) then you get access to one of the most commonly promoted exploratory test tools.
I installed the add-in from Quality Center using the Add-ins submenu of the Help menu. On the install at work, I had to go to the 'more add-ins' section.
Now... at least to the point of helping me record testing sessions... Mercury Quality Center can support me in my exploratory testing sessions.