In Rush’s song “Freewill,” Geddy Lee sings, “If you choose not to decide, you still have made a choice.” That’s a profound statement, but if you listen to a lot of Rush, you will hear many profound and provocative statements. The question here, though, is whether that specific lyric is apropos to discussions about using, or not using, the information provided by testing.
A while back, I wrote a thing about physics, potential energy, and software testing. It went largely unnoticed. C’est la vie. Then, someone posted about not seeing enough discussion about “risk” in software testing posts. So, I reposted that thing I wrote. Suddenly, lots of people are reading and commenting on it. Sweet! ‘Geaux’ me!
And then it happened. Someone, *cough* Damian Synadinos *cough*, did the unthinkable: he made me think more deeply about what I had written. Full disclosure: Damian and I are friends and colleagues, so his prodding and challenging are both expected and welcome. Additionally, he had a good point in his question, and he added some interesting context during our backchannel chat. Though I gave a brief response to his question on that LinkedIn thread, with his blessing, I’m going to respond via blog to allow for additional detail.
As part of that thread, Damian asked a question, the gist of which was this: is having gold valuable, or do you have to use the gold for it to be valuable? Like many Rush lyrics, that’s a profound as well as a provocative question! Does having something that others consider valuable make that thing valuable to you? Wow.
With gold, I think the answer is easy: “yes, of course, it’s valuable.” Having gold is valuable because others consider it valuable. They’ll buy it from you or trade you for it if you ever decide to part with it. Even if you aren’t currently using the gold, it still has “external value” because others will engage in a business transaction with you to have some or all of that gold.
Does that analogy hold for information provided by testing? In general, I say “no, it does not hold,” but as always there is context to be considered.
If we have information but choose not to examine it, we can’t realize the value of that information. Let’s say that I have information about the presence of venomous snakes in a field of tall grass. I type up that information and email it to Mike, because I know he likes to walk in that field, with the subject line “INFORMATION ABOUT SNAKES IN THAT FIELD YOU LIKE!” Mike now has information that he can use to determine the riskiness of walking in that grass. If Mike sees the subject of that email, he knows he may have some risk information about that field, but if he doesn’t read the email, he cannot ascertain the risk described by that information. In this case, that information has no value to Mike because Mike can’t use it. Similarly, if software decision-makers know they have some product information that was produced via testing, they have to actually examine that information in order to gain any value from it.
Note that the situation I describe above is different from evaluating testing information and deliberately choosing not to act on it. This latter example is a conscious choice to amalgamate the information supplied by testing with other information. That can be OK. If other business factors outweigh the testing information, then choosing not to act on the testing information may be the prudent course.
In retrospect, I may have gone a bit far afield. My original “potential energy” post started with this idea: if you run automated tests and you don’t look at the results, you don’t gain any value from that automation. I then extended that thought to testing in general. I still believe that if organizations ignore their test results when making decisions about risk, then the results provided no value regarding risk. I will stipulate, however, that this likely happens far more often with respect to automation-based information than from “non-automation-based” information; this could be a result of failure fatigue.
As previously stated, if an organization reviews its test results, regardless of their source, and those results don’t change the narrative regarding risk, that’s OK. Responsible business decisions are made with information from many sources; some sources carry more weight in some situations than others; such is the nature of business. If an organization reviews test results and chooses not to act on them, that doesn’t mean that the information provided was valueless. It means that in the larger context of that specific business decision, other factors eclipsed the information provided via testing.
Did the analogy from the Rush song hold? I’m not sure. Due to context, perhaps this should be left as an exercise for the reader.
Get Paul’s take on “being responsible with automation, testing, and other things” in his blog, Responsible Automation.