Potential Energy: The Physics of Testing and Automation

How to avoid wasted energy and effort

I’m not a physicist. In fact, I struggled with physics in college, mostly because of the math involved. One thing that did stick with me, though, was the concept of potential energy. (To be fair, I think I learned those concepts when I was much younger.) 

In short, potential energy is energy stored in an object by virtue of its position or state; it's available, but its value has not yet been realized. You can find a much more cogent explanation of potential energy in any introductory physics text.

Why am I talking about physics and potential energy? Much like potential energy, testing in and of itself provides no value. None. Nada. Zilch. “So why am I reading this?” you may be thinking. Before you close me out of your browser, read just a bit further…

What I’m saying is that testing, and by extension test automation, only provides information; testing doesn’t fix problems. It tells us what seems to be working and what seems not to be working, and it delivers information that decision-makers can use to make informed decisions. Without action on that information, the information has little if any value. And there we have it, the analogy to potential energy.

What is done with the information provided by testing? Well, that all depends. Responsible leaders digest the information and collaborate with the team to understand the risk related to the current version of the product or application.

What do I mean by risk? What do YOU mean by risk? Cavalier-ness aside, risk can generally be thought of as jeopardy to the business value your product has for your company. Risk is context-specific; an issue in an application may be high risk to one organization, but the same issue in a different application may be low risk to another organization.

Risk comes in many forms, but the ones we tend to see in software testing are:

  • The risk that a client/customer/user encounters a known issue that causes a loss of revenue or an increase in cost to the company
  • The risk of the unknown from testing that we identified but didn’t or couldn’t complete, i.e., the “known unknowns”
  • The risk of the unknown from testing that we didn’t think to do, i.e., the “unknown unknowns”

What does (or should) “product leadership” care about? That team wants (or should want) to know what could go wrong with an application release, how likely it might be, how impactful it might be, and how quickly the team can mitigate an issue. They also want to know what areas of the product have not been sufficiently evaluated and what could happen if there were to be a failure in an under-evaluated area. Basically, what is the risk to the company and the organization?

Testing without acting on appropriate reporting is wasted effort and wasted energy. Testing that provides appropriate but unused information is like potential energy: it’s unrealized value. Testing, and by extension automation, provides no value if no one acts upon the information it provides.

What if your testing isn’t producing the kinds of information needed by the decision-makers? If your testing is not providing information that your decision-makers can act upon, perhaps it’s time to rethink your testing approach. If your test automation is not producing actionable results, you might want to change your automation approach or your automation logging paradigm. Remember, software changes over time, and your testing approach needs to evolve with it to stay relevant, appropriate, and valuable.
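As a small, hypothetical illustration of “actionable results,” compare a bare assertion with one whose failure message carries enough context for a decision-maker to assess risk. The `calculate_total` function and its values are invented for this sketch; the point is the shape of the failure message, not the math.

```python
# Hypothetical checkout-total check, illustrating vague vs. actionable results.
# calculate_total() and its inputs are invented for this example.

def calculate_total(prices, tax_rate):
    """Sum prices and apply a tax rate, rounded to cents."""
    return round(sum(prices) * (1 + tax_rate), 2)

def test_total_vague():
    # Vague: when this fails, the report says only "assertion failed" --
    # no one can judge the business impact from that.
    assert calculate_total([10.00, 5.50], 0.08) == 16.74

def test_total_actionable():
    # Actionable: the failure message names the inputs, the expected
    # value, and the observed value, so the team can decide what the
    # failure means for the release.
    expected = 16.74
    actual = calculate_total([10.00, 5.50], 0.08)
    assert actual == expected, (
        f"Checkout total for items [10.00, 5.50] at 8% tax: "
        f"expected {expected}, got {actual}"
    )
```

Either test catches the same defect; only the second one turns the catch into information someone can act on.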