I practically grew up watching game shows. I watched pretty much all of them: Jeopardy, Joker’s Wild, Press Your Luck, Match Game, High Rollers, Hollywood Squares; you name it, I probably watched it. One of the shows I used to watch with my mom a lot was Wheel of Fortune. One of the spaces on the wheel was “Lose A Turn”; we’d always call that a “bad spin.” The following is a story of a bad spin.
Why am I referring to this story as a bad spin? I gave a talk for the Odyssey Conference entitled You Bet Your Life – Playing the Automation Tool Selection Game; the talk is delivered with a game show theme, so the bad spin is an extension of that content. During the talk, I compared and contrasted two companies that I had helped choose an automation tool. For each company, I evaluated its automation ecosystem, i.e., its strategy, audience, and environment, to help select appropriate tools. Letting the ecosystem guide us made our automation endeavors successful.
Regrettably, I wasn’t always able to apply the ecosystem to tool selection. I once worked in an organization where, before I was hired on, decisions had been made and money had been spent in regard to a specific tool and the approach to using that tool. My marching orders were effectively “implement with those.” I wasn’t so evolved in my ecosystem conceptualization at the time, so I thought, “Sure, seems reasonable; how bad could it be?” Spoiler alert: it was bad, hence the bad spin.
Let’s look at each of the areas of the ecosystem and see where we faltered.
In the automation ecosystem, strategy has two sub-components: the “why” and the “how.” This organization had decided the “how,” but it didn’t have a clear “why.” Choosing how to automate before deciding why to automate is often problematic: it can prevent us from making responsible decisions about automation because we’ve already committed to an approach, and it means we’ve decided to automate before deciding whether or not it’s valuable to do so.
Throughout my time in this organization, I learned that the “why” was “we’re doing ATDD because we do scrum, and ATDD requires automation.” I’ll leave it to the reader to dissect the specific problems with that approach, but in general, process-driven automation is a bad idea because it doesn’t take automation’s value, or lack thereof, into account. Automation must be driven by value, not by doctrine.
Suffice it to say that this organization didn’t have a well-thought-out idea of how automation would help it meet its business goals. This lack made the previously decided “how” suspect.
Like on Family Feud, that’s strike one.
In hindsight, or even in “at-the-time sight,” I think this was the biggest miss of the endeavor. Before my hire, the teams were “trained” on ATDD and then told, “we now do ATDD.” It’s probably important to note that this “training” came immediately after the scrum training, where it was declared, “we now do scrum.” Understanding and buy-in across the audience were uneven; on the team I was hired into, buy-in and morale were very low, to the point of the team being combative about the approach. In reality, no one on my team was more than tentatively bought in. The team members who found value in the approach became discouraged because the other team members “will never adapt”; those other team members called the approach “stupid.”
The audience’s tolerance was not considered, and the approach was not appropriately socialized with them before it was rolled out. The approach and tool were mandated, and they were not well received. The leadership team thought both it and the teams were ready for this shift in culture and technology, but it was mistaken, and it was not prepared to make the hard decisions needed to achieve its goal.
That’s strike two.
We did not have consistent access to the testing environment, which made it hard to develop and run automation in any reliable manner. The application under test was not developed with automation in mind; some components weren’t even developed with testability in mind.
The “centralized QA team” had some environments and tools to use, but there was no appreciable coordination between the scrum teams and that QA team concerning testing and environment access. It’s not that there was animosity; there was apathy regarding the potential synergy and force multiplication between those teams. You may notice that this point straddles the fence between audience and environment.
In other teams, where automation was valued, different tools were developed because those teams didn’t buy into the UI for the “mandated” tool.
Strike three and the other team gets the chance to steal (to belabor the Family Feud analogy).
How Did It End Up?
It did not end on a very positive note. The team I was on never adopted ATDD or in-sprint automation for testing, at least not while I was part of that team. My major contribution to the team and the company was a tool that helped the testers perform one aspect of their job much faster than before; later in my career, I’d come to call this kind of tool an automation assist. I’m very proud of that accomplishment, but I am disappointed that I was not able to spark much change.
As previously indicated, there’s some blur in my descriptions of the specific examples. I struggled with that blur when writing this post. This difficulty made clear to me something I’ve experienced: the three facets of the automation ecosystem are not as discrete as I laid them out, but that’s OK. The intent is not to identify any specific aspect of our endeavor as one ecosystem facet or another; the intent is to be aware of the important aspects of our specific ecosystems so that we can make responsible decisions about automation for testing and avoid a bad spin.