Mobile Automation in Agile Projects

Why automation must be involved when dealing with tight mobile release cycles

After joining Softvision, I had the opportunity to work with one of the biggest e-commerce marketplace players worldwide, which offers activities, goods, travel, and services across hundreds of markets and dozens of countries, with mobile apps ranked among the top 25 most-used apps in the United States.

Their long-term business strategy included items like shorter mobile release cycles (the first step was switching from 5 to 3 weeks, and work is under way to get it down to just 2), better overall application quality with no major production defects, on-time releases, more features with each new version, a multi-brand platform (different brands and companies using the same software solution), and a better-performing app despite its growing feature set. Most of these meant one thing: automation needs to be involved.

One idea was clear to us from the beginning: we needed to involve everyone in the automation effort. Just having a bunch of dedicated automation QAs writing and running some tests is not enough, and it's usually a slow and painful way to get the needed results. Poorly written (or not really automation-friendly) tests from the manual side, plus new features, bug fixes, and unannounced UI changes from the development side, all hurt general stability and coverage progress. So we had to include and work with everyone on the project, starting with QAs (both manual and automation) and continuing with developers, leads, and managers.

Of course, as expected, we still have an automation team. Even though it's not solely responsible for the effort, it remains focused on most of the tools and the overall process, and it also has a coordinating role. The main high-level tasks include:

  1. Choosing the most suitable automation solution (IDE and language) – This was decided by taking into consideration that developers would be involved, so native frameworks with their specific languages were the way to go.
  2. Creating and maintaining a strong automation framework – Having a reliable and scalable framework is really important, especially in agile projects where new versions with features and improvements are released every couple of weeks.
  3. Creating and maintaining automation tools – These are almost as important as the automation framework. It's very useful to have tools that automatically filter, order, and group CI failures, take useful screenshots and gather logs, send emails when new crashes appear, generate test-run and stability dashboards, and compare runs from different branches.
  4. Establishing a formalized process and making sure it's followed – This is the coordinating part, and it's just as important. The process includes tasks, guidelines, and rules for all the involved teams and stakeholders: clean and complete manual tests, automation validation of feature integration branches, minimum automation coverage for new features, code reviews, stability validation, etc.
  5. Writing functional and performance tests – This is still a really important part of the automation team's workload, even though the team is not the only one responsible for it. It's usually done through surges: periods when most of the automation engineers focus exclusively on writing tests, typically triggered when overall stability is very good and no new mobile OS versions or major app UI changes are planned.
  6. Maintaining CI stability – This includes investigating and fixing automation failures, logging bugs and crashes, updating the tests when new OS versions or UI redesigns are delivered, sending automation sign-off emails, etc.
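As an illustration of the failure-triage tooling in item 3, here is a minimal sketch that groups raw CI failures by signature so each unique problem is investigated only once. The result shape and field names are assumptions for the example, not the real tool:

```python
from collections import defaultdict

def group_failures(results):
    """Group raw CI test failures by a (test, error) signature so each
    unique failure is triaged once, however many runs it hit."""
    groups = defaultdict(list)
    for r in results:
        if r["status"] != "failed":
            continue
        # The first line of the error message is usually stable across retries.
        signature = (r["test"], r["error"].splitlines()[0])
        groups[signature].append(r)
    # Most frequent failures first: those block the most tests.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

results = [
    {"test": "test_checkout", "status": "failed",
     "error": "Timeout waiting for Pay button\nstack trace..."},
    {"test": "test_checkout", "status": "failed",
     "error": "Timeout waiting for Pay button\nstack trace..."},
    {"test": "test_search", "status": "passed", "error": ""},
]
for (test, error), hits in group_failures(results):
    print(f"{len(hits)}x {test}: {error}")
```

The same grouping output feeds the stability dashboards and the "new crash" email alerts mentioned above.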

All the automation engineers are responsible for these tasks, some on a rotating schedule (e.g. CI stability, where one engineer is 'on call' for one week at a time) and some by component (e.g. each project component has an assigned automation engineer who helps validate new feature automation).

The development team focuses on two really important things: making sure new features don't break existing tests and helping keep coverage as high as possible.

To achieve this, all new features must, first of all, be validated against complete automation runs. Each feature integration branch gets a complete automation run, and the results are then compared with the latest master run. The developer is responsible for investigating any new failures and crashes, and the branch doesn't get merged until the issues are fixed. The assigned component automation QA offers support if needed and usually signs off that the branch is ready to be merged.
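The branch-versus-master comparison boils down to a diff of two result sets: only failures that are new on the feature branch block the merge. A minimal sketch, assuming a simple test-name-to-status mapping for each run:

```python
def new_failures(master_results, branch_results):
    """Return tests that fail on the feature branch but not on master.
    Pre-existing master failures are tracked separately; only
    regressions introduced by the branch block the merge."""
    master_failed = {t for t, s in master_results.items() if s == "failed"}
    branch_failed = {t for t, s in branch_results.items() if s == "failed"}
    return sorted(branch_failed - master_failed)

master = {"test_login": "passed", "test_cart": "failed", "test_search": "passed"}
branch = {"test_login": "failed", "test_cart": "failed", "test_search": "passed"}
print(new_failures(master, branch))  # → ['test_login']
```

Here `test_cart` already fails on master, so it doesn't count against the branch; only the `test_login` regression needs fixing before the merge.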

Another responsibility is delivering automation tests for each new feature. A minimum coverage percentage helps keep overall coverage high; it can be either fixed (e.g. 20%) or dynamic (a value that takes the overall coverage into consideration and stays as close as possible to it). Again, this is done with support from the automation team, which helps with choosing the tests to automate, general hints, code reviews, etc.
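One way to express the dynamic variant is a rule that tracks the current overall coverage so new features never drag the average down. The 20% floor and the formula below are illustrative assumptions, not the team's actual rule:

```python
def required_coverage(overall_coverage, floor=0.20):
    """Minimum automation coverage a new feature must ship with.
    Track the project's overall coverage, but never require less than
    the fixed floor, and never more than 100%."""
    return min(1.0, max(floor, overall_coverage))

assert required_coverage(0.55) == 0.55  # tracks overall coverage
assert required_coverage(0.10) == 0.20  # floor kicks in
```

With a rule like this, a project sitting at 55% overall coverage asks each new feature for 55%, while an early-stage project still gets the 20% baseline.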

Another role that is important at this point is the project manager. They need to fully understand that both feature integration testing and meeting the minimum coverage percentage take time. They should account for that in planning, especially when committing to a certain release, and treat these activities as being as important as every other part of the development cycle. It's possible to postpone the automation tests for a release or so (more often for features that aren't sure to stay in production for long).

Last but not least, the manual QA team has two main responsibilities: making sure the manual tests are automation-friendly and helping triage automation failures.

All new tests are created with a couple of things in mind: clear and complete requirements, steps, and expected results, and also just the right amount of scenarios. Now, what's the right amount? Usually, in this case, less is better. Long automation tests that validate a large number of scenarios, perhaps starting and restarting the app, are usually prone to failures that have nothing to do with the app itself (e.g. simulator or general environment issues) and are hard to debug. Short, clean, and reliable tests are the way to go.

When the CI failure rate is really high and a release is close, the manual team takes over some of the failures and helps the automation team sign off. The approximate number is discussed a couple of days before the release to give the manual team time to plan their regression.

Automation is usually a long-term investment and should be planned and managed accordingly. It's becoming a critical component of the agile process in all its forms: continuous integration builds, unit tests, automated deployment, and traditional functional and performance tests.