We live in a world where everything is being automated, from simple tasks to very complex ones, across all industries.
“The industrial revolution allowed us, for the first time, to start replacing human labor with machines.” – Vitalik Buterin
Buterin was referring to the First Industrial Revolution, which began around 1760 in Europe and the United States with the invention of the steam engine. During this time, part of human labor was replaced by machines, and with every new discovery and technological step forward since, we have continued this trend.
In the software industry we see the same pattern of replacing manual execution with automation, captured in the concepts of Continuous Integration, Continuous Delivery and Continuous Deployment, and implemented by developing and using the necessary tools: Jenkins or CircleCI, Docker and Kubernetes, Bash scripts, Ansible, Selenium, just to name a few of the many tools out there. And there is a constant push to speed everything up through automation, a constant pressure that results in continuous improvement of the way we deliver software.
Kelsey Hightower, principal engineer at Google Cloud, refers to one of the latest tools in automation: “GitOps: versioned CI/CD on top of declarative infrastructure. Stop scripting and start shipping.” That is what it’s all about in the software industry.
As a software tester, I was involved in a project consisting of several web components (a web-UI dashboard and many APIs) and software for payment terminals. The terminals are the well-known POS devices you find wherever you pay for something. In this case, though, we worked on the next-generation POS system, with a touchscreen instead of buttons and running a custom flavor of Android.
It is a “mobile tablet terminal,” where you can tap (if contactless) or insert your card to buy a Grande Caramel Macchiato from Starbucks. For a software engineer and, more specifically, for a software QA engineer, these are hardware devices that require human interaction: reading the screen, touching the screen, inserting/tapping/swiping a credit or debit card.
The testing strategy was to cover the web components through automation, and a natural next step was to extend that automation to the terminals as well, but the implementation was not straightforward.
Why? What’s wrong with the terminals?
The terminals are Android devices with specific hardware capabilities. They can read card data when a card is swiped, inserted or simply tapped onto the device. On Android, the existing frameworks can be used to test whatever application is installed on the device. The card readers (chip, contactless or magnetic stripe) and even PIN entry can be mocked, so many automated integration tests, targeting specific modules in the app, can be performed. That means a lot can be covered with white-box testing.
How about genuine black-box, end-to-end (E2E) testing of the payment application, using real test cards and the real web components that sit behind the scenes? Well, we can do it manually: send a transaction using the web components; when it appears on the device screen, visually check what is displayed and how; insert the test card, enter the PIN and complete the transaction as the test case tells you to. And that’s it.
After two hours of testing like this, fatigue sets in and you start missing obvious issues with the UI or an incorrect field in the transaction status on one of the web components.
The team needed approximately five days to run the entire test suite in order for a new version to be released to the customers eager to receive it. And, at the end of a release validation, we’re left wondering, “Did I test enough?” “Maybe I missed an issue…” or “Maybe I didn’t cover a scenario because…” This is the nature of manual testing.
And this is what is wrong with these terminals.
There must be a better way. But how?
On Android, UiAutomator (or a similar framework) allows us to test the UI in a black-box manner on real devices. But how about automating the card insert, tap or swipe at a specific moment in time? We need to be able to move the card when and how we want. We also have to enter the PIN.
We need an entity that does all these movements in a coordinated way, when we tell it to do it. So, we go to our good old friend, the Internet, to look for ideas. And …
What we need is a simple robotic arm, remotely controlled by our test framework through a communication channel that works in both directions. The idea of a robotic arm doing things is not new, and luckily the Internet is full of DIY projects built with Arduino boards, or even more capable Raspberry Pi boards, servo motors and other electronic and mechanical parts.
Back to the drawing board …
Through quite a bit of research and many cups of coffee, an idea came to me. I began to remember the “old stuff” I learned years ago, when the C language was running the world. (It still runs the world, but now we don’t see it because we are too busy using the new programming languages that are, more or less, its descendants.)
After some online purchases and a visit to a toy store for the necessary hardware, I developed the following Proof of Concept.
The Proof of Concept
The list of materials needed for the PoC:
- Arduino Uno board
- 5V power source
- Bluetooth communication module (HC-05)
- Colored wires
- 5 x MG996R servo motors
- 3 x SG90 servo motors
- Toy parts
The Arduino board controls each servo motor: direction, speed of rotation and rotation angle. Note that these specific servo models allow a rotation angle of only 180 degrees, which is more than enough for our needs. We also use a polar coordinate system for tapping the screen (e.g. entering the PIN when required) and for simpler operations like card insertion or card tapping (for contactless payments).
This is what the wiring schema looks like for a set of 3 x MG996R servos and 3 x SG90 servo motors:
- The RED and BLACK wires power the board, the Bluetooth module (HC-05) and the servos. We need a sufficiently powerful power source, as the servos draw quite a lot of current.
- The servos are controlled with the wires coming out from the digital pins (see the image below, with a more detailed Arduino board).
- The HC-05 Bluetooth module communicates with the Arduino board through the first two pins of the digital pin group.
The Arduino board must be programmed to control each servo motor, and this programming depends a lot on the wiring. If you are not familiar with Arduino Uno wiring, it is explained next, together with the source code.
Look at the following image of the Arduino board and pay attention to the pins on the right. Pins 2 through 13 can all be used to control the servos. Pins 0 and 1 are used for serial communication between the board and the HC-05 Bluetooth module.
If you are interested in writing code for Arduino boards, you can start from https://www.arduino.cc/en/Guide/ArduinoUno and, step by step, become an expert. There are also plenty of materials available online and an Arduino board is inexpensive, so stepping into this field is fairly achievable for most. Try it and see what amazing things you can accomplish.
The automation framework running tests on the Android device will send instructions to the Arduino board over Bluetooth, via the HC-05 module, at specific moments in time. Such an instruction makes the Arduino board execute a movement of the robotic arm through the servo motors. This way it will insert or extract the card, enter a PIN digit or tap the card for contactless payment, depending on the action requested by the test running on the Android device.
Details regarding the implementation of this solution are to follow in part two of this series.