From QA to Mobile Dev

Diana
Aug 19, 2021
Photo by Debby Hudson on Unsplash

When I was a tester, testing to me was like collecting evidence of any misbehavior in the system under test. As a team member, I was involved in several important delivery processes, such as requirement analysis, defining acceptance criteria, and all kinds of testing activities designed to facilitate the release process and help ensure the quality of the delivery.

Let’s focus on the testing part for now. In particular, UI Testing.

Often, UI testing requires the tester to have a correct understanding of the acceptance criteria. Domain knowledge plays a big role here in generating logical and reasonable testing scenarios. With familiarity with the product and knowledge of software testing, edge cases can sometimes be captured along the testing journey. For example, mimicking an app user quickly pulling-to-refresh on a screen multiple times might crash the app due to a race condition, or switching the navigation tabs rapidly under certain conditions might cause the app to behave strangely.
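To make the pull-to-refresh example concrete, here is a minimal XCUITest sketch of that scenario. It is only an illustration under assumptions: the accessibility identifier "feedTable" and the gesture timing are hypothetical, not taken from any real app.

import XCTest

// A sketch: stress the pull-to-refresh gesture to probe for
// race-condition crashes. "feedTable" is a hypothetical
// accessibility identifier for the screen under test.
final class PullToRefreshStressTests: XCTestCase {
    func testRapidPullToRefreshDoesNotCrash() {
        let app = XCUIApplication()
        app.launch()

        let table = app.tables["feedTable"]
        let top = table.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.2))
        let bottom = table.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.8))

        // Drag down quickly, several times in a row, without waiting
        // for the previous refresh to finish.
        for _ in 0..<5 {
            top.press(forDuration: 0.05, thenDragTo: bottom)
        }

        // If a race condition crashed the app, this assertion fails.
        XCTAssertEqual(app.state, .runningForeground)
    }
}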

Edge cases can be tricky. Most of the time, an edge case pops up only when a tester does not follow the normal steps to explore the app. Sometimes it is because of a special state the app happened to be in at that exact moment, and putting everything together again to reproduce that moment is not easy if the necessary steps were not clearly recorded. And if the tester's mindset was more one of wandering around than of expecting an exception, it is difficult to remember exactly what was going on. When that happens, there can be no clue for tracing back which steps caused the app's weirdness. Especially in ad-hoc testing, the issue might not even be replicable.

If the crash is not replicable, then we will have to let it go.

I am sure it sounds familiar to you as well.

To be honest, every time I heard this, I felt like a police officer letting a criminal go. Not a pleasant feeling, for sure.

All the testing work I conducted required focusing on the testing scenario, analyzing it, carefully setting up the test, and being ready for something interesting to happen. Of course, I could do that.

The problem, I realized, was that without much information about how a particular part of the code worked, it was difficult to set up anything meaningful from a UI-testing point of view. And if the theory my testing scenario was based on was vague, how could I be certain what to expect?

I could guess. But that was not productive.

Luckily, when I was in the Mobile Arch team, the mobile developers showed me how to use the simulator and how to check the debug stack trace in Xcode to trace back the likely cause of a crash. It saved me so much hassle; without the right tool, I would have had to redo the test multiple times to find useful information, or worse, the same crash might never have occurred for me again.

Testing started becoming more interesting to me: using the Xcode debug console, code-execution information can be printed out for me to inspect. It helped me come up with testing ideas that I might not have thought of had I only looked at the acceptance criteria. What makes it even better is that, with the debugging tools, issues that do not necessarily cause an obvious UI defect are also exposed to the tester. A layout issue, for instance, flagged in the view hierarchy debugger by the square with an exclamation mark next to the Xcode toolbar, might be something worth pointing out and getting fixed.
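As a sketch of the kind of console output I mean, a few print statements are enough to expose what the code is doing behind the UI. The TotalsLoader type and its network call are hypothetical, invented only for illustration.

import Foundation

// A hypothetical loader, sketched to show the execution details
// that appear in the Xcode debug console while testing.
final class TotalsLoader {
    func loadTotals(from url: URL) {
        print("loadTotals called at \(Date()) for \(url)")
        URLSession.shared.dataTask(with: url) { data, _, error in
            if let error = error {
                print("Request failed: \(error)")   // shows up in the console
            } else {
                print("Received \(data?.count ?? 0) bytes")
            }
        }.resume()
    }
}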

Not only does the code debugger open up a window for me to dive deep into the code, it also presents the information visually. The design terminology that appears in Figma, such as padding, font properties, or hex color values, is not arbitrary to me anymore, because it is all addressed in the code.

The testing I was doing still went through the same pattern: understand the scenario, analyze it, set up properly. The point is that, with the code debugger's help, I could set up something meaningful for testing purposes, and I was much more confident and certain about what would happen. It was an improvement, but not 100%.

Now I am a mobile developer, and testing in a programmatic way is part of my job.

However, figuring out how to carry the manual-testing mindset over into writing good unit tests took me some time.

BDD, Behavior-Driven Development, is the key.

I need to quote Uncle Bob here:

BDD, a variation of TDD, was created to help us think better about higher level requirements, and drive the development of systems using a language better than unit tests.

Then, in order to validate the data coming from the backend, for example, I could have a unit test like this:

context("GIVEN Totals are available") {    describe("WHEN graph item is loaded") {        it("THEN should contain marketTitle") {}
it("THEN should contain marketValue") {}
}
}

It reads like this: if the backend data for Totals is available, then when the graph item is loaded, marketTitle, marketValue, and a Bool isTotalsVisible are expected as stated.
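The empty THEN closures above are placeholders. Filled in with Quick and Nimble assertions, the spec might look like the sketch below; the Totals and GraphItem types are hypothetical models, defined here only to keep the example self-contained.

import Quick
import Nimble

// Hypothetical model types for illustration.
struct Totals { let marketTitle: String; let marketValue: Double }
struct GraphItem {
    let marketTitle: String?
    let marketValue: Double?
    init(totals: Totals?) {
        marketTitle = totals?.marketTitle
        marketValue = totals?.marketValue
    }
}

final class GraphItemSpec: QuickSpec {
    override func spec() {
        context("GIVEN Totals are available") {
            describe("WHEN graph item is loaded") {
                let item = GraphItem(totals: Totals(marketTitle: "Market", marketValue: 42.0))

                it("THEN should contain marketTitle") {
                    expect(item.marketTitle).to(equal("Market"))
                }
                it("THEN should contain marketValue") {
                    expect(item.marketValue).to(equal(42.0))
                }
            }
        }
    }
}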

It works as if a pair of machine eyes were examining the app's UI, just like a human would examine the app.

The power of the BDD approach is, again quoting Uncle Bob:

if we can formally enumerate all the Givens and Whens, then a tool could determine whether our requirements document has executed every path, and could find those paths that we had missed.

So if we enumerate the Givens and Whens from the above example, it is obvious how to produce another test case like this:

context("GIVEN Totals are not available") {    describe("WHEN graph item is loaded") {        it("THEN should not contain marketTitle") {}
it("THEN should not contain marketValue") {}
}
}

Given-When-Then structures the unit test scenario in a human-readable manner, and it is easy for all stakeholders to understand.

Yet writing a unit test to cover the unpredictable things a human could do to an app is much harder than testing against code logic.

Human behavior is certainly not predictable, but machine/code behavior definitely is. How to structure code so that it is easy to test is a topic for another time. But if we have unit tests covering, for instance, each loading state and each failure state, the odds of a rare edge case slipping through become much smaller.
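As a sketch of what I mean, assume a hypothetical ViewState enum and a function that maps a load result onto it; all the names here are invented for illustration. Enumerating one test per state keeps the rare branches covered.

import Quick
import Nimble

// Hypothetical types: a screen state and the error that can produce it.
enum ViewState: Equatable {
    case loading
    case loaded(marketValue: Double)
    case failed(message: String)
}

struct LoadError: Error { let message: String }

// Maps a (possibly absent) load result onto a view state.
func viewState(for result: Result<Double, LoadError>?) -> ViewState {
    switch result {
    case .none: return .loading
    case .success(let value): return .loaded(marketValue: value)
    case .failure(let error): return .failed(message: error.message)
    }
}

final class ViewStateSpec: QuickSpec {
    override func spec() {
        describe("WHEN the screen renders") {
            context("GIVEN no result yet") {
                it("THEN should be loading") {
                    expect(viewState(for: nil)).to(equal(.loading))
                }
            }
            context("GIVEN a successful result") {
                it("THEN should be loaded") {
                    expect(viewState(for: .success(42.0)))
                        .to(equal(.loaded(marketValue: 42.0)))
                }
            }
            context("GIVEN a failed result") {
                it("THEN should carry the failure message") {
                    expect(viewState(for: .failure(LoadError(message: "offline"))))
                        .to(equal(.failed(message: "offline")))
                }
            }
        }
    }
}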

I am not an advocate of 100% testing coverage.

I believe that refactoring unit tests and adding necessary test cases along the way while building the app is simply essential.

Continuing the same pattern I used before, how a unit test is set up has a significant impact on the result. And the point is that now I can be 100% certain about what is happening during the test, and why.

How many tests need to be written, and when the testing is done, is something I am still discovering. At least, with the right tool, one can always explore wider and deeper.
