
Beekey Cheung

Originally published at blog.professorbeekums.com

The Role Of Manual Testing

I’m a big fan of automated testing. The math just works out. Why spend hours running a series of test cases when a computer can do it in seconds? With benefits like this, my zeal for automated testing can make it seem like I think all testing should be automated. However, this couldn’t be further from the truth.

There is little room in software development for pure rote manual testing, but running rote test cases isn’t the only reason to manually test software. Automated tests are great at telling you if the software works the way you intended. They do a poor job at telling you if the software works the way it should.

Let’s look at Google Calendar as an example. The default view is to show you a week of events. An automated test can easily be written to verify that the default is always a week and fail if someone submits code that results in the default being something else. What an automated test can’t do is tell you if a week is the best default for users.
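
To make the distinction concrete, here is roughly what such a check could look like. The `CalendarApp` class and its `default_view` attribute below are made-up stand-ins for illustration, not Google’s actual code.

```python
# A minimal sketch of the kind of check described above. CalendarApp and its
# default_view attribute are hypothetical stand-ins, not Google's actual code.

class CalendarApp:
    """Toy model of a calendar whose default view we want to pin down."""

    def __init__(self, default_view: str = "week"):
        self.default_view = default_view


def test_default_view_is_week():
    # Fails the build if someone changes the default away from the week view.
    app = CalendarApp()
    assert app.default_view == "week"
```

A test like this can guard the intended behavior forever, but it can never tell you whether “week” was the right choice in the first place.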

User testing and research are usually what lead to the conclusion that a week is the best default view. Software is ultimately built for users, after all. It’s difficult to find great functionality for users without having them actually use it.

Talking to users isn’t without its challenges, however. Some users will overreact. They’ll say “I hate this thing!” while continuing to use the feature. Other users will underreact. They’ll say “This is alright” while never using the feature. Gauging the significance of feedback is difficult without really knowing the person, which would take a prohibitive amount of time.

This is where manual testing shines. Using the product provides context, perspective, and empathy with users that would not exist if we just built something and tried to see how users reacted. If you’re annoyed with an interaction in your software, then that’s a good gauge to compare a user’s feedback with.

I’ve lost count of how many times I assumed a user was overreacting, only to realize how bad an interaction was when I started using the feature myself. For Maleega, I created permissions around editing the summary of email threads. Beta testers overwhelmingly gave me negative feedback about it. But the number of beta testers Maleega has is far from enough for statistical significance, and the use cases in my head told me it was necessary in the long term. It was only when the permissions got in my own way that I realized how wrong I was.

There are also questions of who a good user for a product is. User personas can be helpful with figuring this out, though the easiest parts to fill out are often the least useful in determining who a user is. Age, gender, profession, and location are fairly clear-cut form fields and make for simple segmentation. But the most important part of determining who your users are is their habits.

My original persona for Maleega included freelancers and consultants on the basis that these people heavily used email for their work. Since they get paid for their time, they would also be more willing to pay for a subscription service that actually worked and saved them time.

Some beta testers were really put off by Maleega. Some really loved it. While I had lots of feedback about specific features here and missing features there, I was missing the important factor in determining whether someone loved using Maleega or not. It was only after combining all this feedback with my own usage that I realized the factor was a single habit: organizing email.

People who like organizing email hated that Maleega was missing folders and labels, features that were intentionally left out. People who didn’t like organizing their email loved that Maleega provided them with a little bit of organization with barely any effort. I would never have made this realization if I weren’t a constant user of my own product. I needed the perspective that only comes from using the product every single day.

This usage is the most important role of manual testing. It may not follow a specific test case, but it is testing nonetheless. And its utility comes precisely from not being a set of repetitive rote actions. Test cases are generally structured around making sure functionality works. What we really want from manual testing is a sense of how our users are going to feel. No automation can replace a human in that job.

Top comments (3)

Kate Yanko

"Automated tests are great at telling you if the software works the way you intended. They do a poor job at telling you if the software works the way it should."

As someone who is new to QA testing in general, this was really a helpful distinction.

Marek Czampowski

Does that mean all functional tests can be automated? And then manual testers should still know the software specifications but still be able to provide feedback in terms of usability?

Beekey Cheung

I'm of the opinion that all functional tests can be automated. Some are obviously going to be harder, especially ones dependent on third-party APIs. Mocking these out can be somewhat challenging depending on the workflow. OAuth logins come to mind. APIs with very large payloads are another case.

But I think with some time and thought, any test can be automated, and it is worth putting in the time to do it. I went months without tests around code that interacted with the Gmail API. The return payloads were simply too large to put nicely into a unit test. But it was also extremely tedious testing against it manually. So I just started dumping those payloads into files and built integration tests to load those files. In the past 3 months, I've probably saved myself over a hundred hours of testing and it cost me 10-15 hours of automation work.
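
To make that concrete, here is a rough sketch of what a record-and-replay setup like this could look like in Python. The fixture directory, file names, and `count_messages` helper are all hypothetical stand-ins rather than Maleega's actual code; the only assumption borrowed from the Gmail API is that a thread payload contains a `messages` array.

```python
# Rough sketch of recording real API payloads to files and replaying them in
# integration tests. Paths and helpers are hypothetical, not Maleega's code.
import json
from pathlib import Path

FIXTURE_DIR = Path("tests/fixtures/gmail")


def record_payload(name: str, payload: dict) -> None:
    """Dump a real API response to a fixture file once, during development."""
    FIXTURE_DIR.mkdir(parents=True, exist_ok=True)
    (FIXTURE_DIR / f"{name}.json").write_text(json.dumps(payload, indent=2))


def load_payload(name: str) -> dict:
    """Load a recorded payload so tests never hit the live API."""
    return json.loads((FIXTURE_DIR / f"{name}.json").read_text())


def count_messages(thread_payload: dict) -> int:
    """Hypothetical stand-in for application code that consumes the payload."""
    return len(thread_payload.get("messages", []))


def test_message_count_from_recorded_payload():
    # Replays a previously recorded response instead of calling the live API.
    payload = load_payload("thread_with_attachments")
    assert count_messages(payload) > 0
```

The appeal of this approach is that the huge payloads stay out of the test source files while the tests themselves remain fast and repeatable, since nothing touches the live API.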