
Bertil Muth


Manual testing

At certain points in my career, I worked as a "manual tester": I executed test cases step by step, manually, in the GUI. I compared what happened in the GUI to the expected values in the test documentation.

The customers paid for it, but it wasn't fulfilling work, even when I had written the test documentation myself.

With some effort, most of the tests could have been automated.
Still, I think there is a place for manual testing, e.g., exploratory testing.

What are your thoughts and experiences?

Top comments (2)

Jilles van Gurp

Do CD, limit your manual testing to production, and focus on finding things that can be improved incrementally. If you set up monitoring properly, have users, and do A/B testing or other forms of phased rollout, you'll have plenty of opportunity and feedback to catch and fix any blocking issues. Whatever you do, don't add ceremonial bureaucracy to deployments. Deployments need to be instant, a no-brainer, and totally safe to do at any moment of the day (including Friday). That also makes it easy to roll out any urgent fixes, which should be rare. If that becomes a regular thing, fix your automated tests.
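
As an illustration of the phased-rollout idea, here is a minimal sketch of percentage-based user bucketing; the feature name, user id, and percentage are made up for the example, and real setups usually lean on a feature-flag service instead.

```python
# A minimal sketch of percentage-based phased rollout. The feature name,
# user id, and percentage below are hypothetical.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user into the first `percent` of 100 buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Expose the new code path to 5% of users, watch the monitoring, then raise
# the percentage (or drop it to 0 to "roll back") via config, not a redeploy.
if in_rollout("user-42", "new-checkout", percent=5):
    pass  # serve the new code path
else:
    pass  # serve the old code path
```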

ItsASine (Kayla) • Edited

The worst part of my job is manual testing.

To others in the QA department, the worst part of their job is automated testing.

¯\_(ツ)_/¯


For me, I'd rather pivot my career towards development than ever write another manual test script, but I'm also in a position where I can be like "hey new person, go write this manual test", so I'm not that motivated yet. I just want to code. Or, as I prefer to say, I'm so lazy that I tell the computer to do the testing for me.

The spots where I think manual testing makes the most sense are:

  1. "look and feel" kind of tests
  2. pre-release regression testing

For things like colors and positions, those absolutely could be tested automagically. The technology is totally there; it's just not worth it to do so. The requirement is rarely that a button is #428BCA with a 55px margin-left. The requirement is that it's blue and off to the left with a little whitespace. Also, tests like that are highly likely to end up being hardware-specific. Checking that an element is at (34, 28) may work on my external monitor but not on my built-in Retina display, and never work at all for someone else on the team or on Jenkins in headless Chrome.
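
To make that concrete, here is a sketch of what such brittle checks look like, using Selenium's Python bindings; the URL, element id, and expected values are hypothetical.

```python
# A sketch of the pixel-perfect assertions described above, using Selenium's
# Python bindings. The URL, element id, and expected values are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://app.example.com")  # hypothetical app under test

button = driver.find_element(By.ID, "save-button")  # hypothetical element id

# This encodes the spec ("a little whitespace on the left") as an exact
# value, so any theme or layout tweak fails the test with nothing wrong.
assert button.value_of_css_property("margin-left") == "55px"

# Worse: element.location depends on viewport size, DPI, and browser, so
# this can pass on one monitor and fail in headless Chrome on Jenkins.
assert button.location == {"x": 34, "y": 28}

driver.quit()
```

Both assertions are faithful to the current rendering, not to the actual requirement, which is exactly why they break without anything being wrong.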

For pre-release testing, I don't think anything major should be found (i.e., run your full automation suite frequently, not just before deployments), but some quirks that can only be seen once the whole thing is packaged up and ready to go are easier for a human to find. Your application might have five cancel buttons: four of them go to the main menu, and one goes to a place that made sense in isolation but, at a higher level, clearly wasn't intended, since all the others go to the main menu. The automated tests will happily pass, since you wrote them in the way that made sense at the time, but now you can see with fresh eyes that it's dumb.

Some people super duper enjoy thinking of all the ways a user can do things and poking around the app for hours just seeing what there is to find. I'd rather program. Neither is better and both have their place. I'm awful at caring about and coming up with edge and corner cases; more power to those who can -- please keep doing it so I don't have to.