Adam Nathaniel Davis


Unit Tests Should NOT Be In Your Coding Evaluations

I've remained silent on this topic for far too long. But now I'm about to go off. Buckle up...

In the last few weeks I've had some experiences with unit tests during coding evaluations that have left me exasperated and infuriated. This isn't the first time that I've run into these types of issues. But I'm finally fed up enough to proclaim loudly and proudly that:

Unit tests should have no place in the coding evaluations that you foist upon prospective candidates.


That may sound like heresy to some of you. (HINT: I don't care.) But if it really bothers you that much, there's a good chance that you're part of the problem.



I'm not a heretic

Before I explain exactly why unit tests should play no part in your coding evaluations, I want to make it clear that I'm not taking a stand against unit testing in general. Of course you should be using unit tests in your dev environment. Of course you can vastly improve the quality of your coding efforts through copious unit testing. Of course you want all of your devs to be comfortable with unit testing.

So of course you should be including it in all of your coding evaluations... Right???

NO.

Allow me to explain...



Writing tests wastes time and can be disrespectful to the candidate

During the normal day-to-day operations in your dev environment, it's typically unacceptable for someone to say, "I finished this task and submitted the work - but I didn't write any unit tests because I simply ran out of time." During normal "dev operations", a task isn't actually done until you've accounted for proper testing. But this standard should not apply during coding evaluations.

Most coding evaluations are timed. They give you a set period in which you must fix a bug, build a feature, or alter an existing feature in some way. Often, these time constraints can feel daunting - even to senior devs.

Imagine you have one hour to build some feature in React. Maybe it's not that hard. Maybe you're feeling pretty confident that you can knock this out. But in the middle of building it, you run into some kinda bug. You know... the kind where you sit there for a few minutes and just think, "Wait... What the heck's going on?" Maybe you forgot to hook up a key handler and it takes you some time to realize what you've overlooked. Maybe you just made some stupid typo that's not immediately apparent in your IDE. Regardless, the point is that, even in the simplest of tasks, sometimes you can "burn" 10-15 minutes rectifying something that you screwed up.

Eventually, you fix it. And you go on to build the complete feature just under the one-hour deadline. In fact, you built it well. You're feeling pretty confident about the code that you cranked out.

But you didn't get time to add any tests...

If your code is solid, and you completed the task, and you demonstrated a solid understanding of React, there's no way in hell you should ever be marked down (or eliminated from contention) merely because you didn't also have the time to slap unit tests onto your solution.

Let me be absolutely clear here. The mere act of writing a unit test is generally quite easy. Most of the testing frameworks have very similar syntaxes and they're designed to help you write tests in a way that makes semantic sense. They use verbiage like:



it('Should enqueue the items to the queue', (done) => {...});



and



onDequeueSpy.calledOnce.should.be.true;



So it should feel fairly natural (to any established dev) to write tests in this manner.
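To make that concrete, here's a rough sketch of what a complete test in that style might look like (assuming Mocha, Chai's "should" assertions, and Sinon - the Queue class and its onDequeue option are invented purely for illustration):

const sinon = require('sinon');
require('chai').should();
const Queue = require('./queue'); // hypothetical module under test

describe('Queue', () => {
  it('Should enqueue the items to the queue', () => {
    const queue = new Queue();
    queue.enqueue('first');
    queue.enqueue('second');
    queue.size().should.equal(2);
  });

  it('Should call the onDequeue handler exactly once', () => {
    const onDequeueSpy = sinon.spy();
    const queue = new Queue({ onDequeue: onDequeueSpy });
    queue.enqueue('first');
    queue.dequeue();
    onDequeueSpy.calledOnce.should.be.true;
  });
});

Nothing exotic - which is exactly the point.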

But even though they can be syntactically self-explanatory, it can still take a little... nuance (and nuance equates to: time) to add unit tests that are actually meaningful and properly account for edge cases. The time it takes to implement these should not be a burden in your normal dev cycle. But it's absolutely a burden when you're staring down a timer during a coding evaluation.

A few weeks ago I completed a coding test where they wanted me to add a lot of features to an existing React codebase. And they told me that it should all be done... in 45 minutes. It wasn't just that I had to add new components and get all the event handlers hooked up, but I had to be careful to do it in the exact style that already existed in the rest of the codebase. Furthermore, there were numerous CSS requirements that dictated precisely how the solution should look. So it wasn't enough just to get the logic working. I also had to get everything matching the design spec. And again, I had to do that all in 45 minutes.

But of course, this wasn't all they wanted. The requirements also said that all existing tests should pass and I should write new tests for any additional features that were added. And I was supposed to do all of that in 45 minutes. It was patently ridiculous.

If I've coded up a fully-functioning solution that meets the task's requirements, but I didn't get time to put proper unit tests on my new features, and you still want to eliminate me from contention, then... good. Believe me when I say that I don't wanna work on your crappy dev team anyway.

But these aren't the only problems with unit tests in coding challenges...



Broken tests

So maybe you're not asking me to write tests. Maybe you just have a buncha unit tests in the codebase that need to pass before I can submit my solution? Sounds reasonable, right?

Well...

On numerous occasions, I've opened up a new codebase, in which I'm supposed to write some new solution, and found that the tests don't pass out-of-the-box. Granted, if the "test" is that the codebase contains a bug, and the unit tests are failing because of that bug, and you now want me to fix that bug, then... OK. Fine. I get it.

But I've encountered several scenarios where I was supposed to be building brand new functionality. Yet when I open the codebase and run the tests that only exist to verify the legacy functionality - they fail. So then I spend the first 15 minutes of my precious evaluation time figuring out how to fix the tests on the base functionality, before I've even had a chance to write a single line of my new solution.

If this is the kinda test you wanna give me, then I don't wanna work on your crappy dev team anyway.



Secret requirements

Here's another <sarcasm>delightful</sarcasm> headache I've run into from your embedded unit tests: You're not asking me to write new unit tests, but you've loaded a whole bunch of tests into the codebase that are designed to determine whether the new feature I've built performs according to the spec. So I carefully read through all the instructions, and I crank out a solution that satisfies all of the instructions, and it runs beautifully in a browser, but then I run the tests...

And they fail.

Then I go back and re-read all of the instructions, even more carefully than I did the first time. And - lo and behold - I've followed the instructions to a tee. But the unit tests still FAIL.

How do I remedy this? Well, I've gotta open up the unit test files and start working backward from the failures, trying to figure out why my instructions-compliant solution still fails your unit tests. That's when I realize that the unit tests contain secret requirements.

For example, I ran into this scenario just yesterday. The React task had many features that should only display conditionally. Like, when the API returns no results, you should display a "No results found" <div>. But that div should not display if the API did in fact return results. And I coded it up to comply with that requirement. But the test still failed.

Why did it fail? Because the test was looking, hyper-specifically, for the "No results" <div> to be NULL. I coded it to use display: none. The original requirement merely stated that the <div> should not be displayed. It never stated that the resulting <div> must in fact be NULL. So to get the test to pass, I had to go back into my solution (the one that perfectly complied with the written instructions), and change the logic so it would NULL-out the <div>.
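To illustrate the difference, the two approaches look roughly like this in React (my own sketch, not the actual challenge code):

// What I originally wrote: the <div> is always rendered, but hidden via CSS
// when there are results. Visually, it satisfies "should not be displayed."
const NoResultsHidden = ({ results }) => (
  <div style={{ display: results.length ? 'none' : 'block' }}>
    No results found
  </div>
);

// What the hidden test actually demanded: render nothing at all when there
// are results, so any query for the element comes back null.
const NoResultsNulled = ({ results }) =>
  results.length ? null : <div>No results found</div>;

Both behave identically for the user. Only one of them passes a test that asserts the element is NULL.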

I had to do the same for several other elements that had similar logic. Because those elements also had their own unit tests - that were all expecting an explicit value of NULL.

If this had been made clear to me in the instructions, then I would've coded it that way from the beginning. But it was never stated as such in the instructions. So I had to waste valuable time in the coding test going back and refactoring my solution. I had to do this because the unit tests contained de facto "secret requirements".

If you can't be bothered to ensure that the unit tests in your coding challenge don't contain secret requirements, then I absolutely have no desire to work on your crappy dev team anyway.



Illogical unit tests

Maybe you're not asking me to crank out new unit tests, and maybe you're not hiding "secret requirements" in your unit tests, and maybe all of the tests tied to the legacy code work just fine out-of-the-box. So that shouldn't be any problem for me to comply with, right??

Umm...

Yesterday I was coding an asynchronous queue in Node and I ran into a series of unit tests that displayed intermittent failures. These failures happened because the unit tests checked the queue's state on extremely tight timing intervals. Sometimes, all the tests would pass. Sometimes, one or two would fail. It was basically... random.
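Here's a contrived sketch of the kind of timing-sensitive test I mean (not their actual test file - AsyncQueue and slowTask are stand-ins I made up):

it('Should drain the queue', (done) => {
  const queue = new AsyncQueue();
  queue.enqueue(slowTask);
  queue.enqueue(slowTask);
  queue.start();

  // The assertion races against the real async work. If the tasks take even
  // a few milliseconds longer than the hard-coded delay, this fails - so the
  // test passes or fails more-or-less at random.
  setTimeout(() => {
    queue.size().should.equal(0);
    done();
  }, 10);
});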

Of course, one easy way to "fix" this issue is to give these calls a little more breathing room. But you can't update the test files. If you try to do so, your git push is blocked.

I played around with this for a long time, trying to get it to consistently work without altering the test files. To no avail.

During the exact same exercise, I ran into a series of tests that would fail if you'd programmed the queue-pause feature in a certain way. But they'd pass if you wrote it in an entirely different way. And of course... there was no mention of the required (but hidden) way that they wanted you to program the pause feature. So you had to figure it out through trial and error after the tests failed.

Of course, it doesn't matter that these unit tests were janky AF. All that matters to the evaluator is that the (illogical) tests didn't pass.

In the end, I suppose it's a good thing that they use these illogical tests, because guess what? I sure-as-hell don't want to work on their crappy dev team anyway. But I'm still annoyed as hell because I wasted hours of my time yesterday doing their challenge, after I'd already had a great live interview with them and after I'd already coded a working solution, because they can't be bothered to write logical unit tests.

Top comments (22)

Davide de Paolis • Edited

I don't wanna work on your crappy dev team anyway.

I feel your pain and share the same sentiment.
Unfortunately, it seems that the number of teams I don't want to work with keeps growing as I get older.
(And to be honest, I would often really like to review the codebase before accepting a job.)
Always a pleasure reading your posts!

Adam Nathaniel Davis

LOL. Good points! And thank you.

JoelBonetR 🥇 • Edited

Right

This post is right about the struggle of live coding during the hiring process at many companies.

Wrong

On the other hand I honestly think you got the issue completely wrong.

Just removing the unit tests from the hiring process is a patch that:
1- Does not fix the issue.
2- Makes it worse.

Suggestion:

What if we do this instead?

[From the PoV of the company]

Just send me links to one or two of your public repos, let me review them, and then let's discuss them on a "short" call just to ensure you are the one who really coded them and that you understand the basics and the details of what you did there.

Fallback:

I understand that some people decide to code only at work, so their repos will be mostly (or entirely) private. No issues with that; completely respectable.

If you happen to lack any public repos, I can send you something to do (high value, short in time, taking into account the seniority required for the position) and you report back whenever you finish it (with a maximum of... 10 days? -negotiable- just so we don't get stuck in the hiring process for too long).

At the same time you'll get a public repo to improve and show to other companies, so it's not a complete waste of time even if you don't get hired.

Of course in both of those situations I'll expect tests, at least unit tests.

The why

If I cannot verify your know-how around unit tests, and you struggle that much with them, then I (as Tech Lead/Team Lead) or some other team member will have to dedicate extra time to you (teaching, code review, comparing your tests with the acceptance criteria to ensure you did OK, the follow-up, and so on and so forth, with extra care).

Depending on the project this is bad, OK-ish, or completely OK, but for the sake of good organisation we should know beforehand. I think this is also understandable to everybody.


I'm also tired of coding things I don't want to, in time slots where I could be doing other stuff, just to comply with a technical interview, and I believe this approach would solve the issues people have with technical interviews at the root.

Does that seem reasonable?

Adam Nathaniel Davis

First, let me be clear before I address any other issues in your comment: Thank you. You get it. And while I've pretty much grown numb to some of the snarky comments that can occasionally accumulate on my articles from "dev bros", I do understand that it's perfectly acceptable for reasonable people to disagree with some of my points. (Or, in the most extreme examples, to believe that I'm just outright wrong in the entire post.) And you know what? That's fine! I sincerely appreciate this thoughtful response.

Second, I'll freely admit that I occasionally make bombastic statements (hey, it's my personal blog, after all) and that sometimes there's room for... nuance in the things I'm calling for. Heck, even in this article, I'm stating in the headline that you should NOT be using unit tests in your evaluations. But then, in the middle of my own article, I state this:

Granted, if the "test" is that the codebase contains a bug, and the unit tests are failing because of that bug, and you now want me to fix that bug, then... OK. Fine. I get it.

So yes, even I know that unit tests shouldn't be absolutely 100% banished from evaluations. But what I'm really getting at is that, IMHO, the way that tests are used in the vast majority of evaluations is... gross. In fact, I think that they're often so gross that many companies would be better served to remove them altogether, rather than to keep deploying them in the way that they're currently doing.

I'm not gonna try to go through your response point-by-point and supply any kinda "rebuttals". But I really wanna highlight THIS:

Suggestion: What if we do this instead? Just send me links to one or two of your public repos, let me review them, and then let's discuss them on a "short" call just to ensure you are the one who really coded them and that you understand the basics of what you did there.

Yes. Sooooo much... YES!

Part of my frustration is that, at this point in my career, I'm a fairly "established" dev. Does that mean I know it all, or that I've mastered every technology and library at my disposal?? Of course not. But I've been doing this a long time. And I have extensive evidence, in the public realm, of my skills. So it can be insanely exasperating when someone can't be bothered to look at the mountains of evidence that I've put out there - but they just wanna shunt me through their "standard" evaluation and then blindly disqualify me if I struggle to make something pass their built-in tests - even when those tests are poorly written or fail to properly reflect their own specs for the task.

I love your suggestion. Because A) it acknowledges the piles of code that I've already put out there for public consumption, and B) it allows me an opportunity to, as you've pointed out, talk through and explain anything that I did-or-did-not-do in those already-published repos.

I've literally told potential employers before: "Look... I have a mountain of code already out there on GitHub. Why don't we just go through that first? If you fear that I didn't actually write the code, that's OK. I get it. We can do a live screenshare session where I'll walk you through any of the coding decisions that I've made. And I'll even make some minor, incremental changes to the live code, right in front of your eyes, to demonstrate that I understand the code and that I do, in fact, 'own' it."

Unfortunately, few hiring managers wanna be bothered with anything like that.

Finally, I'll add this response to your comment:

I'm not patently against the idea that you want me to show/prove that I can write unit tests and that I know how to do it. IMHO, the best "test" to determine someone's ability to write unit tests is to give them an already finished block of code, and then say to them, "Please write some unit tests to validate this code."

What I'm against is the idea that you're gonna give me some kinda timed test, where I'm building new features or fixing legacy features - and then, once I've done everything according to your communicated instructions, you want me to turn around - within the original "tight" timeframe - and do a whole bunch of "test writing" on top of it.

In my experience, this just fosters a lot of "false negatives".

JoelBonetR 🥇 • Edited

Yup to all.

Not to mention that the hassle of the technical interviews in the process is one of the main reasons for most people not to apply for open positions till they are (literal translation of a Spanish idiom) more burned than a hippie's motorcycle. 😂😅

Adam Nathaniel Davis • Edited

is one of the main reasons for most people not to apply for open positions

YES!!!

This was actually one of the central points of my first article on this site (dev.to/bytebodger/your-coding-test...). Companies believe that they want to attract "top devs". But guess what? Most of those "top devs" are... already employed. And if they already have a decent gig in-hand (even if they're considering new opportunities), they probably won't be bothered to jump through all your hoops. (Meaning that you're left with a lotta applicants who are... less than "top" devs.)

I've rarely been in this position in my life. Thanks to the mass layoffs at top tech companies (in my case, I was at Amazon), I've recently had a lot more time on my hands and I've been doing something that I rarely do: complying with many companies' coding-challenge requests.

But when I'm in my "normal" mode? Well, let's just say that I've told many companies before, "Thanks, but no thanks" when they reach out to me and we have some solid initial conversations - but then they want me to complete a big chunk of "challenges" or "demo apps".

David Wickes

Believe me when I say that I don't wanna work on your crappy dev team anyway.

Stranger on the internet, on this one point we are aligned: I don't want you working on my dev team too.

Adam Nathaniel Davis • Edited

Of course you don't.

Ashley Sheridan

I see your points, and both agree and disagree.

Writing good unit tests is actually quite a specific developer skill to have. I've seen plenty of developers who could crank out code that did the job, but couldn't write tests for that same code.

I've also had coding segments for interviews (take-home tests) that included writing tests. Now, I was also actually failed on one of the secret requirements, because I was using a slightly older version of a language parser and was missing a specific feature they wanted (it still irks me, as I'd explicitly mentioned that in a readme, along with what I'd have done differently in a more recent version).

However, if the candidate is only given the instruction to write tests for their own code, and is given a reasonable timeframe, then I think the request is reasonable.

Adam Nathaniel Davis

I don't totally disagree with you, although you highlighted my main concern when you shared your own story of failing one of these evaluations. And that's really my whole point.

It's not that I would tell anyone that tests are not important. It's just that, when someone gives me a task that they SWEAR is "easy" - but it has a buncha built-in tests attached to it - I almost always run into snags trying to satisfy their pre-written tests.

I would propose the following: If tests are very important to your dev methodology (and I'm not claiming that they shouldn't be), then give the candidate a separate task that just asks them to write quality tests against some pre-written code. But if you want me to write up some new to-do app, and you believe this should only be a quick task to prove my coding skills, then do not pile on a buncha pre-written tests designed to compare my new code to what you think should be in the solution. That approach is extremely problematic.

Ashley Sheridan

I disagree. Writing tests isn't just about writing tests, it's also about writing code that can be tested. I don't want a candidate that writes code that's very difficult to test, so making that a part of the original task is actually quite important.

My negative story isn't my whole experience with tests in interviews. One live interview I had contained a unit test that I had to write code for. That one was fairly simple, and I had the code for it written before the interviewers had finished leaving the room.

Whenever I've given interviews, I tend not to throw tests at the candidates. While there is a take-home portion sometimes, it doesn't include tests (because I only use that as one way to get an understanding of a candidate's abilities), and the in-person part is mostly open-ended questions (although some of that is about code).

Kelly Stannard

This is such an alien sentiment to me. 1) The only time I have ever seen tests as part of a coding exercise was back when I was running the coding exercise at Casper, where I used visible tests run during the exercise for real-time acceptance. 2) When I am the interviewee and TDD the coding exercise, the interviewer acts like I grew another head and am speaking in Borg-ese.

Pandita

Take my unicorn and my follow.

Love your rants!

Adam Nathaniel Davis

LOL - thanks!

xtofl

(Disclaimer: employed for 20 years - not recently submitted to unduly tough coding interviews.)

Great post. Heresy. I'd probably want you in my crappy team, since you show you're able to

  • understand the assignment
  • implement it to spec
  • analyze and fix a broken test in an existing code base
  • understand that this is part of our world
  • reflect on this and write a meaningful post about it

I would be more interested in witnessing how a candidate is setting up their 'inner development loop': how can we build in speed and correctness from the start. I know how simple this is in Python, and I have experienced how hard it is to add that to an existing code base. So when I see one struggling to proceed, manually stepping through their code, refusing the help of known-good tools and practices, I have to raise my brow.

That's why I would greatly appreciate a candidate to start with a unit test. Not to blindly evaluate if they made it in time, but to assess their desire and capability to integrate tools to deliver a quality product.

Mikko Rantalainen

I think unit tests should be used in evaluations, but the unit tests should already be written by the time the candidate arrives.

That would test both whether the applicant is able to write the implementation and whether they correctly handle the edge cases covered by the unit tests.

Of course, that will require the tester to write high-quality unit tests for the task. Preferably something that passes mutation testing, too.

If you want to test the ability to write tests, that should be a task where you're given a specification in English and you write the test without writing the implementation. That is, it should be tested fully separately from writing the implementation.

And the way to test the unit test is mutation testing. If the test catches ALL mutants, it's good.
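For a rough idea of what that looks like in practice (a made-up sketch, not from the original comment): a mutation tool makes a small change to the code under test, and the test suite only counts as good if it fails against that mutant.

// Original implementation:
const isEligible = (age) => age >= 18;

// A mutant the tool might generate (">=" flipped to ">"):
const isEligibleMutant = (age) => age > 18;

// A test that "kills" this mutant because it checks the boundary value:
// it passes against the original but fails against the mutant.
it('Should treat exactly 18 as eligible', () => {
  isEligible(18).should.be.true;
});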

Fabricio Bertani

If they required you to write new features for their codebase as a "code challenge", I'm sorry to tell you that they would never hire you; they want you to do free work for them that they cannot, or do not want to, do themselves.

Adam Nathaniel Davis • Edited

I totally get where you're coming from. But in this case I must say that the app I was given was still, clearly, a "demo app". But yeah, I totally get your point.

Also, I should point out that I actually got a job offer from them. And honestly? This surprised me. Because normally I find that the most onerous coding challenges never result in a job offer - no matter how well I complete the challenge.

Raffaele Briganti • Edited

This is the first article that I'm reading in your series. I must say, well written, some solid arguments. The thing I liked the most is the pragmatic touch you gave to the whole story.
Everything can be reduced to misalignment: between tests and requirements; between documentation and code; between recruiters and development teams. And the story goes on.
What I also don't get is "solve this in 45 minutes". Who decided that 45 mins is the right amount of time? It's like when you see something like "if the API call fails, try to call it 3 times". Who decided 3 is the right number? I always think it's because 3 is the perfect number and it's very close to pi (3.14). Unless someone is hijacking a plane and you are the only React developer able to build a widget to save the passengers, I think that if you complete your stuff in 50 mins it should be OK as well, right?

Last but not least: had I a team, I would gladly work with you.

Adam Nathaniel Davis

Yes, this is a great point. The time restrictions are almost always arbitrary. And they almost always start from the assumption that you won't run into any hurdles.

To be fair, I do understand that, if I give someone a coding challenge that most devs finish in an hour, and the current candidate does it correctly, but... he finishes it in 12 hours, then maybe that's a bad sign. In other words, it makes some sense to communicate some type of time limit to the candidates. Because, at a certain point, they've probably blown the assessment, even if the code they're writing is correct. But the 45-minute metric is silly and arbitrary.

To complete the challenge, I had to create 3 new files (containing 45 LoC). I also had to add/update ~100 LoC strewn across 10 existing files. And that was all supposed to be done in... 45 minutes.

I do think that part of the inspiration for these arbitrary time limits is to misdirect potential candidates. Companies don't wanna admit that they're foisting a massive coding challenge on you. They swear that this "little" challenge will only take you 45 minutes! And what's unreasonable about that? But that's like me showing my dentist a mouth full of cavities and then insisting that this should only take 45 minutes to fix.
