Adam Nathaniel Davis

Codility === Sadness

My very first article on this site (https://dev.to/bytebodger/your-coding-tests-are-probably-eliminating-some-of-your-best-candidates-de7) was about the sheer ridiculousness of many "coding tests". So this rant is nothing new from me. But I just had to take a few minutes to tell you how much I despise Codility. If you're a hiring manager and you think that Codility is giving you a better measurement of who is a "real" programmer... you need to re-evaluate your premises.



Some background information

I've been coding now for a quarter century. I'd never claim - not even for a second - that I've figured it all out. Nor am I an expert in "all things coding". But let's just say that I have extreme confidence in my ability to crank out vast quantities of high-quality code in brief periods of time.

So maybe you'd think that I have no problem with coding tests? Maybe you'd think that I churn through those tests with speed and agility? But if you think that... you'd be wrong.

When I'm confronted with a request to take a coding test - of nearly any kind - it's always a crapshoot. Sometimes... I ace the tests. Other times... I fail miserably.

Now, you might assume this is because, despite my decades of experience, I'm still not much of a "real" coder. But I see the issue much differently. Nearly every time I've "failed" a coding test, I've come away from the experience thinking that the test was, in fact, a piss-poor measurement of my true skills. If you've been in this career field for even a short amount of time, I suspect that you may feel the same.



The setup

First I'll acknowledge that, as I've highlighted in many previous articles, I rarely even bother to take coding tests. I've got a few dozen NPM packages that I've written and maintain. I have a boffo presence on GitHub. I have numerous live sites, with openly-accessible code, that I maintain on the web. I have a great CV site (that, in its own right, is a full-fledged React app). I've written more than 100 articles on this site solely on the subject of software engineering. At the risk of sounding even more arrogant than I already am, the simple fact is that I have a really solid internet presence that showcases my coding skills.

Furthermore, I usually have a really good job in-hand. So when some recruiter calls me and says that they have a great opportunity for me to consider, and "all" I need to do is complete this little coding test to be considered, I normally tell them, "Umm... no thanks."

But every once in a while, I find myself "between gigs". And when I do, I find it harder to tell someone "no" when they want me to jump through a few coding hoops for them. So that's how I found myself, once again, confronted with the Codility coding test.



The test

The test had a time limit of 2 hours and 10 minutes to complete 3 challenges. That seemed like an odd allotment of time, but whatever. I'm certainly not complaining about the time because I've seen far too many of these tests where you're given 15 minutes (or less!) to crank out what can sometimes be a tricky coding solution. So 2 hours and 10 minutes seemed more than reasonable.

My first task required me to write an SQL query. The task itself was extremely easy. But I was severely annoyed to see how long it took for each iteration of my query to run in their janky online portal.

Maybe that sounds unfair to you. After all, you can hardly expect online tools to run as fast as native clients. And to be fair, the first time I remember taking a Codility test - many years ago - their portal didn't seem like it was any slower than any other online coding environment that I'd encountered.

But this is 2023. Now I can flip over to something like CodeSandbox and watch my solutions compiling-and-running so got-dang fast - right in the browser - that I occasionally find it to be distracting. Yet Codility seems to be using the exact same slowwwwwww platform they were using a decade ago. That's friggin ridiculous, because those types of differences affect how people perform on their tests.

Of course, you don't have to do all of your work directly in the Codility window. But I haven't had a need to run SQL on my local system for quite a while. So it's not as though I could simply open up my local query editor and spin up a comparable query.

Regardless, I finished the SQL exercise in about 8 minutes. Granted, this was about 6 minutes more than it would've taken me if I hadn't been working inside their cheesy toolset. But I still had plenty of time to do the 2 remaining tasks and I wasn't too stressed.

I went on to the second task: a JavaScript coding challenge where I had to write a function (blah, blah, blah) that accepted (blah, blah, blah) and returned (blah, blah, blah). The wording of the instructions wasn't ideal, but neither was it horrible, and I pretty much knew straight away what I had to code. I had the task finished in about 20 minutes, leaving me nearly 1.75 hours to finish the assessment.

Then I went on to the last task...



Shamefully obtuse wording

Here are the instructions from the last task:

There are M children, ordered from 0 to M-1, involved in a game.  
The N-th child is assigned a letter: L[N].  At the 
start the 0th child gives a note, consisting of one 
letter L[0], to the B[0]-th child.  When the N-th 
child gets the note, they add their letter L[N] 
to the note and pass it to B[N].  The game is over when 
the 0th child gets the note.  Find the final note.

Write a function:

function getNote(L, B);

that accepts String L and an array of integers B, both of 
equal length, and returns a string with the final note
given to the 0th child.

Examples:

1. Given L = "cdeo" and B = [3, 2, 0, 1], the function 
returns "code".  The 0th child gives a note "c" to the 
3rd child.  Next, the 3rd child gives the note "co" to 
the 1st child.  The 1st child gives the note "cod" to 
the 2nd child.  After adding the letter 'e' to it, 
the 2nd child gives it to the 0th child.  The final 
note, given to the 0th child, is "code".

2. Given L = "cdeenetpi" and B = [5, 2, 0, 1, 6, 4, 8, 3, 7], 
the function returns "centipede".

3. Given L = "bytdag" and B = [4, 3, 0, 1, 2, 5], the
function returns "bat".  NOTE: not all letters 
from L must be used.

M is an integer from [1... 1,000];
String L consists only of lowercase letters (a-z);
B is all integers within range [0... M-1];
L and B are both of length M.

First of all, the opening paragraph is an embarrassing, unnecessarily-obtuse word jumble. And to be frank, Codility does this all the dang time. I'm pretty sure that their tech writers are all frustrated, failed physics majors. I believe that they think they're being incredibly precise by using all this A[0]-th person, K-th, N-1 type of terminology. But what they're really doing is just creating a confusing mess.

I've been coding professionally now for 25 years and I've never had anyone give me specs for any task that were worded anything like this crap. This isn't testing your ability to write good code. Nor is it testing your ability to understand complex requirements. It's testing your ability to decipher what they believe is their clever riddle.

For me at least, I actually struggled to figure out from these instructions how I was to determine the "B[N]-th" child. I read and re-read the instructions, and the examples, but it just didn't make any sense to me. (The fact that we're even talking about a "B[N]-th child" should tell you that we're dealing with absolutely crappy specs.) Yes, I could look at the examples to see what I should get from those arrays, but it still wasn't clear from the instructions exactly how I should come up with that magical "B[N]-th child".



The (inevitable) objections

After cranking out more than 100 articles on this site, I've learned that there will always be a certain population of tech curmudgeons who will snarkily dismiss almost anything I write. I could write that "Bugs are bad" and someone in the comments would chime in dismissively with something like "If you were a good developer, you wouldn't mind bugs." So here are some of the objections I imagine will bubble up from my commentary above:

I didn't have any problem understanding the instructions.

Good for you. You get a cookie. My point isn't that the instructions are utterly incomprehensible to everyone. I'm sure that some people will read those instructions and immediately know what's being asked of them. My point is that they are worded in a way that's unnecessarily obtuse.

I've worked through numerous Codility challenges before. They're almost always written in this exact same style. Sometimes, I read the instructions and I immediately "get it". It just... clicks in my head. And I can swiftly set about coding the solution.

The problem is that my ability to grok the instructions feels almost... random. Sometimes they're immediately clear to me. And other times I find myself reading (and re-reading) the instructions and thinking, "Umm... What???" If your instructions aren't quickly understandable for a vast majority of coders, then your instructions suck.

You're just butt-hurt because you failed the test.

Umm... no. I got 100% on the evaluation. I figured it out. But figuring out exactly what I was supposed to code was unnecessarily complex.

Sometimes requirements are complex, and coders must be able to handle complex requirements.

In a quarter-century of writing code I've cranked out some amazingly complex solutions. And sometimes, the requirements were, indeed, complex. Confusing, even. But there's a huge difference between complexity and clarity. Just because something's complex doesn't mean that it can't be described in a way that's clear. If the requirements you hand me are extremely difficult to understand, that's not a "me problem". That's a "you problem".

Of course, in real life, if I'm handed requirements that I don't fully understand, I will absolutely go back to the person/group who formulated those requirements and ask (nay, demand) clarity. I'm not going to start cranking out a coding solution until I'm certain that I understand what's being asked of me. But when you're doing an automated online coding evaluation, you have no such ability to get clarity on those requirements.

Complex solutions may require complex instructions.

Again, just because the solution is complex doesn't mean that the instructions must be confusing. If you can't clearly describe the problem you're trying to solve, then you should rework your instructions until they are clear.

Besides, the task above is not a "complex" task. It only feels complex because the instructions were written in a confusing manner. The solution consisted of... SEVEN lines of code. And no, I'm not talking about seven lines that involve regular expressions and nested reducers and arcane JavaScript constructs. I'm talking about seven lines of dead-simple code. For such a simple "ask", there's no reason why the instructions need to be anywhere near that complex.
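
Just to illustrate - and this is a rough reconstruction from memory, not my verbatim submission - the whole thing boils down to something like this:

function getNote(L, B) {
  // The 0th child starts the note with their own letter...
  let note = L[0];
  // ...and hands it to the B[0]-th child.
  let child = B[0];
  // Each child appends their letter and passes the note along,
  // until the note comes back around to the 0th child.
  while (child !== 0) {
    note += L[child];
    child = B[child];
  }
  return note;
}

Run that against their own examples - getNote('cdeo', [3, 2, 0, 1]) returns 'code', and getNote('bytdag', [4, 3, 0, 1, 2, 5]) returns 'bat' - and you can see there's nothing remotely "complex" about the logic.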

Codility has to provide challenges that have minimal ambiguity for a mass audience - hence the overly-precise mathematical language that they use.

I'm sorry, but this is just plain bunk. I do understand that they want to eliminate ambiguity in their instructions. But you can do that without resorting to phrases like the A[0]-th person. Removing ambiguity by increasing confusion defeats the whole purpose of evaluating someone's coding skills.

There's also another aspect we should consider here if we're trying to eliminate ambiguity: How many coders are not native English speakers?? If a US-born native English speaker like myself has problems parsing through those instructions, how do you think it feels if your native language is Hindi or Portuguese or Mandarin or any language other than English?

If you have any decent experience in dev shops, you know that non-native English speakers are extremely common. And you also know that many of those non-native English speakers are some of the best coders. So why would you peg your assessment of their skills to their ability to parse English instructions written in obtuse mathematical lingo?


Honestly? Codility isn't even close to the worst provider of online coding assessments. And if you've read any of my previous rants about coding tests, you know that I find nearly all of them to be a joke. But that doesn't mean that Codility isn't committing some serious effery here.

When you're giving someone a cursory coding test - the kind of test that simply establishes that, yeah, this person is a "real" coder - you shouldn't be relying on any verbiage that muddies up that central question. Codility could do much better. But I really don't think they care to.

Top comments (22)

Silvia España Gil

I was presented with one of these for Swift. And it was INCREDIBLY outdated - to the level that my real question was whether that's the actual stack of the company... which, surprisingly, it was not. So there I was, answering UIKit questions when the stack was 100% SwiftUI.

Also, I must add that the compiler does not get some things: if you want to create a method to call many times, it gives an error, but if you duplicate the method code, everything gets a green check.

Adam Nathaniel Davis

These are great points. Yes, some of their tools are horribly outdated. And yes, I've also found scenarios where their system would FAIL something that actually worked perfectly.

Hung Luong

I am glad that a seasoned developer like you finds the same problems with Codility as I did. I am self-taught with very little math background, and the only time I dealt with N and k was in high school - in my native tongue, of course. Reading these ciphers almost made me quit right there, before I took the time to break them down into human language.

Also, hard agree with your point on online coding tests. I am still very sore about that one time I got graded 54% on the thing I was doing daily, live, on production systems.

Andrew Harpin

That last question screams of being written by an academic who thinks they are being clear by writing that way, but who unfortunately has never worked in a real company or learned how good developers actually think.

Nor have they had to actually write useful requirements.

They've resorted to bodgy pseudo-code, but written in the manner they teach... Badly!

Ski

How would you rewrite it better? Honest question.

Adam Nathaniel Davis

If you look through the rest of the comments, I answered the same question for someone else.

Raí B. Toffoletto

I'm glad to see I'm not the only one that finds their wording absolute garbage... It sometimes takes me 10 minutes just to get a notion of what is being asked.

However, for me the worst part of these tests is that sometimes you need to start guessing at the unit-test implementation! Once at Codility I almost failed a simple React test that asked me to implement a simple Intersection Observer that WORKED in real browsers but somehow was failing their unit test... 30 minutes spent debugging their code, because it didn't reflect a real browser.

CoderDoofus

I agree. It's psychotic to combine (a) a vague problem statement with (b) hidden unit tests for which you can't ever see the pass/fail results.

Adam Nathaniel Davis

YES. Sooooo much THIS. I had the exact same experience at Hackerrank as well. Code that performs wonderfully on my local NodeJS server. But, bafflingly, fails in their test environment.

Gabriel Leme

This has just the right amount of salt in it! hahaha

Jokes apart, I completely agree with you. It feels like the wording is complicated just because.
And, as a non-native English speaker, I can affirm that it's way more difficult to understand.

Also, with ChatGPT being able to solve questions like these in mere seconds (although not always correctly), I feel like we have to change the way we evaluate coding skills.

CoderDoofus

As a native English speaker with a PhD in computer science, I assure you that the problem isn't with the reader. The writing (at least for some problem statements) is just plain terrible.

CoderDoofus

I created an account just to join this discussion.

For the first time, I just had an interview coding test via Codility. Coding up my solutions (C++20) was pretty easy.

But for my final, highest-weighted Task, the problem statement write-up was atrocious. Their terminology was inconsistent and unidiomatic. Guessing at what they meant left me with only enough time to sketch out a solution as pseudocode, which I'm hoping the hiring manager will find acceptable.

Seriously, the only justification for that final Task problem statement would be as a prelude to a behavioral interview. E.g., "Tell us about a time when you had to deal with ambiguous requirements and a constrained schedule and the inability to get clarification while being evaluated for your performance."

Arnold Ardianto

You keep mentioning that we can "remove ambiguity" or "provide clarity without being too complex" or do it "without using K-th person", etc.

If you were the problem writer, how would you rewrite the problem you mentioned above using your principles? Give us an example.

Adam Nathaniel Davis

It's funny you mention that. Directly after I wrote the article, I thought, "I wonder if I should've provided a literal example of a better way to word it?" I even took the time to word a putative improvement, but I didn't go back in and add it to the article cuz I didn't want it to become a super-long read. But since you've asked, here's my first take at it:

A group of one-to-many people is standing in a line, playing a 
game.  Each person is assigned a letter.  Each person is also 
assigned a position in the line to which they must pass their 
letter. At the beginning, the 1st person passes their letter, 
which becomes the message, to the position in line to which 
they must pass.  The person receiving the message then appends 
their assigned letter to the message, and passes it to the 
position in line to which they must pass.  The game ends when 
the first person receives the message.  Find the final message. 

Write a function:

function solution(assignedLetters, mustPassTo);

that given a string `assignedLetters` and an array of integers 
`mustPassTo` (both of equal length), returns a string denoting 
the final message received by the 1st person.

Is that a perfect reimagining of the instructions? Maybe not. It could probably still be clearer. But the point is that it can be wayyyy clearer than that ridiculous word-jumble that they provided.
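
And just to demonstrate how directly clear wording maps onto code, here's a rough sketch of a solution written against my reworded version (the same simple loop as before - the comments quote the instructions):

function solution(assignedLetters, mustPassTo) {
  // "At the beginning, the 1st person passes their letter, which becomes the message..."
  let message = assignedLetters[0];
  // "...to the position in line to which they must pass."
  let position = mustPassTo[0];
  // "The game ends when the first person receives the message."
  while (position !== 0) {
    // "The person receiving the message then appends their assigned letter to the message..."
    message += assignedLetters[position];
    // "...and passes it to the position in line to which they must pass."
    position = mustPassTo[position];
  }
  return message;
}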

Some people might look at that description and think, "But this gives away too much in the description about how the solution should be coded." To which I'd say, I don't care. If the whole "challenge" in your coding test is to see if I can simply comprehend the opaque language that you're using to explain the task, then it's a crappy excuse for a coding challenge.

When you're working as a coder, you need to bring a ton of "higher level" thinking to the job if you ever expect to be good at it. But if that "higher level" thinking is required merely to understand what's being asked of me, then that's not a problem with the coder. It's a problem with the way that the organization defines/communicates specs.

I don't care how crazy-overly-complicated your environment is. If the people who need you to submit coding solutions can't even explain, in clear and common language, exactly what it is that they want you to do, then that represents a severe problem in that organization. You don't solve that problem by throwing obtuse explanations at a coding team and simply assuming that they'll be able to decipher your jargon. In fact, I'd argue that if you can't explain the task in layman's terms, you may not have a good understanding yourself of exactly what you're asking someone to do.

Also, I'll reiterate that the clarity of these instructions is actually far more important in an automated test than it is in a typical working environment. In a typical working environment, if I'm even slightly confused by the request, I can always go back to the PM / stakeholder / client / etc. and ask them for greater clarity. But you have no way to do that when you're taking an automated online test. For this reason, it's just downright ridiculous if you're asking someone to complete a coding test with these kinds of hard-to-decipher instructions.

Ski

I wouldn't consider it a "word jumble", as IMHO it's perfectly clear and there's really not a single complicated word used - albeit it is indeed dense, and thus it takes effort to parse. Despite being much shorter, it still carries more information. For instance, it tells you how many people there are (M) and how they are numbered (0..M-1). Your explanation doesn't make it explicit that the arrays assignedLetters and mustPassTo are correlated through an index that represents a person. When you're writing a description for a problem to be solved in any choice of language (including C, and possibly even MATLAB), it might be necessary to disambiguate concepts that seemingly "everyone knows implicitly".

Adam Nathaniel Davis

The common theme here, that I run into all the time, is that if you consider it to be clear, then you can't imagine how anyone else could not also see it as clear. It's like when there's some overly-complex chunk of code. But you wrote it, or, at a minimum, you've already had to work with it, so in your mind, it's clear. And that's great. But just because it's clear to you doesn't mean that the code is written well.

Ski

I can agree that it could be written better or differently. But this seems like a case of bike-shedding. Engineers are a diverse bunch and write in different styles. It seems strange to me to pick on something just because you don't like their style. And the author went as far as trying to insult the writers for their chosen style. Suggesting a different style and suggesting improvements is always great, I think - but it can be done without insulting the original work. Taken out of context, OK, the first paragraph is hard to understand. Yet they have provided very good examples, at which point it becomes easy to understand. Later, once you have the idea in your head, you can go back to the first paragraph and validate your idea against it - and then the fact that it's short and dense may actually even be helpful.

Adam Nathaniel Davis

OK. It's great.

Dan

This process has been validating. This is one of those tools that starts with an alignment of good intentions but breaks down fairly quickly in practice. First, a large set of the problems are not designed to test for higher-order patterns and language expertise, but rather whether you can solve a puzzle with for loops and array manipulation. Second, they're demoralizing. Imagine having all of that language expertise and clean-code experience, maybe even architectural paradigms or domain experience - practical experience that could be directly applicable to the role - only to have your candidacy reduced to a Mensa puzzle in a problem domain you have no context for.

I run enterprise leadership circles, and the general argument is that the FAANGs do this. I would argue that unless you're a FAANG, nobody is pounding down your door for resume clout - and the people behind your organization's technical prowess on any given day couldn't pass these tests without notice, yet they still provide tremendous value.

That said, if you're struggling with Codility and just your brain, you're not using all of the tools at your disposal on the job. Use the AI-assisted coding tools to solve the problem. I would never fault my engineers for using Copilot or ChatGPT to help them out. TBH, I encourage it, with guide rails.

Steve

I fully agree with this. If there is a Codility or similar assessment, I am more likely to decline the opportunity. I have worked with people who are very good at these puzzles but who basically can't code, so I don't think they measure anything other than the ability to do Codility puzzles.

Andrej Rypo

If you were a good developer, you wouldn't mind bugs.... or Codility.

😉
