Coding interviews are hard and sometimes stressful. I vividly recall interviewing at Google, who are notorious for their intense interview schedules. I had 6 interviews in one day and a further 3 on another. I was so tired by the end of each day that I was making mistakes and forgetting words. (I was given an offer but declined to take it.)
I've been fortunate to have found confidence in what I do, and have (mostly) overcome my performance anxiety. I have my PhD supervisor to thank for that, as he'd run weekly seminars in which we'd all give talks to the group; sometimes about our work, but sometimes on impromptu topics that we hadn't prepared for. I hated it at first, but it helped me realise that I would be fine as long as I knew the subject I was talking about.
Having been through many coding interviews as a candidate, and having both conducted and designed them for recruitment in my own team, the question of how we do technical assessments is often top of mind. Especially after a recent email exchange with a candidate who flat-out accused us of discriminatory, anti-diversity hiring practices for wanting to do a live-coding exercise. I can see the reasoning, but it's harder for me to see the solution.
I've experimented with different formats for our technical assessment in the past: take-home tests; going over a candidate's past projects with them; live-coding exercises in systems like CoderPad; and physical whiteboarding. None of these is ideal, and I'm very aware that some people will find them more challenging than others. For what it's worth, I try to make allowances for the possibility of interview nerves, and I've occasionally recommended a further interview (usually in a different format) if I think the candidate could do better. While this doesn't undo the challenges or fully level the playing field, I'm proud to say that on two past occasions, candidates who would otherwise have been rejected were hired after that extra interview, and found success in their roles.
On the flip side, we also often find candidates who are clearly unqualified for the role, despite having talked very convincingly about the technology in previous interviews. Such candidates are often unable to complete even the first task of a live-coding exercise, and not because of interview anxiety.
For me, the pros and cons for the different kinds of technical assessment are:
Take-home test
Pros: Closer to real-world work, and therefore, if done correctly, can be a better reflection of a candidate's potential on-the-job performance.
Cons: Tends to be longer-form, requiring a few hours to complete and further time to review. Many candidates will actively decline to do the test if they have other jobs or offers on the table, which can be the deciding factor against take-home tests. Also open to cheating, as we can't be sure who did the work (though a live review of the code can help with that).
Live-coding exercise
Pros: If the candidate is allowed to look things up on Google or in documentation, this is still a close approximation of real-world work compared with whiteboarding sessions. It gives good insight into how a candidate thinks about solving a particular problem, and is more time-efficient than a take-home test.
Cons: Can be subject to interview anxiety. Practical time limits prevent going into depth or covering advanced topics.
Whiteboarding
Pros: Can tackle more abstract questions, such as architectural design and algorithms, without spending a lot of time on the mechanics of integration work.
Cons: Can be subject to interview anxiety. Less representative of real-world work, where one might still successfully Google for the solutions.
Reviewing an existing project
Pros: Allows diving into a larger project the candidate has already completed without taking too much time. Time-efficient for the candidate.
Cons: Requires a suitable project with the right scope to already exist. Harder to compare performance across candidates, as projects vary. Harder for the interviewer to run.
My question to you fine people of this community is: which technical assessment method best balances fairness to the candidate with the recruitment team's need to assess the candidate's abilities accurately and avoid mis-hiring? Is it one of the above options? Or something different entirely?
Top comments (2)
A live coding review of a take-home test, with a max of 2 and a half hours. Also, don't give the typical leetcode questions that will most likely never apply to the job, especially for frontend. Instead, give problems that really simulate real-world web development problems, with algorithms used where they'd naturally apply. Personally, I find the typical leetcode questions discriminatory on the socioeconomic level, similar to how SATs are currently not being accepted by US universities because they benefit the upper and middle class. Unfortunately, they can also be unintentionally discriminatory on the basis of race/ethnicity, since many minorities in the US don't live a middle-class life with adequate educational resources. This is changing, but slowly. Sorry for the rant, but I just did a ridiculous leetcode-like CodeSignal test for a Dropbox apprenticeship. I'm just reflecting on how the US public school system employed too many teachers who didn't care much about preparing students like me to solve real-world problems. Let me know your thoughts! Also, I'm surprised no one commented on this yet...
I'd agree - most leetcode questions aren't reflective of real-world work either, and the fact that the advice given is often "practice these questions" rather than "get real-world experience" shows that the ability to pass these tests is largely an indicator of whether you practice these tests. It's a bit like IQ tests - if you do a lot of IQ tests, you get good at IQ tests, but nobody thinks you should practice IQ tests to get better at everyday skills.