Jason Blake for Triplebyte

Originally published at triplebyte.com

We built Triplebyte for machine learning engineers - here's what we learned.

This article is credited to the Triplebyte Team.

We’re delighted to announce that we’ve just launched our brand-new machine learning track. We’ll now be helping machine learning engineers find jobs in the same way that we’ve already helped generalist, front-end, and mobile engineers.

ML is an exciting field. Machine learning, in one form or another, is driving many of the most innovative startups in Silicon Valley. On our platform alone, we have companies building autonomous vehicle fleets (Zoox), reducing food waste by predicting produce demand (Afresh), powering AI-backed materials science research (Citrine), and matching patients with mental health providers (Lyra). Nor is machine learning limited to cutting-edge applications: even simple AI-backed tools like automatic style matching can save hundreds of hours that would otherwise be spent on tedious manual work.

Companies see the value in ML, and they're building out their ML teams at a breakneck pace. It’s not just startups, either; established names like Apple have already posted roles for our ML track. Capable ML engineers are rare, and they are hot commodities - companies both large and small are eager to hire as many as they can get. We’re no exception! Triplebyte couldn't exist the way it does today without our ML team. As much as we want to make hiring more pleasant for everyone involved, we couldn’t do any of the things we do if companies didn’t trust our algorithm to match them with people they want to hire.

We’ve been hiring ML engineers ourselves for years, but we also reached out to numerous companies to ask them what they look for in their own hiring processes. Our new interview is based on a blend of the two, tuned for consistency and for extracting as much signal as possible from two hours. Once the ML track has been out for a few months, we'll be back with more comprehensive data - but, for now, we thought we'd share the big takeaways from our research into ML hiring. Here's what you need to know if you're in the market for a machine learning job:


First and most importantly, raw coding skills matter. This was the common theme we heard over and over again as we spoke to companies hiring for ML roles, and it makes sense. These companies want something built, not just theorized, and most of them aren't working on problems that demand fundamentally new model architectures. Even our own algorithms that match engineers with companies are not in themselves revolutionary. We use relatively small tweaks on standard approaches, albeit with lots of fine-tuning by our resident experts (who were kind enough to provide many of the suggested reading links below).

Contrary to many engineers’ fears, companies would generally rather have someone who can code but lacks a deep academic background than a Ph.D. who can’t actually spin up a model. As a result, our new ML interview begins (like our other interviews) with a coding challenge. If you’re pitching yourself on the ML job market, emphasize what you can build - not just what you know. If you’re new to the field and haven’t built anything yet, try it out! Build a basic neural network or a random forest classifier, then try training it on one of the many open data sets.
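To make that suggestion concrete, here's a minimal sketch of training a random forest classifier on an open dataset - we use scikit-learn's bundled iris data purely as an illustration (the library and dataset choices are ours, not part of our interview):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small, well-known open dataset: 150 iris flowers, 3 species.
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data so we can measure generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a random forest and score it on the held-out set.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A dozen lines like these won't impress an interviewer on their own, but extending them - trying a messier dataset, tuning the model, measuring where it fails - is exactly the kind of hands-on work that demonstrates you can build, not just theorize.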

Second, experience with real-world production concerns counts for a lot. ML expertise is hard to evaluate unless you’re an expert yourself, which many people hiring ML engineers are not. It’s especially hard to detect problems in a system that’s already deployed, because machine learning is famously opaque. As a result, companies tend to lean on experience as a proxy for knowledge of production issues, because they (probably rightly) don’t feel they can evaluate that ability precisely enough directly. This means that switching into ML from another field depends, even more than in other engineering disciplines, on getting your foot in the door the first time. Personal projects can help, but they need to be substantial - many of the issues that come up in production ML systems don’t become obvious until they’ve scaled up or have been running for a while.

We don’t like to focus on experience, as we’ve written about numerous times before. Experience is a meaningful proxy, but it is not particularly precise and only exacerbates existing divides in the industry. Our ML interview is every bit as background-blind as the interviews for our other tracks are: if you know your stuff, we’ll help you find a job even if you don’t have any experience at all. That said, we agree with many employers that knowledge of production concerns matters a lot, and we feature those concerns heavily both in our new interview and in our own hiring process. If you’d like to read more about validating and troubleshooting systems in production, check out this article on cross-validation, this article on hyperparameter tuning, or this webinar on running k-nearest-neighbors in production.
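As a quick illustration of the cross-validation idea mentioned above, here's a sketch using scikit-learn (the dataset and model are placeholder choices of ours): instead of trusting a single train/test split, you score the model on several rotating held-out folds, which gives a far more honest estimate of how it will behave on data it hasn't seen.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Putting the scaler inside the pipeline keeps each fold's held-out
# data out of the preprocessing fit - a classic production pitfall.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Five rotating train/validate splits; each sample is held out exactly once.
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A large spread across folds is itself a warning sign: it suggests the model's performance depends heavily on which data it happened to see, which tends to get worse, not better, once the system is live.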

Finally, don’t forget data analysis skills. Being a data scientist doesn’t make you an ML engineer, but any good ML engineer should have some basic knowledge of data science. This goes hand in hand with the previous point, because many production problems are detectable through subtle statistical signatures. A great example of this sort of sanity checking comes from political modeler Nate Silver of FiveThirtyEight. He could easily have taken incoming polls at face value to build a model, but some moderate statistical analysis showed that the incoming data was subtly skewed by pollsters’ desires not to deviate from the crowd, which demanded model adjustments to compensate.
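Here's a toy version of that kind of sanity check, on simulated data (this is our own illustration, not FiveThirtyEight's actual methodology): polls that "herd" toward the consensus show noticeably less spread than pure sampling error predicts, and a simple variance comparison surfaces the skew.

```python
import numpy as np

rng = np.random.default_rng(0)
true_support, n_respondents, n_polls = 0.52, 1000, 50

# Honest polls: each result varies by pure binomial sampling error.
honest = rng.binomial(n_respondents, true_support, n_polls) / n_respondents

# "Herded" polls: each result is nudged halfway toward the running
# average of earlier polls, mimicking pollsters' reluctance to deviate.
herded = honest.copy()
for i in range(1, n_polls):
    herded[i] = 0.5 * herded[i] + 0.5 * herded[:i].mean()

# Sampling theory predicts the standard deviation across honest polls.
expected_sd = np.sqrt(true_support * (1 - true_support) / n_respondents)
print(f"expected sd from sampling alone: {expected_sd:.4f}")
print(f"observed sd, honest polls:       {honest.std():.4f}")
print(f"observed sd, herded polls:       {herded.std():.4f}")
```

The herded polls cluster more tightly than sampling error alone allows - data that is "too clean" is itself a statistical signature, and noticing it is exactly the kind of check this section is about.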

Companies (rightly) care about these statistical skills and the common sense to know when to apply them, and our interview contains a data analysis section to test them. If you’ve ever caught something of this sort in a production environment (whether professionally or in a hobbyist project), it’s a good idea to play it up. If you haven’t, it’s a great idea to do some reading on basic data science even outside of ML.


Whether you’re a veteran looking for a streamlined job-search process or a newcomer curious how you stack up against the industry, we’d love to have you try out our new ML quiz - it costs nothing but a few minutes of your time.

If you’re new to Triplebyte and you’d like to know more about our process, check out our main home page. The tl;dr is that we use vetted ML models to make your job hunt more rigorous and data-driven. If you do well on our interview, we’ll help you with a hassle-free job hunt that fast-tracks you right to final onsites with top companies and exciting startups (including all the companies discussed earlier in this article). If you don’t, that’s okay - we’ll give you personalized, actionable feedback about where you did well and where you can improve. The process is completely free for engineers no matter what - companies pay us because we make their hiring process more effective.

Our ML track is brand new, and we’re sure there are ways we can improve it. We’ll no doubt be making tweaks to it in the coming weeks and months as we gather more data. (In fact, we have some exciting ideas in the wings for our other interviews, as well - more to come on this later.) If you think there’s something we can do better, or something you loved and think we should keep, we’d love to hear your feedback. You can reach us at ml-feedback@triplebyte.com.
