
Friday Blast #63

Horia Coman · Originally published at horia141.com · 2 min read

The observability pipeline (2018) - the case for treating metrics/logs/traces data as “just another dataset” rather than something special. For example, sending it first to a central collection point (Kafka or equivalent) and then dispersing it to all the “targets” that need it - logging infra, metrics and alerting, even a data lake so it’s available for business metrics. The main idea is to avoid the N×M integration problem with this data and introduce some decoupling, so it’s easier to switch infra providers down the road.
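The decoupling idea can be sketched in a few lines. This is a hypothetical in-memory stand-in for a central collection point like Kafka - the names (`ObservabilityBus`, `subscribe`, `publish`) are made up for illustration, not from the article:

```python
from collections import defaultdict

class ObservabilityBus:
    """Hypothetical in-memory stand-in for a central collection point (e.g. Kafka)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, kind, handler):
        # Each target (logging infra, metrics, a data lake) registers once,
        # instead of every producer integrating with every target (N×M).
        self._subscribers[kind].append(handler)

    def publish(self, kind, event):
        # Producers emit to the bus only; they never know about the targets.
        for handler in self._subscribers[kind]:
            handler(event)

bus = ObservabilityBus()
metrics_store, data_lake = [], []
bus.subscribe("metric", metrics_store.append)
bus.subscribe("metric", data_lake.append)  # same data, reused as a business dataset
bus.publish("metric", {"name": "requests_total", "value": 1})
```

With N producers and M targets you maintain N+M connections to the bus instead of N×M point-to-point integrations, and swapping out a target touches only its subscription.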

Extended validation certificates are dead (2018) - the message is important - don’t waste money on EVs and just use a regular certificate. The rest of this (really long) piece seems like it’s taking the piss out of Comodo cybersecurity.

Whatever happened to the semantic web (2018) - remember the “Semantic Web”? It used to be a thing. But it never really caught on. What’s interesting to me is how the politics of this played out - there were a million standards before there was adoption or even a real killer use case for the tech.

TypeScript at Google (2018) - how JavaScript and the whole “frontend thing” evolved at Google vs the rest of the world. An interesting read about the history of this and how the author’s team is trying to sync Google up with what everyone else uses these days. IMO Google has done this numerous times before it learned the value of open source and building platforms (so before they got serious about cloud). They were doing far more advanced things far earlier than anybody else. But they didn’t talk about it, or at most released a paper or three. The world picked up on the ideas and caught up technically, but using its own stuff. Examples abound: Hadoop vs MapReduce, HBase/Cassandra vs BigTable etc. And now GCP offers HBase-compatible interfaces atop their own infra. Thankfully they’re doing well (for them) with Kubernetes and TensorFlow.

Pixie - a system for recommending 3+ billion items to 200+ million users in real-time (2018) - an overview of a Pinterest paper about their recommendation engine. These sorts of systems are the backbones of many companies - Netflix, Pinterest, Facebook etc. - yet they’re usually glossed over in ML courses. They’re interesting in their own right and very tricky to get right, so this is a worthwhile discussion of how the Pinterest team does it.
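The core mechanism in the Pixie paper is a random walk with restarts over the bipartite pin-board graph: visit counts over pins approximate their relevance to a query pin. A toy sketch of that idea, on a made-up graph (node names, parameters, and the `random_walk_recommend` helper are all illustrative, not from the paper):

```python
import random
from collections import Counter

def random_walk_recommend(graph, start, num_steps=10000, restart_prob=0.5, seed=42):
    """Toy random walk with restarts on a bipartite pin-board graph.

    `graph` maps each node to its neighbors: pins connect to boards and
    vice versa. Visit counts over pins approximate relevance to `start`.
    """
    rng = random.Random(seed)
    visits = Counter()
    node = start
    for _ in range(num_steps):
        node = rng.choice(graph[node])  # step to a random neighbor
        if node.startswith("pin") and node != start:
            visits[node] += 1  # count visits to candidate pins only
        if rng.random() < restart_prob:
            node = start  # restart keeps the walk near the query pin
    return visits.most_common()

# Made-up graph: pin2 shares a board with pin1, pin3 is one board further away.
graph = {
    "pin1": ["board1"], "pin2": ["board1", "board2"], "pin3": ["board2"],
    "board1": ["pin1", "pin2"], "board2": ["pin2", "pin3"],
}
ranked = random_walk_recommend(graph, "pin1")
```

The restart probability is what keeps recommendations local: the walk rarely drifts far from the query pin, so pins sharing boards with it dominate the visit counts. The real system adds biasing, per-pin step budgets, and a heavily optimized graph store on top of this skeleton.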
