Our co-founders Henrik and Johannes first met in January during a seminar at the Hasso Plattner Institute in Potsdam. It was a one-week seminar that ran from early morning until late at night. At the time, Johannes had just started an AI consultancy and was about to land its first big project. He was euphoric: soon he would get to implement a large neural network to process image data at scale in a real-world project.
After an in-depth discussion with Henrik, he realized he wasn't prepared for it. The project was deeply rooted in physics, and the funny thing is, Johannes had failed almost every physics exam in school. He had great knowledge of the latest AI frameworks and architectures and could write decent ETL pipelines, but he had zero knowledge of the domain he was meant to build for.
Henrik, who had studied physics before starting his master's degree, offered to help. They decided to implement the project together, and after finishing it successfully (which, looking back, is a bit of a miracle), they realized two things:
- Henrik and Johannes would make an awesome founder duo
- To implement a successful project, AI alone isn't going to cut it. It requires both engineers and business users (you might argue, "that is something you can read on Forbes" - they had, but realizing it hands-on in a project was something completely different).
During 2020, both continued to implement projects but realized they'd like to start building their own software. By the end of 2020, they decided to turn their consultancy into a software startup. This is how Kern AI was born.
"We know a lot about AI and have built great projects that created value for clients, but we certainly are missing lots of domain knowledge. Why not build a No-Code AI tool, and let the end user implement the AI?", share Johannes and Henrik about their thinking process.
Together, they built the first mockup in November '20, signed an agreement with a client by December '20, and developed the MVP in January '21. It was about to go into production, and Henrik and Johannes were about to witness their first SaaS client succeed, right? ... Wrong ...
Our first (failed) product: onetask
We called the SaaS onetask (you had to do just one task to build the AI). As you labeled data, a model was trained in the background, which you could then call in a small playground or via an API.
In February '21, both received the first feedback from the client and were shocked: the AI was just as good as random guessing. It had learned nothing. In addition, the people who labeled the data felt insecure about "building an AI". Henrik and Johannes figured out two new things:
- The AI was fed with training data that, as a data scientist, you wouldn't consider training data. It simply wasn't good enough (they had done plenty of projects and faced bad data before, but because they had always been able to fix data issues with their own technical knowledge, they hadn't realized how big this obstacle would be).
- Being involved in building AI doesn't mean building the AI yourself. The users felt insecure. But doesn't No-Code always win? Well, most No-Code applications produce deterministic results: connecting your Webflow form to HubSpot via Zapier means that a new inbound lead is always sent to the CRM. Building AI, by contrast, means building statistical applications whose results are probabilistic. It's a whole new level of complexity.
As both were trying to figure out with the client how to improve the AI, a developer on the client's side asked Henrik why they didn't automatically label the data via rules and let the users label only parts of it. This simple question was core to our product pivot:
- Give superpowers to technical users. Our main user shifted from a non-technical user to a developer. Understand what they require to build AI, and help them build that.
- Optimize the collaboration with end users (or, more generally, wherever the technical user needs help), but keep a clear separation of responsibilities.
At that time (the full team was still enrolled at university), Johannes heard about data-centric AI in research: a concept in which developers focus on building the training data of an AI system in collaboration with domain experts. "Jackpot, that's it!" They looked for another early client, pitched the concept to its data science team (i.e., again, they went to the end users first), and outlined a project.
In May '21, we had the next MVP.
Early signs of the right direction
We saw that our client's data science team's initial training data was an Excel spreadsheet that had been partially labeled years ago. Think of column A containing the raw content to be predicted and column B (partially) containing what the model should predict. No documentation at all. Yikes.
Because of this, in the following project, our goals were:
- To give data scientists more control in building the AI
- To let domain experts collaborate actively (something we had known was crucial from day one)
Our MVP gave the data scientists a toolkit for labeling automation, initially to fill in missing manual labels. To set up the automation, we asked the domain experts to label some data with us in a Zoom session and to speak their thoughts out loud as they labeled it.
Turns out this 2-hour session was worth a ton. Why?
- The data scientists learned more about the data itself. Of course, they weren't completely new to the field, but no domain expert had ever said out loud what they were thinking about a record before.
- In the call, we turned those thoughts into code (think little Python snippets - see the sketch after this list), and ran our software to combine the heuristics with some active learning (i.e., machine learning on the data labeled in the session). Seeing the labeling turn more and more into automation, the domain experts were excited at the end of the call, feeling they were an active and integral part of the process.
- Lastly, the data scientists had a much better foundation on which to build models. Their training data now contained more labels, and better ones (we found tons of mislabeled data in the process).
- Furthermore, the data was documented via the automation and was increasingly becoming part of an actual software artifact.
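To make this concrete, here is a minimal sketch of what turning spoken domain knowledge into labeling heuristics can look like. The labels, rules, and majority-vote combination are our own illustrative assumptions (not refinery's actual API), and the active-learning component is omitted:

```python
# Turning domain experts' spoken rules into labeling heuristics (illustrative).
from collections import Counter

ABSTAIN = None  # a heuristic returns this when it has no opinion on a record

def mentions_refund(record):
    # "Whenever someone asks for their money back, it's a complaint."
    return "COMPLAINT" if "refund" in record["text"].lower() else ABSTAIN

def mentions_thanks(record):
    # "If they thank us, it's usually praise."
    return "PRAISE" if "thank" in record["text"].lower() else ABSTAIN

HEURISTICS = [mentions_refund, mentions_thanks]

def weak_label(record):
    """Combine heuristic votes by majority; abstain on silence or ties."""
    votes = Counter(fn(record) for fn in HEURISTICS)
    votes.pop(ABSTAIN, None)  # ignore abstentions
    if not votes:
        return ABSTAIN
    ranked = votes.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return ABSTAIN  # conflicting heuristics: leave this record to a human
    return ranked[0][0]

records = [{"text": "Thank you, great support!"}, {"text": "I want a refund."}]
for r in records:
    print(r["text"], "->", weak_label(r))
```

The records that no heuristic covers (or where heuristics conflict) are exactly where manual labeling and active learning come in.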
Ultimately, the data science team built a new model on top of the iterated training data, raising the F1-score from 72% to 80%. In non-technical terms, this means you can trust the model's predictions considerably more.
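For the curious: the F1-score is the harmonic mean of precision and recall. A quick sketch of the arithmetic (the counts below are invented purely for illustration):

```python
# F1-score = harmonic mean of precision and recall (counts are illustrative).
tp, fp, fn = 80, 20, 20  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # share of predicted positives that were correct
recall = tp / (tp + fn)     # share of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# -> precision=0.80 recall=0.80 f1=0.80
```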
We found that we were heading in the right direction. Our next questions were: "What exactly do we need to build, and how can we best ship it to developers?"
To answer the first question better than anyone else, we realized in early 2022 that we had to win the hearts of developers. And that - for many good reasons - typically means going open-source.
We went open-source - version 1.0 of “Kern AI refinery”
Fast forward to July '22 (after many further product iterations and a full redesign): we open-sourced our product under a new name, Kern AI refinery (the origin of the name is simple: we want to improve, i.e., refine, the foundation for building models).
We decided to focus fully on natural language processing (NLP), both because we had seen refinery perform exceptionally well in NLP use cases in the past and because we were incredibly excited about what the future of NLP might bring (this was before ChatGPT, by the way).
On launch day, we were trending on Hacker News and quickly gained interest from developers all over the world. From the feedback we got, we saw that refinery was moving exactly in the direction we had hoped.
Shortly after the release, we had more than 1,000 stars on GitHub (i.e., GitHub users expressing that they like the project), hundreds of thousands of views on the repository, and dozens of people telling us about the use cases they had implemented via refinery. We were thrilled and started digging deeper.
This brings us to today.
Announcing our seed funding, co-led by Seedcamp and Faber with participation from xdeck, another.vc and Hasso Plattner Seed Fund
We are happy to announce that Seedcamp and Faber co-led our seed funding of €2.7m.
Our investors share our vision of bringing data-centric NLP into action and trust us to build Kern AI by focusing on end users first. We're thrilled to have their support and backing, and we now aim to continue expanding our platform.
To that end, today we're announcing the release of our data-centric NLP platform.
It is the result of our insights and efforts since we started Kern AI. What makes it stand out?
- It puts users in their respective roles while sparking collaboration and creativity. bricks (our content library) is connected with refinery (database + application logic), such that developers can turn an idea into an implementation within literally seconds. Why does that matter? Because that way, devs and domain experts can validate ideas immediately.
- It can do both the sprint and the marathon. Prototype an idea in an afternoon, and you automatically have the setup to grow your use case over time - just like regular software.
- You can use it for both batch data and real-time streams. Start by uploading an Excel spreadsheet into refinery, and over time grow your database via native integrations or by setting up your own data stream via our commercial API (gates).
- It is flexible. Are you using crowd labeling to annotate your training data? No problem - you can integrate crowd labeling into refinery. Do you already have a set of tools? That works too; refinery even comes with native integrations for tools like Label Studio. The more familiar you get with the platform, the more use cases you will see. That's what gets us excited: sparking creativity.
- It can power your own NLP product as the database, serve as your NLP API, or even cover a full end-to-end workflow. Use cases range from building sophisticated applications down to implementing small internal natural-language-driven workflows.
Our team is genuinely excited about what comes next. We believe that NLP is just about to get started, and it will disrupt almost anything touched by technology. And we’re confident that our work will contribute to it.