The theme of this week has been LangChain. As I said last time, I was planning on converting my old script into a LangChain script.
I had the initial idea of sketching out the dataflow in Figma to see how I could craft the chain, but it turned out to be a little more difficult than I'd thought.
I have a major design problem: how do I keep a rotating "inventory" so the generated data stays genuinely varied? Originally, the script would generate ten example user inquiries in a single shot, and then I'd ask it in a separate step to refine those inquiries to make them more varied. That naive approach works fine, but it's pretty inefficient, and the AI ends up using the same general pattern for every single one. The old flow boiled down to something like the sketch below.
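For reference, this is roughly what the old two-step version looked like. It's simplified, it assumes the plain OpenAI chat API (pre-1.0 SDK), and the prompts here are stand-ins rather than my actual ones:

```python
# Simplified sketch of the old script: one call to generate ten inquiries
# at once, then a second call to "vary" them after the fact.
# Assumes OPENAI_API_KEY is set in the environment (pre-1.0 openai SDK).
import openai

def complete(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

# Step 1: generate everything in a single shot.
batch = complete("Write ten example user inquiries for a support chatbot.")

# Step 2: ask for more variety after the fact.
varied = complete(
    "Rewrite these inquiries so they differ more in topic, tone, and phrasing:\n"
    + batch
)
```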
My thinking was that letting the model take the generation in steps might allow for more creativity, which seemed like a perfect use case for LangChain since, as the name implies, it's built around chaining LLM prompts together.
My goal, then, was to generate one inquiry, save it to a file, keep it in memory, and pass it back into the chain to generate the next one. Each time around, the model gets to look at what it wrote previously with fresh eyes instead of doing everything in one shot. This is a common failing with LLMs: they fixate on producing the most "probable" answer to the prompt and won't evaluate their own output unless explicitly asked to. Something like the sketch below is what I'm picturing.
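Here's a minimal version of the loop I have in mind, using LangChain's classic LLMChain and a prompt template. The exact imports depend on the LangChain version, and the variable names, prompt wording, and file name are just placeholders for the sketch, not my real setup:

```python
# Minimal sketch: generate inquiries one at a time, feeding the running
# "inventory" back into the prompt so each new one can differ from the rest.
import json

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["previous_inquiries"],
    template=(
        "Here are the user inquiries generated so far:\n"
        "{previous_inquiries}\n\n"
        "Write ONE new user inquiry that differs in topic, tone, and "
        "phrasing from all of the above. Return only the inquiry text."
    ),
)

chain = LLMChain(llm=ChatOpenAI(temperature=0.9), prompt=prompt)

inquiries = []  # the rotating "inventory" kept in memory
for _ in range(10):
    previous = "\n".join(f"- {q}" for q in inquiries) or "(none yet)"
    new_inquiry = chain.run(previous_inquiries=previous).strip()
    inquiries.append(new_inquiry)

# Save to a file so a crashed run doesn't throw away everything so far.
with open("inquiries.json", "w") as f:
    json.dump(inquiries, f, indent=2)
```

The file write at the end is mostly a safety net; the real "memory" is just the list that gets folded back into the prompt on every iteration.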
I'm also expecting this approach could shorten the turnaround per inquiry, but the extra round trips might make the whole run slower overall. I'll have to do some timed trials to find out.
At the moment, I'm still trying to figure out how to actually accomplish this with the code I have set up. It's definitely harder than usual because LangChain is new, so ChatGPT can't do the designing for me. I have to do it all the old-fashioned way, and it's racking my brain.
There is also a separate platform from LangChain, currently in closed beta, called LangSmith. As far as I can tell, it's a logging and tracing layer that should make classifying and retrieving the data we run through LangChain much easier and more streamlined, but we were having trouble today getting it to actually log anything, which is kind of its core feature.
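For what it's worth, the setup we were attempting is mostly just environment variables set before the chain runs. If I'm reading the docs right, these are the ones that matter; the project name here is only an example:

```python
# Hedged sketch of the LangSmith tracing setup (based on the docs at the
# time of writing; names may differ in later releases).
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "inquiry-generator"  # example project name

# With these set, chain runs should show up as traces in LangSmith
# automatically, with no extra code in the chain itself.
```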
In other news not directly related to coding, my coworker was apparently fired for being dishonest about how much work he was actually putting out, so that's fun. I hope we can find a replacement soon, because it sucks that we have to pick up the pace even more than we already have been.
Anyway, until next week, cheers.