I was writing my dev journal on Friday, but I never finished it and apparently the website never saved my draft, so I guess I'll just start over again. I'm probably going to be writing these late for the next two weeks anyway since my wedding is coming up, and then a 3-week honeymoon after that.
This week picked up where last week left off with LangChain. This time, instead of trying to convert my old script from scratch, my boss wanted me to build something new and very basic.
He provided me with some sample code he had from a couple of months ago, but it was horribly incomplete, and it took me a few days to figure out how to actually get it to output responses from the LLM.
Naturally, since this library is new, the syntax is always evolving and changing, and that was the case here. I spent about two days with my coworker trying to figure out why it kept failing to read the chat history, but it eventually turned out that ConversationBufferMemory has a "return_messages" parameter that needs to be enabled for it to actually read the messages in prompt-template form.
I don't know exactly what the parameter does to make it work, but as with most solutions, I found it by scrounging forums online. Anyway, after that, the bot was finally able to keep a buffer memory of the conversation and could recall anything that was said earlier.
The next task was making it actually go back and forth, which I did with a simple input() loop and f-strings. So it works pretty much like ChatGPT, just in a command line.
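The loop itself is nothing fancy. Here's a sketch of the shape, with a stand-in `respond()` function where the real script would call into the LangChain conversation chain (the function name and echo reply are made up for illustration):

```python
def respond(user_input: str) -> str:
    # Stand-in for the real call into the LangChain conversation chain,
    # e.g. something like chain.predict(input=user_input).
    return f"(echo) {user_input}"

def chat_loop() -> None:
    # Simple command-line back-and-forth, ChatGPT-style.
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        print(f"Bot: {respond(user_input)}")

# Call chat_loop() to start chatting from the terminal.
```
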
Once we got that working correctly, my coworker and I started adding Redis caching so that it can pick up saved conversations, and after some fiddling with the exact mechanics, that worked out too. You can close the command-line script and it will pick up the prior conversation you had.
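I won't reproduce the exact Redis wiring here (in the LangChain versions we were on, it goes through a Redis-backed chat message history plugged into the memory object), but the pick-up-where-you-left-off pattern is easy to sketch with a plain dict standing in for the Redis store. Everything below (the store shape, the session key, the role names) is illustrative, not our actual schema:

```python
import json

# Stand-in for Redis: a dict mapping session IDs to JSON-serialized
# message lists. In the real script this would be a Redis instance
# accessed through LangChain's chat-history wrapper.
fake_store: dict[str, str] = {}

def save_history(session_id: str, messages: list) -> None:
    fake_store[session_id] = json.dumps(messages)

def load_history(session_id: str) -> list:
    # Returns the prior conversation, or an empty list for a new session.
    return json.loads(fake_store.get(session_id, "[]"))

# First "run" of the script: chat, then persist.
save_history("session-1", [
    {"role": "human", "content": "Hi"},
    {"role": "ai", "content": "Hello!"},
])

# Second "run": the script resumes the conversation it saved.
resumed = load_history("session-1")
print(resumed[-1]["content"])  # Hello!
```

Keying the store by a session ID is what lets the script find its own prior conversation after a restart while keeping different users or sessions separate.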
We do want it to actually show the chat history once it loads, but one step at a time. LangChain also has the capability to "stream" tokens so that whatever is being generated displays in real time, like ChatGPT, so I'd like to implement that too.
Otherwise, pretty successful week. Cheers.