In this post, I'm going to talk about how I built my own chatbot for my personal web portfolio here.
When I first stumbled across the concept of RAG, I wondered how it was any different from simply pasting data into a ChatGPT prompt and asking it to answer based on that.
Here's why RAG is important:
- More developer control: RAG gives the developer more control over the information sources and how they are presented to the user. You can restrict sensitive information and also serve users the latest data.
- Cost-effective: RAG is much cheaper than training or fine-tuning a model for a domain-specific area.
- Use-case specific: RAG generates output based only on the context provided to it. This lets developers build tailor-made assistants that respond to domain-specific questions instead of giving vague answers outside the model's area of expertise.
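To make the retrieve-then-generate idea concrete, here is a minimal sketch in plain TypeScript. The chunk texts and the word-overlap scoring are toy placeholders for illustration (real pipelines use embedding similarity over a vector store), not the actual setup on my site:

```typescript
// Toy knowledge base: in a real RAG setup these would be embedded
// chunks stored in a vector database.
const chunks: string[] = [
  "Jane is a software engineer who specializes in front-end development.",
  "Jane's portfolio site is built with React and deployed on Vercel.",
  "Jane enjoys hiking and photography on weekends.",
];

// Naive retriever: score each chunk by how many of the question's
// words it contains, then return the top k chunks.
function retrieve(question: string, k: number = 1): string[] {
  const qWords = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return chunks
    .map((chunk) => ({
      chunk,
      score: chunk
        .toLowerCase()
        .split(/\W+/)
        .filter((w) => qWords.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.chunk);
}

// Augmentation step: stuff the retrieved context into the prompt
// that would be sent to the LLM.
function buildPrompt(question: string): string {
  const context = retrieve(question).join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}
```

The key point is that the model never sees the whole knowledge base, only the retrieved slice, which is what keeps answers grounded and on-topic.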
Now let's get to the fun part - actually making a chatbot!
I started out by creating the context for my chatbot: I asked ChatGPT to write a 1000-word text based on my resume to serve as the grounding context for the RAG pipeline.
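A 1000-word context fits in a single prompt, but if the context grows, it helps to split it into overlapping chunks so the retriever can pull back only the relevant pieces. A small sketch of word-based chunking (the chunk size and overlap here are arbitrary illustrative values, not what my site uses):

```typescript
// Split a long context string into overlapping word-based chunks.
// chunkSize must be larger than overlap, or the loop would not advance.
function chunkText(text: string, chunkSize = 120, overlap = 20): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const result: string[] = [];
  for (let start = 0; start < words.length; start += chunkSize - overlap) {
    result.push(words.slice(start, start + chunkSize).join(" "));
    // Stop once a chunk reaches the end of the text.
    if (start + chunkSize >= words.length) break;
  }
  return result;
}
```

The overlap keeps a sentence that straddles a chunk boundary from being lost to the retriever.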
Once I had the context, I used the ChatOpenAI class from LangChain's OpenAI integration to define my model. I decided to go with gpt-3.5-turbo.
I created a prompt asking the LLM to answer questions as if it were an AI version of me, using the data given in the context. I played around with the temperature and prompt for a bit until I finally got satisfactory results.
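Before the messages reach the model, the persona prompt is just string templating. Here is a sketch of the shape such a prompt can take; the wording and the message structure below are illustrative assumptions, not my exact prompt, and in the real app this message pair would be passed to ChatOpenAI rather than just built as strings:

```typescript
// A chat message in the role/content shape used by chat-completion APIs.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Build the persona prompt: the system message sets up the
// "AI version of me" role, and the retrieved context plus the
// visitor's question become the user message.
function personaPrompt(context: string, question: string): ChatMessage[] {
  return [
    {
      role: "system",
      content:
        "You are an AI version of me. Answer in the first person, " +
        "using only the facts in the provided context. " +
        "If the answer is not in the context, say you don't know.",
    },
    { role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` },
  ];
}
```

Lower temperatures keep the answers close to the context; raising it a little makes the persona sound less robotic, which is the trade-off I was tuning.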
This was a fun project that taught me about RAG architectures and gave me hands-on exposure to the LangChain library too.
Make sure to check out my website and try the chatbot for yourself here!