Mike Young

Originally published at notes.aimodels.fyi

Question Answering and Document Analysis with LangChain and DeepInfra

Welcome! Today, we're diving into the concept of Question Answering in Document Analysis. Specifically, we're looking at how a tool like LangChain can enhance this process. Question Answering (QA) in document analysis is the art and science of extracting precise information from a collection of documents in response to a specific query. With advances in AI and Natural Language Processing (NLP), this process has become increasingly automated, reliable, and efficient. That's where LangChain comes into play.

Overview of LangChain

LangChain, when integrated with DeepInfra, becomes a potent tool for document analysis. DeepInfra provides a suite of large language models (LLMs) that you can harness for various AI applications, while LangChain provides the infrastructure to link these models together in a pipeline tailored to your specific needs. You can find an extensive list of LLMs to use with LangChain at AIModels.fyi, a platform that lets you search, filter, and sort AI models, making it easier to find the right model for your AI project. For instance, you could use gpt-neo-125M, dolly-v2-12b, or flan-t5, depending on your specific needs.

Setting Up DeepInfra with LangChain

Let's now set up the DeepInfra ecosystem within LangChain. This setup involves a series of steps: setting the environment API key, creating a DeepInfra instance, setting up a prompt template for question and answer, and finally running an LLMChain. For the uninitiated, an LLMChain pairs a prompt template with a language model, so that your inputs are formatted through the template and passed to the model to produce an answer.

Setting the Environment API Key

First, make sure to obtain your API key from DeepInfra. You'll have to log in and get a new token. If you're new to DeepInfra, you'll be glad to know that you get an hour of free serverless GPU compute to test different models.

Once you have your API key, you can set it in your environment:

from getpass import getpass
import os

# Prompt for the token without echoing it, then expose it to LangChain
DEEPINFRA_API_TOKEN = getpass()
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN

Creating the DeepInfra Instance

Next, create a DeepInfra instance using your model of choice. In this case, we are using the 'databricks/dolly-v2-12b' model.

from langchain.llms import DeepInfra

# The instance picks up DEEPINFRA_API_TOKEN from the environment
llm = DeepInfra(model_id="databricks/dolly-v2-12b")
# Generation settings: sampling temperature, repetition penalty, output length, nucleus sampling
llm.model_kwargs = {'temperature': 0.7, 'repetition_penalty': 1.2, 'max_new_tokens': 250, 'top_p': 0.9}
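Before wiring the model into a chain, it's worth a quick sanity check. In this generation of LangChain, LLM objects are callable with a plain prompt string, so a minimal smoke test might look like this:

# Call the model directly with a raw prompt to confirm the token and model work
print(llm("What is the capital of France?"))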

Creating a Prompt Template

To streamline the question-answering process, we'll create a prompt template for the question and answer. This provides a structured approach for our queries.

from langchain import PromptTemplate

template = """Question: {question}\nAnswer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
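If you want to see exactly what the model will receive, you can render the template yourself; PromptTemplate exposes a format method that fills in the variables:

# Produces the full prompt text with the question substituted in
print(prompt.format(question="Can penguins reach the North pole?"))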

Initiating and Running the LLMChain

Finally, we can initiate the LLMChain and run it:

from langchain import LLMChain
llm_chain = LLMChain(prompt=prompt, llm=llm)
# Provide a question and run the LLMChain
question = "Can penguins reach the North pole?"
llm_chain.run(question)
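Note that run returns the completion as a plain string, so in a script you'd typically capture or print it. If you want to answer several questions in one pass, LLMChain also has an apply method that takes a list of input dicts (a minimal sketch using the chain above):

# Run the chain once per input and collect the generations
questions = [
    {"question": "Can penguins reach the North pole?"},
    {"question": "Do polar bears live in Antarctica?"},
]
results = llm_chain.apply(questions)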

This setup will provide you with an LLMChain ready for document analysis and question-answering tasks.

In-depth Explanation of Document Analysis and Question Answering Process

LangChain and DeepInfra make document analysis and question answering a smooth process. To further illustrate how this process works, let's dive into a detailed explanation.

Loading Documents

The first step in this journey involves loading your documents. You can use a TextLoader provided by LangChain:

from langchain.document_loaders import TextLoader

# Point the loader at the file you want to analyze
loader = TextLoader('../state_of_the_union.txt')
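The loader doesn't read anything until you ask it to. If you want to inspect the documents or chunk them yourself before indexing, here's a minimal sketch using one of LangChain's text splitters:

from langchain.text_splitter import CharacterTextSplitter

# load() returns a list of Document objects
documents = loader.load()
# Split long documents into ~1000-character chunks for embedding
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)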

Creating Your Index

Once the documents are loaded, create an index over your data. This index is used to efficiently retrieve relevant documents given a query, thus saving you time and resources. In LangChain, this can be done using a VectorstoreIndexCreator:

from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
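One caveat: out of the box, VectorstoreIndexCreator falls back to its default embedding model (OpenAI's, which needs a separate API key) and vector store. If you'd rather keep the stack local, the creator accepts custom components; here's a sketch, assuming the chromadb and sentence-transformers packages are installed:

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,             # in-memory vector store
    embedding=HuggingFaceEmbeddings(),  # local sentence-transformers embeddings
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
)
index = index_creator.from_loaders([loader])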

Querying Your Index

Finally, query your index to fetch relevant documents. Let's say you want to know what the president said about a certain individual. You could do:

query = "What did the president say about Ketanji Brown Jackson"
index.query(query, llm=llm)

You can also use query_with_sources to get back the sources involved:

query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query, llm=llm)
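Unlike query, query_with_sources returns a dictionary that includes the question, the generated answer, and the sources the answer was drawn from:

result = index.query_with_sources(query, llm=llm)
print(result["answer"])   # the model's response
print(result["sources"])  # the document(s) the answer came from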

Additional Features and Advanced Usage

LangChain also provides advanced features such as question answering with sources, where the language model cites the documents used to generate the response. Here's how you can get started:

from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# `docs` must be a list of relevant Document objects (see the retrieval sketch below)
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
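The chain expects you to supply the relevant documents yourself; it then stuffs them all into a single prompt, which is what chain_type="stuff" refers to. One way to obtain docs, assuming the index built earlier, is a similarity search against its underlying vector store:

# Fetch the documents most similar to the query from the index's vector store
docs = index.vectorstore.similarity_search(query)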

Conclusion

LangChain, when integrated with DeepInfra, provides a versatile and powerful tool for question answering in document analysis. It lets you use AI models efficiently and build workflows with ease, making the process of understanding and extracting information from your documents a breeze. We encourage you to explore LangChain and DeepInfra and put their capabilities to work in your own applications. Happy experimenting!

Subscribe or follow me on Twitter for more content like this!
