In the realm of education, technology continues to play a pivotal role in transforming learning experiences and knowledge dissemination. One such innovation is the automation of question and answer generation, simplifying the creation of educational materials and assessments. In this article, we'll delve into the development of an application powered by Lyzr's QABot agent capabilities, enabling users to effortlessly generate questions and answers from educational content, particularly PDF files.
The Lyzr-Question Answer Generation app is a cutting-edge tool designed to assist educators and learners in generating insightful questions and comprehensive answers from educational content. Leveraging Lyzr's QABot agent, users can effortlessly extract valuable insights and enhance their learning experiences.
Lyzr
Lyzr offers an agent-centric approach to rapidly developing LLM (Large Language Model) applications with minimal code and time investment. Even if you’re unfamiliar with the GenAI stack, Lyzr empowers you to build your AI applications effortlessly. It is the go-to solution for constructing GenAI apps without requiring an in-depth understanding of Generative AI.
Setting up the Project
Setting up the Question-Answer Generator project is simple. Follow these steps to get started:
Clone the App: Clone the Question-Answer Generator app repository from GitHub.
```shell
git clone https://github.com/PrajjwalLyzr/Question-Generation
```
Create a Virtual Environment: Set up a virtual environment and activate it.
```shell
python3 -m venv venv
source venv/bin/activate
```
Set Environment Variables: Create a `.env` file and add your OpenAI API key.

```
OPENAI_API_KEY="paste-your-openai-api-key-here"
```
Install Dependencies: Install the required dependencies.
```shell
pip install lyzr streamlit
```
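With the dependencies in place, the app can be launched with Streamlit (assuming the entry point is `app.py`, the file covered later in this article):

```shell
streamlit run app.py
```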
Core Components of the Question-Answer Generator App
Let's explore the key components of Lyzr's Question-Answer Generator app:
Utils Module for Common Functions
The `utils.py` file in the project serves as a utility module containing common functions used throughout the application. It includes functions for removing existing files, retrieving files in a directory, and saving uploaded files.
```python
import os
import shutil
from typing import Optional, Literal

import streamlit as st
from dotenv import load_dotenv; load_dotenv()


def remove_existing_files(directory):
    # Delete every file, symlink, and subdirectory inside the given directory.
    for filename in os.listdir(directory):
        file_path = os.path.join(directory, filename)
        try:
            if os.path.isfile(file_path) or os.path.islink(file_path):
                os.unlink(file_path)
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
        except Exception as e:
            st.error(f"Error while removing existing files: {e}")


def get_files_in_directory(directory):
    # This function helps us get the file path along with the filename.
    files_list = []
    if os.path.exists(directory) and os.path.isdir(directory):
        for filename in os.listdir(directory):
            file_path = os.path.join(directory, filename)
            if os.path.isfile(file_path):
                files_list.append(file_path)
    return files_list


def save_uploaded_file(uploaded_file, directory_name):
    # Save an uploaded file, replacing any files already in the directory.
    remove_existing_files(directory_name)
    file_path = os.path.join(directory_name, uploaded_file.name)
    with open(file_path, "wb") as file:
        file.write(uploaded_file.read())
    st.success("File uploaded successfully")
```
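These helpers can be exercised outside Streamlit as well. Below is a minimal sketch, assuming a stand-in object with `.name` and `.read()` in place of Streamlit's uploaded-file object and with the `st.success`/`st.error` calls dropped; all names here are illustrative, not part of the app:

```python
import os
import shutil
import tempfile
from io import BytesIO

def clear_directory(directory):
    # Same logic as remove_existing_files, without the Streamlit error banner.
    for filename in os.listdir(directory):
        file_path = os.path.join(directory, filename)
        if os.path.isfile(file_path) or os.path.islink(file_path):
            os.unlink(file_path)
        elif os.path.isdir(file_path):
            shutil.rmtree(file_path)

def list_files(directory):
    # Return full paths of the regular files directly inside `directory`.
    if not os.path.isdir(directory):
        return []
    return [os.path.join(directory, f) for f in os.listdir(directory)
            if os.path.isfile(os.path.join(directory, f))]

def save_upload(uploaded, directory):
    # Mirror save_uploaded_file: wipe the directory, then write the new file.
    clear_directory(directory)
    path = os.path.join(directory, uploaded.name)
    with open(path, "wb") as fh:
        fh.write(uploaded.read())
    return path

class DummyUpload:
    # Stand-in for Streamlit's UploadedFile: exposes .name and .read().
    def __init__(self, name, data):
        self.name, self._buf = name, BytesIO(data)
    def read(self):
        return self._buf.read()

workdir = tempfile.mkdtemp()
saved = save_upload(DummyUpload("book.pdf", b"%PDF-1.4 demo"), workdir)
```

Because `save_upload` clears the directory first, the app only ever works with one uploaded PDF at a time, which is why `app.py` can safely take the first entry of `get_files_in_directory`.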
Interface for Large Language Model (LLM) Calling
The `llm_calling` function serves as an interface for interacting with large language models (LLMs) provided by OpenAI. It facilitates text generation based on given prompts using specified parameters.
```python
def llm_calling(
        user_prompt: str,
        system_prompt: Optional[str] = "You are a Large Language Model. You answer questions",
        llm_model: Optional[Literal['gpt-4-turbo-preview', 'gpt-4']] = "gpt-4-turbo-preview",
        temperature: Optional[float] = 1,        # 0 to 2
        max_tokens: Optional[int] = 4095,        # 1 to 4095
        top_p: Optional[float] = 1,              # 0 to 1
        frequency_penalty: Optional[float] = 0,  # 0 to 2
        presence_penalty: Optional[float] = 0    # 0 to 2
) -> str:
    if not (1 <= max_tokens <= 4095):
        raise ValueError("`max_tokens` must be between 1 and 4095, inclusive.")

    from openai import OpenAI
    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
    response = client.chat.completions.create(
        model=llm_model,
        messages=[
            {"role": "system", "content": f"{system_prompt}"},
            {"role": "user", "content": f"{user_prompt}"},
        ],
        temperature=temperature,
        max_tokens=max_tokens,
        top_p=top_p,
        frequency_penalty=frequency_penalty,
        presence_penalty=presence_penalty
    )
    return response.choices[0].message.content
```
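Note that `llm_calling` validates only `max_tokens`, even though every sampling parameter carries a documented range in the comments. That guard could be extended to all of them; the sketch below uses illustrative names that are not part of the app:

```python
def check_range(name, value, low, high):
    # Raise the same style of error llm_calling uses for max_tokens.
    if not (low <= value <= high):
        raise ValueError(f"`{name}` must be between {low} and {high}, inclusive.")

def validate_llm_params(temperature=1.0, max_tokens=4095, top_p=1.0,
                        frequency_penalty=0.0, presence_penalty=0.0):
    # Ranges follow the comments next to each parameter of llm_calling.
    check_range("temperature", temperature, 0, 2)
    check_range("max_tokens", max_tokens, 1, 4095)
    check_range("top_p", top_p, 0, 1)
    check_range("frequency_penalty", frequency_penalty, 0, 2)
    check_range("presence_penalty", presence_penalty, 0, 2)

validate_llm_params(temperature=0.7)  # passes silently
try:
    validate_llm_params(max_tokens=5000)
except ValueError as e:
    print(e)
```

Validating up front like this fails fast with a clear message instead of surfacing an API error after the network round trip.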
Kernel for Question Answer Generator Application
The `question_generation` function initializes a QABot instance configured to perform question generation from PDF files. It encapsulates the logic for setting up and initializing the QABot, providing a streamlined interface for generating questions from PDF documents.
```python
import os
from pathlib import Path

from lyzr import QABot
from dotenv import load_dotenv; load_dotenv()

os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')


def question_generation(path):
    # Build a QABot configured for question answering over the given PDF.
    qa_bot = QABot.pdf_qa(
        input_files=[Path(path)],
    )
    return qa_bot
```
Entry Point for the Application (`app.py`)
The `app.py` file defines the core functionality of a Streamlit web application for generating questions and answers from a PDF document. It sets up the sidebar, provides options for selecting the PDF file source, and implements the logic for generating questions and answers based on user input.
```python
import os

import streamlit as st
from PIL import Image
from utils import utils
from lyzr_qa import question_generation

data = "data"
os.makedirs(data, exist_ok=True)


def rag_response(topic, path):
    # Query the PDF-backed QABot for everything it knows about the topic.
    agent = question_generation(path)
    metric = f"""You are an expert of this {topic}. Tell me everything you know about this {topic}. Provide a detailed response on this {topic} from the given file"""
    response = agent.query(metric)
    return response.response


def gpt_questions(response, topic, number):
    # Turn the RAG response into the requested number of questions.
    response = utils.llm_calling(
        user_prompt=f"Develop {number} questions on {response} that are clear, relevant, and specific and that obey the {topic}. "
                    f"[!important] Consider the context and purpose of the inquiry, aiming for open-endedness to encourage discussion or exploration. Engage the audience's curiosity while ensuring the questions prompt meaningful responses",
        system_prompt=f"You are an expert of this {topic}",
        llm_model="gpt-4-turbo-preview")
    return response


def gpt_answers(questions, topic):
    # Generate detailed answers for the previously generated questions.
    answers = utils.llm_calling(
        user_prompt=f"""Provide a detailed response to the following {questions}, including relevant examples, explanations, and, if applicable, diagrams or code examples to illustrate your points. Additionally, consider discussing both the advantages and disadvantages of the topic to provide a comprehensive analysis.
        Your answer should aim to offer a well-rounded understanding of the subject matter, highlighting its complexities and implications.""",
        system_prompt=f"You are an expert of this {topic}",
        llm_model="gpt-4-turbo-preview")
    return answers
```
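Together, `rag_response`, `gpt_questions`, and `gpt_answers` form a three-stage pipeline: retrieve context from the PDF, generate questions from that context, then answer those questions. The wiring can be sketched with stubbed model calls; the `fake_*` functions below are placeholders, not real Lyzr or OpenAI calls:

```python
def fake_rag_response(topic, path):
    # Stands in for rag_response: returns retrieved context as text.
    return f"notes on {topic} extracted from {path}"

def fake_questions(context, topic, number):
    # Stands in for gpt_questions: derives `number` questions from the context.
    return [f"Q{i + 1}: something about {topic}?" for i in range(number)]

def fake_answers(questions, topic):
    # Stands in for gpt_answers: maps each question to an answer.
    return {q: f"a detailed answer about {topic}" for q in questions}

def pipeline(topic, path, number):
    # Same order as app.py: rag_response -> gpt_questions -> gpt_answers.
    context = fake_rag_response(topic, path)
    questions = fake_questions(context, topic, number)
    return fake_answers(questions, topic)

qa = pipeline("Inheritance", "oop.pdf", 3)
```

Because each stage only consumes the previous stage's output, swapping the stubs for the real LLM-backed functions changes nothing about the control flow.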
The `main` function wires these pieces into the Streamlit UI:

```python
def main():
    st.sidebar.subheader('Lyzr - QnA Generator')
    selection = st.sidebar.radio("Select Any Option: ", ["Default File", "Upload File"])

    if selection == 'Default File':
        st.info('Default file: Object Oriented Programming')
        st.markdown("""##### Topics can be:
        1. Inheritance
        2. Polymorphism
        3. Abstraction
        4. Encapsulation""")
        path = './Object Oriented Programming.pdf'
        user_topic = st.text_input('Enter the topic according to subject')
        number_questions = st.text_input('Enter the number of questions')
        if user_topic:
            if st.button('Submit'):
                rag_generated_response = rag_response(topic=user_topic, path=path)  # get a response from RAG about the subject/topic
                gpt_response = gpt_questions(response=rag_generated_response, topic=user_topic, number=number_questions)  # create n questions from the RAG response
                gpt_answer = gpt_answers(questions=gpt_response, topic=user_topic)  # create the answers for the questions
                st.subheader('Questions')
                st.write(gpt_response)
                st.markdown('---')
                st.subheader('Answers')
                st.write(gpt_answer)

    if selection == 'Upload File':
        file = st.file_uploader("Upload a Subject Book Pdf", type=["pdf"])
        if file:
            utils.save_uploaded_file(file, directory_name=data)
            path = utils.get_files_in_directory(directory=data)
            filepath = path[0]  # get the first file path
            user_topic = st.text_input('Enter the topic according to subject')
            number_questions = st.text_input('Enter the number of questions')
            if user_topic:
                if st.button('Submit'):
                    rag_generated_response = rag_response(topic=user_topic, path=filepath)  # get a response from RAG about the subject/topic
                    gpt_response = gpt_questions(response=rag_generated_response, topic=user_topic, number=number_questions)  # create n questions from the RAG response
                    gpt_answer = gpt_answers(questions=gpt_response, topic=user_topic)  # create the answers for the questions
                    st.subheader('Questions')
                    st.write(gpt_response)
                    st.markdown('---')
                    st.subheader('Answers')
                    st.write(gpt_answer)
        else:
            st.warning('Please upload a subject PDF file!')


if __name__ == "__main__":
    main()
```
Executing the Application
The application offers a user-friendly interface for generating questions and answers:
Default File Option: Users work with the bundled Object Oriented Programming PDF, enter a topic (such as Inheritance or Polymorphism), and specify the number of questions to generate.
Upload File Option: Users can upload a PDF document containing the subject matter, input the topic, and specify the number of questions.
Upon submission, the application generates questions and answers based on the selected topic and displays them for the user.
With Lyzr's Question-Answer Generator, educators and learners can streamline the process of creating educational materials and assessments. By harnessing the power of AI, users can generate insightful questions and comprehensive answers from educational content with ease. Revolutionize education today with Lyzr.ai!