trannguyenhung011086

Build a small Q&A bot with OpenAI

Recently I registered a developer account with OpenAI to play around with it, because it is quite popular these days and I was curious to see what I could use it for. While I do not fully understand how OpenAI performs all that magic behind the scenes, I find it very useful and time-saving when applied properly to a business problem at a high level.

So, taking inspiration from a course on Educative (https://www.educative.io/courses/build-open-ai-applications-using-javascript), which is a nice introduction to using OpenAI with JavaScript, I created a new repo implementing a simple application that parses and analyzes documents as a Q&A bot.

Some use cases I can think of:

  • You are a headhunter and want a quick scan of a candidate's resume to surface his/her most notable skills.
  • You work in customer support and want to upload a knowledge base document so users can easily ask questions about it.

OpenAI can process many kinds of documents, but to keep the learning scope manageable, this project focuses on parsing resume PDF files only.

Folder Structure

/my-monorepo
│
├── /apps
│   ├── /backend    # Express.js backend
│   │   ├── src
│   │   │   ├── index.ts
│   │   │   └── routes.ts
│   │   ├── package.json
│   │   ├── tsconfig.json
│   │   └── .env
│   └── /frontend   # React.js or any frontend framework
│       ├── src
│       │   ├── index.tsx
│       │   └── App.tsx
│       ├── package.json
│       └── tsconfig.json
├── package.json    # Monorepo-level package.json with workspace settings
└── .gitignore

Basically, I use a monorepo approach with npm workspaces to develop the backend and frontend parts separately. Eventually they will be deployed as separate services, just like in real-world projects.
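For reference, the monorepo-level package.json can be as small as this (a minimal sketch; the actual file in the repo may differ):

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["apps/backend", "apps/frontend"]
}
```

With this in place, `npm install` at the root links both apps, and each can be run with `npm run <script> --workspace=apps/backend` (or `apps/frontend`).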

Backend

The backend is an Express.js app that exposes a POST /api/retrieve-answer endpoint, which passes the question and the document through several methods from the langchain library to retrieve the final answer.
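As a rough sketch of the endpoint's handler logic (the names `RetrieveAnswer` and `handleRetrieveAnswer` are my own for illustration; the real route wires in the LangChain-based methods):

```typescript
// Hypothetical signature for the LangChain-backed retrieval step.
type RetrieveAnswer = (fileUrl: string, question: string) => Promise<string>;

type HandlerResult = {
  status: number;
  body: { answer?: string; error?: string };
};

// Framework-agnostic handler logic: validate input, call the retriever,
// and map success/failure to an HTTP-style result.
async function handleRetrieveAnswer(
  retrieve: RetrieveAnswer,
  payload: { fileUrl?: string; question?: string },
): Promise<HandlerResult> {
  const { fileUrl, question } = payload;
  if (!fileUrl || !question) {
    return { status: 400, body: { error: "fileUrl and question are required" } };
  }
  try {
    const answer = await retrieve(fileUrl, question);
    return { status: 200, body: { answer } };
  } catch {
    return { status: 500, body: { error: "failed to retrieve answer" } };
  }
}
```

In the Express route this would be called from the POST /api/retrieve-answer handler with `req.body` and the retriever injected, keeping the validation and error mapping easy to unit-test.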

The overall flow looks like this:
(flow diagram omitted)

The structure is that of a basic Express app, with middleware, route handlers, commands, etc. to handle the above workflow.

├── src
│   ├── assets
│   ├── commands
│   ├── env.ts
│   ├── index.ts
│   ├── methods
│   ├── middlewares
│   ├── repos
│   ├── routeHandlers
│   ├── routes.ts
│   ├── services
│   └── types

For demo purposes, I do not use a real data store such as Postgres to persist files. Instead, those files are stored as public assets and exposed as static files via the Express server.

Also, to avoid consuming too many OpenAI credits when invoking their API, I implemented a simple in-memory LRU cache, createLRUCacheProvider, to return cached answers for known questions.

// Types used by the cache provider below.
type LRUCacheProviderOptions = {
  ttl: number; // time to live, in milliseconds
  itemLimit: number; // maximum number of entries
};

type CacheEntry<T> = {
  value: T;
  expiry: number;
};

type LRUCacheProvider<T> = {
  has: (key: string) => boolean;
  get: (key: string) => T | undefined;
  set: (key: string, value: T) => void;
};

export function createLRUCacheProvider<T>({ ttl, itemLimit }: LRUCacheProviderOptions): LRUCacheProvider<T> {
  const cache = new Map<string, CacheEntry<T>>();

  return {
    has: (key: string) => {
      const entry = cache.get(key);

      if (!entry) {
        return false;
      }

      if (entry.expiry < Date.now()) {
        cache.delete(key);
        return false;
      }

      // Move accessed item to the end (most recently used)
      cache.delete(key);
      cache.set(key, { ...entry, expiry: Date.now() + ttl });
      return true;
    },
    get: (key: string) => {
      const entry = cache.get(key);

      if (!entry) {
        return undefined;
      }

      if (entry.expiry < Date.now()) {
        cache.delete(key);
        return undefined;
      }

      // Move accessed item to the end (most recently used)
      cache.delete(key);
      cache.set(key, { ...entry, expiry: Date.now() + ttl });

      return entry.value;
    },
    set: (key: string, value: T) => {
      const entry = cache.get(key);
      if (entry) {
        cache.delete(key);
      }

      // Get the first item (least used)
      if (cache.size >= itemLimit) {
        const lruKey = cache.keys().next().value;
        if (lruKey) cache.delete(lruKey);
      }

      cache.set(key, {
        value,
        expiry: Date.now() + ttl,
      });
    },
  };
}

const lruCache = createLRUCacheProvider({
  ttl: 86_400_000 * 3, // 3 days
  itemLimit: 50,
});
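A sketch of how such a cache might wrap the retrieval call, keyed by file plus question (the helper names here are my own; to keep the snippet self-contained it is backed by a plain Map with no TTL or eviction):

```typescript
// Minimal cache shape matching the provider above.
interface CacheLike<T> {
  has(key: string): boolean;
  get(key: string): T | undefined;
  set(key: string, value: T): void;
}

// Plain-Map backing so this snippet runs standalone (no TTL/eviction).
function mapCache<T>(): CacheLike<T> {
  const m = new Map<string, T>();
  return {
    has: (k) => m.has(k),
    get: (k) => m.get(k),
    set: (k, v) => { m.set(k, v); },
  };
}

// Hypothetical wrapper: check the cache before calling OpenAI.
async function answerWithCache(
  cache: CacheLike<string>,
  fileUrl: string,
  question: string,
  retrieve: (fileUrl: string, question: string) => Promise<string>,
): Promise<string> {
  const key = `${fileUrl}::${question}`; // one entry per file + question pair
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const answer = await retrieve(fileUrl, question);
  cache.set(key, answer);
  return answer;
}
```

Repeating the same question for the same file then hits the cache instead of spending another OpenAI call.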

Frontend

The frontend is an HTML web page that renders a form where the user can type a question and click the "Get answer!" button. Once we get the answer from OpenAI based on the PDF document, we render it on the page along with a button to copy the answer to the clipboard. This part is built with React and Vite.

Components:

  • ProfileList: displays a list of candidate profiles to select from. The list is fetched on page load, and the first record is used as the default selection.
  • PDFViewer: renders the PDF file for the selected profile.
  • QuestionForm: renders a form for the user to ask questions about the selected profile.
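A sketch of the request helper such a form might call (the function name and the injectable fetcher are my own, added so the logic is testable; the real component may simply call fetch directly):

```typescript
// Minimal shape of the response object we need from fetch.
type JsonResponse = {
  ok: boolean;
  json(): Promise<{ answer?: string; error?: string }>;
};
type Init = { method: string; headers: Record<string, string>; body: string };
type Fetcher = (url: string, init: Init) => Promise<JsonResponse>;

// Hypothetical helper used by QuestionForm: POST the question and
// unwrap either the answer or an error message.
async function askQuestion(
  payload: { fileUrl: string; question: string },
  fetcher: Fetcher,
): Promise<string> {
  const res = await fetcher("/api/retrieve-answer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const data = await res.json();
  if (!res.ok || data.answer === undefined) {
    throw new Error(data.error ?? "request failed");
  }
  return data.answer;
}
```

The component would await this in its submit handler, show a loading state meanwhile, and render the returned string next to the copy-to-clipboard button.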

And the UI looks something like this:
(UI screenshot omitted)

Full source code can be found at https://github.com/trannguyenhung011086/resume-analyzer

There are still many improvements to make for this project like:

  • Add an endpoint to allow uploading PDF files
  • Add a controller to store PDF files in Postgres
  • Add methods to store questions and answers in Postgres
  • Use Redis to cache answers from OpenAI
  • Add a rate limiter to avoid spamming calls to the endpoint
  • Use server-sent events or WebSockets to stream the answer to the frontend
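On the rate-limiter idea, here is a minimal fixed-window sketch (entirely hypothetical; in practice a library like express-rate-limit, or a Redis-backed limiter for multiple instances, is a better fit):

```typescript
// Fixed-window rate limiter: allow at most `limit` calls per client
// within each `windowMs` window. The clock is injectable for testing.
function createRateLimiter(
  limit: number,
  windowMs: number,
  now: () => number = Date.now,
) {
  const hits = new Map<string, { count: number; windowStart: number }>();
  return (clientId: string): boolean => {
    const t = now();
    const entry = hits.get(clientId);
    if (!entry || t - entry.windowStart >= windowMs) {
      hits.set(clientId, { count: 1, windowStart: t });
      return true; // new window: allowed
    }
    if (entry.count >= limit) {
      return false; // over the limit: rejected
    }
    entry.count++;
    return true;
  };
}
```

In Express this would sit in a middleware keyed on something like `req.ip`, responding with HTTP 429 whenever the limiter returns false.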

Overall, this was a fun way to scratch the surface of what OpenAI can do, even though I haven't had a real project to use it in yet. Embracing it will definitely open new opportunities in the future.
