Teddy ASSIH
MedInsight: Your AI rare disease research assistant.

This is a submission for the Open Source AI Challenge with pgai and Ollama

What I Built

MedInsight is an AI-powered knowledge management tool tailored for med students, doctors and researchers studying rare diseases. This application consolidates research articles into one highly searchable database allowing medical professionals to quickly retrieve relevant information about rare diseases. At the moment, the data is about Acute Lymphoblastic Leukemia.

Demo

Live link: MedInsight

Repo: MedInsight Repo

DISCLAIMER: The requests are very slow (15-20 seconds) because all the RAG logic runs in the database and the Timescale server doesn't have enough resources, so my apologies 🙏.

The home page:

[screenshot: home page]

First question:

[screenshot: first question]

Second question:

[screenshot: second question]

Tools Used

The architecture:

[architecture diagram]

  • Timescale:

I used a pretty simple schema that works well for storing, searching, and embedding research papers in a structured way for retrieval-augmented generation (RAG) and similarity search.

CREATE TABLE research_papers(
    id BIGINT PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    title TEXT NOT NULL,
    content TEXT NOT NULL,
    url TEXT NOT NULL
);
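For illustration, loading a paper is a plain INSERT (the values here are hypothetical); the vectorizer defined below then picks up new rows automatically:

INSERT INTO research_papers (title, content, url)
VALUES (
    'Minimal residual disease in ALL',               -- hypothetical title
    'Full text of the research paper goes here...',  -- hypothetical content
    'https://example.com/paper'                      -- hypothetical URL
);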
  • Automatically creating and syncing LLM embeddings for the research papers:

The pgvector and pgvectorscale extensions allowed me to store vector embeddings in my database. The pgai Vectorizer builds on top of these extensions to automatically create and synchronize embeddings for any text data. I also made use of the StreamingDiskANN index from pgvectorscale, which Timescale Cloud creates automatically once 100,000 rows of vector data are present, for a blazing-fast search experience⚡️.
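For context, here is a minimal sketch of the extension setup on a fresh database (on Timescale Cloud these may already be enabled, and the index statement is only needed if you don't want to wait for the automatic threshold; the store table name follows pgai's default <table>_embedding_store convention, so it's an assumption here):

CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;
CREATE EXTENSION IF NOT EXISTS ai CASCADE;

-- Optional: build the StreamingDiskANN index by hand instead of waiting
-- for the 100,000-row threshold (assumed default store table name).
CREATE INDEX ON research_papers_embedding_store USING diskann (embedding);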

With one line of code, I defined a vectorizer that creates embeddings for the research papers in a table:

SELECT ai.create_vectorizer( 
    'public.research_papers'::regclass,
    embedding => ai.embedding_openai('text-embedding-3-small', 1536, api_key_name=>'OPENAI_API_KEY'),
    chunking => ai.chunking_recursive_character_text_splitter('content'),
    formatting => ai.formatting_python_template('title: $title url: $url content: $chunk')
);
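Once the vectorizer has run, pgai exposes the chunks and their embeddings through an auto-created view (research_papers_embedding by default). A similarity search then looks roughly like this (the question is just an example):

SELECT title, chunk
FROM research_papers_embedding
ORDER BY embedding <=> ai.openai_embed('text-embedding-3-small', 'What causes acute lymphoblastic leukemia?')
LIMIT 5;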
  • The RAG:

To implement the RAG, pgai's LLM functions come in handy. I used them to build the RAG system directly in my database by creating a function and calling it in my queries: OpenAI embeds the search query, and llama3 on a self-hosted Ollama instance on Koyeb generates a summarized answer. A sketch of the function is shown below.
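Here is a minimal sketch of what that function looks like, wrapping the similarity search shown earlier (the function name, the Ollama host URL, and the prompt template are assumptions, not the exact production code):

CREATE OR REPLACE FUNCTION generate_rag_response(query_text TEXT)
RETURNS TEXT AS $$
DECLARE
    context_chunks TEXT;
BEGIN
    -- Embed the question and gather the most relevant paper chunks.
    SELECT string_agg(title || ': ' || chunk, E'\n')
    INTO context_chunks
    FROM (
        SELECT title, chunk
        FROM research_papers_embedding
        ORDER BY embedding <=> ai.openai_embed('text-embedding-3-small', query_text)
        LIMIT 3
    ) AS relevant;

    -- Ask llama3 on the self-hosted Ollama instance to summarize an answer.
    RETURN (
        SELECT ai.ollama_generate(
            'llama3',
            query_text || E'\nAnswer using this context:\n' || context_chunks,
            host => 'http://my-ollama.koyeb.app:11434'  -- hypothetical URL
        )->>'response'
    );
END;
$$ LANGUAGE plpgsql;

Calling it from a query is then a one-liner:

SELECT generate_rag_response('What are the common symptoms of acute lymphoblastic leukemia?');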

Final Thoughts

Overall, I was fascinated by how easy it was to set up the vector database and the automatic embeddings. It definitely saves a lot of time when implementing AI-based apps. Even though I hit some small bugs along the way, I managed to ship the app within a day, so the Timescale experience is pretty straightforward.

Improvements

Of course, my app is not perfect and there is a lot to improve:

  • Query optimization to improve the speed of the requests

  • A chat history so conversations aren't lost

Any other improvement ideas are welcome in the comments 🙏.

Prize categories:

Open-source Models from Ollama, Vectorizer Vibe, All the Extensions

Image from vecteezy.com
