
Sunil Kumar Dash for Composio


13 Hidden Open-source Libraries to Become an AI Wizard 🧙‍♂️🪄

I have been building AI applications for the past 4 years and contributing to major AI tooling platforms for a while now.

Over this period, I have used many tools and frameworks for building:

  • AI agents that actually work in the real world.
  • Tools for AI agents.
  • End-to-end RAG applications.

Here is a curated list of open-source tools and frameworks that will help you craft robust and reliable AI applications. 🔥


Feel free to explore their GitHub repositories, contribute to your favourites, and support them by starring the repositories.


1. Composio 👑 - Build Reliable Agents 10x Faster

I have tried building many agents, and honestly, while it is easy to create them, it is an entirely different ball game to get them right.

Building AI agents that actually work requires robust toolsets. This is where Composio comes into the picture.

Composio lets you augment your AI agents with robust tools and integrations to accomplish AI workflows.

It provides native SDKs for Python and JavaScript.

Python

Get started with the following pip command.

pip install composio-core

Add a GitHub integration.

composio add github

Composio handles user authentication and authorization on your behalf.
Here is how you can use the GitHub integration to star a repository.

from openai import OpenAI
from composio_openai import ComposioToolSet, Action

openai_client = OpenAI(api_key="******OPENAI_API_KEY******")

# Initialise the Composio toolset
composio_toolset = ComposioToolSet(api_key="******COMPOSIO_API_KEY******")

# Get the pre-configured GitHub tool
actions = composio_toolset.get_actions(
    actions=[Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER]
)

my_task = "Star a repo ComposioHQ/composio on GitHub"

# Create a chat completion request to decide on the action
response = openai_client.chat.completions.create(
    model="gpt-4-turbo",
    tools=actions,  # Pass the actions we fetched earlier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": my_task}
    ]
)


Run this Python script to execute the given instruction using the agent.

JavaScript

You can install it using npm, yarn, or pnpm.

npm install composio-core

Define a method to let the user connect their GitHub account.

import { OpenAI } from "openai";
import { OpenAIToolSet } from "composio-core";

const toolset = new OpenAIToolSet({
  apiKey: process.env.COMPOSIO_API_KEY,
});

async function setupUserConnectionIfNotExists(entityId) {
  const entity = await toolset.client.getEntity(entityId);
  const connection = await entity.getConnection('github');

  if (!connection) {
      // If this entity/user hasn't connected the account yet,
      // initiate a new connection and wait for it to become active
      const newConnection = await entity.initiateConnection('github');
      console.log("Log in via: ", newConnection.redirectUrl);
      return newConnection.waitUntilActive(60);
  }

  return connection;
}


Add the required tools to the OpenAI SDK and pass the entity name on to the executeAgent function.

async function executeAgent(entityName) {
  const entity = await toolset.client.getEntity(entityName)
  await setupUserConnectionIfNotExists(entity.id);

  const tools = await toolset.get_actions({ actions: ["github_activity_star_repo_for_authenticated_user"] }, entity.id);
  const instruction = "Star a repo ComposioHQ/composio on GitHub"

  const client = new OpenAI({ apiKey: process.env.OPEN_AI_API_KEY })
  const response = await client.chat.completions.create({
      model: "gpt-4-turbo",
      messages: [{
          role: "user",
          content: instruction,
      }],
      tools: tools,
      tool_choice: "auto",
  })

  console.log(response.choices[0].message.tool_calls);
  await toolset.handle_tool_call(response, entity.id);
}

executeAgent("joey")

Execute the code and let the agent do the work for you.

Composio works with popular frameworks like LangChain, LlamaIndex, CrewAI, etc.

For more information, visit the official docs; for more complex examples, see the examples section of the repository.

Star the Composio repository ⭐


2. Julep - Framework for building Stateful Agents

Developing AI applications, especially those requiring long-term memory, presents significant challenges.

Julep addresses this problem. It is an open-source framework for building production-ready stateful AI agents.

It provides a built-in state management system for efficient context storage and retrieval.

Context storage helps maintain conversation continuity, ensuring that interactions with the AI remain coherent and contextually relevant over time.

Get started with the following pip command.

pip install julep

Here is how it works.

from julep import Client
from pprint import pprint
import textwrap
import os

base_url = os.environ.get("JULEP_API_URL")
api_key = os.environ.get("JULEP_API_KEY")

client = Client(api_key=api_key, base_url=base_url)

# create an agent
agent = client.agents.create(
    name="Jessica",
    model="gpt-4",
    tools=[]    # Tools defined here
)

# create a user
user = client.users.create(
    name="Anon",
    about="Average nerdy tech bro/girl spending 8 hours a day on a laptop",
)

# create a session
situation_prompt = """You are Jessica. You're a stuck-up Cali teenager. 
You basically complain about everything. You live in Bel-Air, Los Angeles and 
drag yourself to Curtis High School when necessary.
"""
session = client.sessions.create(
    user_id=user.id, agent_id=agent.id, situation=situation_prompt
)
#start a conversation

user_msg = "hey. what do u think of Starbucks?"
response = client.sessions.chat(
    session_id=session.id,
    messages=[
        {
            "role": "user",
            "content": user_msg,
            "name": "Anon",
        }
    ],
    recall=True,
    remember=True,
)

print("\n".join(textwrap.wrap(response.response[0][0].content, width=100)))

Julep also supports JavaScript. Check out the documentation for more.

Star the Julep repository ⭐


3. E2B - Code Interpreting for AI Apps

If I am building an AI app with code execution capabilities, such as an AI tutor or AI data analyst, E2B's Code Interpreter will be my go-to tool.

E2B Sandbox is a secure cloud environment for AI agents and apps.

It allows AI to run safely for long periods, using the same tools as humans, such as GitHub repositories and cloud browsers.

They offer native Code Interpreter SDKs for Python and JavaScript/TypeScript.

The Code Interpreter SDK allows you to run AI-generated code in a secure, small VM (the E2B sandbox). Inside the sandbox is a Jupyter server you can control from the SDK.

Get started with E2B with the following command.

npm i @e2b/code-interpreter

Execute a program.

import { CodeInterpreter } from '@e2b/code-interpreter'

const sandbox = await CodeInterpreter.create()
await sandbox.notebook.execCell('x = 1')

const execution = await sandbox.notebook.execCell('x+=1; x')
console.log(execution.text)  // outputs 2

await sandbox.close()

For more on how to work with E2B, visit their official documentation.

Star the E2B repository ⭐


4. Camel-ai - Build Communicative AI Systems

Scalable multi-agent collaboration can unlock a lot of potential in AI applications.

Camel is well-positioned for this. It is an open-source framework offering a scalable approach to studying multi-agent systems' cooperative behaviours and capabilities.

If you intend to build a multi-agent system, Camel can be one of the best choices available in the open-source scene.

Get started by installing with pip.

pip install camel-ai

Here is how to use Camel.

from camel.messages import BaseMessage as bm
from camel.agents import ChatAgent

sys_msg = bm.make_assistant_message(
    role_name='stone',
    content='you are a curious stone wondering about the universe.')

#define agent 
agent = ChatAgent(
    system_message=sys_msg,
    message_window_size=10,    # [Optional] the length of chat memory
    )

# Define a user message
usr_msg = bm.make_user_message(
    role_name='prof. Claude Shannon',
    content='what is information in your mind?')

# Sending the message to the agent
response = agent.step(usr_msg)

# Check the response (just for illustrative purposes)
print(response.msgs[0].content)

Voila, you have your first AI agent.

For more information, refer to their official documentation.

Star the camel-ai repository ⭐


5. CopilotKit - Build AI Copilots for React Apps

Look no further if you want to include AI capabilities in your existing React application. CopilotKit lets you use GPT models to automate interactions with your application's front and back end.

It is a ready-made Copilot that you can integrate with your application or any code you can access (OSS).

It offers React components like text areas, popups, sidebars, and chatbots to augment any application with AI capabilities.

Get started with CopilotKit using the following command.

npm i @copilotkit/react-core @copilotkit/react-ui

A CopilotKit provider must wrap all components interacting with CopilotKit. It is recommended to start with CopilotSidebar (you can swap to a different UI provider later).

"use client";
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

export default function RootLayout({children}) {
  return (
    <CopilotKit publicApiKey="<your public API key, or self-host>">
      <CopilotSidebar>
        {children}
      </CopilotSidebar>
    </CopilotKit>
  );
}


You can check their documentation for more information.

Star the CopilotKit repository ⭐


6. Aider - The AI Pair-programmer

Imagine having a pair-programmer who's always helpful and never annoying. Well, now you do!

Aider is an AI-powered pair programmer that can start a project, edit files, work with an existing Git repository, and more, all from the terminal.

It works with leading LLMs like GPT-4o, Claude 3.5 Sonnet, DeepSeek Coder, Llama 70B, etc.

You can get started quickly like this:

pip install aider-chat

# Change directory into a git repo
cd /to/your/git/repo

# Work with Claude 3.5 Sonnet on your repo
export ANTHROPIC_API_KEY=your-key-goes-here
aider

# Work with GPT-4o on your repo
export OPENAI_API_KEY=your-key-goes-here
aider


For more details, see the installation instructions and other documentation.

Star the Aider repository ⭐


7. Haystack - Build Composable RAG Pipelines

There are plenty of frameworks for building AI pipelines, but if I want to integrate production-ready end-to-end search pipelines into my application, Haystack is my go-to.

Whether it's RAG, Q&A, or semantic searches, Haystack's highly composable pipelines make development, maintenance, and deployment a breeze.

Their clean and modular approach is what sets them apart.
Haystack lets you effortlessly integrate rankers, vector stores, and parsers into new or existing pipelines, making it easy to turn your prototypes into production-ready solutions.

Haystack is a Python-only framework; you can install it using pip.

pip install haystack-ai

Now, build your first RAG Pipeline with Haystack components.

import os

from haystack import Pipeline, PredefinedPipeline
import urllib.request

os.environ["OPENAI_API_KEY"] = "Your OpenAI API Key"
urllib.request.urlretrieve("https://www.gutenberg.org/cache/epub/7785/pg7785.txt", "davinci.txt")  

indexing_pipeline =  Pipeline.from_template(PredefinedPipeline.INDEXING)
indexing_pipeline.run(data={"sources": ["davinci.txt"]})

rag_pipeline =  Pipeline.from_template(PredefinedPipeline.RAG)

query = "How old was he when he died?"
result = rag_pipeline.run(data={"prompt_builder": {"query":query}, "text_embedder": {"text": query}})
print(result["llm"]["replies"][0])


For more tutorials and concepts, check out their documentation.

Star the Haystack repository ⭐


8. Pgvectorscale - Fastest Vector Database

Modern RAG applications are incomplete without vector databases. These store documents (texts, images) as embeddings, enabling users to search for semantically similar documents.
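
Conceptually, that similarity search is just nearest-neighbour ranking over vectors. Here is a tiny, self-contained sketch in plain Python with made-up toy embeddings (real systems use model-generated vectors and an index, not a linear scan):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model outputs
documents = {
    "a post about cats": [0.9, 0.1, 0.0],
    "a post about dogs": [0.8, 0.3, 0.1],
    "a post about tax law": [0.0, 0.1, 0.9],
}

query_embedding = [0.85, 0.2, 0.05]  # pretend this embeds the query "pets"

# Rank documents by similarity to the query, highest first
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(documents[d], query_embedding),
    reverse=True,
)
print(ranked)  # the cat/dog posts rank above the tax-law post
```

A vector database does exactly this ranking, but over millions of high-dimensional vectors with an approximate index instead of a brute-force scan.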

Pgvectorscale builds on pgvector, an open-source vector search extension for PostgreSQL. It integrates seamlessly with existing Postgres databases.

If you are building an application with vector stores, this is a no-brainer: Pgvectorscale has outperformed Pinecone's storage-optimized index (s1) while costing 75% less.

You can install it from the source, use a package manager like Yum, Homebrew, apt, etc., or use a Docker container.

To get started with it, compile and install.

# install prerequisites
## rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
## pgrx
cargo install --locked cargo-pgrx
cargo pgrx init --pg16 pg_config

#download, build and install pgvectorscale
cd /tmp
git clone --branch <version> https://github.com/timescale/pgvectorscale
cd pgvectorscale/pgvectorscale
cargo pgrx install --release

Connect to your database:

psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>"

Create the pgvectorscale extension:

CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;

The CASCADE option automatically installs the pgvector dependency.

Create a table with an embedding column. For example:

CREATE TABLE IF NOT EXISTS document_embedding (
    id BIGINT PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    metadata JSONB,
    contents TEXT,
    embedding VECTOR(1536)
);
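
With the table in place, the main thing Pgvectorscale adds is its StreamingDiskANN index type on the embedding column. The statement below follows the project's README at the time of writing; check the repository for the current index options.

```sql
CREATE INDEX document_embedding_idx ON document_embedding
USING diskann (embedding);
```

Similarity queries using pgvector's distance operators (for example, ORDER BY embedding <=> query_vector) can then be served by this index.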

For more information on how to use this, check out the repository.

Star the Pgvectorscale repository ⭐


9. GPTCache - Semantic Caching for AI Apps

LLMs are expensive.

If you are building an app that involves extended conversations with chat models and you do not want to max out your credit card, you need caching.

However, traditional exact-match caching is of no use here, as two natural-language queries are rarely identical strings. This is where GPTCache comes into the picture.

It is a semantic caching tool from Zilliz, the parent organization of the Milvus vector store.

It lets you store conversations in your preferred vector stores.
Before sending a query to the LLM, it searches the vector store; if there is a hit, it fetches it. Otherwise, it routes the request to the model.
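
That lookup-then-fallback flow can be sketched in plain Python. This is a conceptual toy with made-up embeddings and a stubbed model call, not GPTCache's actual API; see the official docs for the real integration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class SemanticCache:
    """Toy semantic cache: store (embedding, answer) pairs and reuse an
    answer when a new query is similar enough to a cached one."""
    def __init__(self, threshold=0.95):
        self.entries = []  # list of (embedding, answer) pairs
        self.threshold = threshold

    def lookup(self, embedding):
        for cached_embedding, answer in self.entries:
            if cosine_similarity(embedding, cached_embedding) >= self.threshold:
                return answer  # cache hit: skip the LLM call
        return None

    def store(self, embedding, answer):
        self.entries.append((embedding, answer))

def answer_query(embedding, cache, call_llm):
    cached = cache.lookup(embedding)
    if cached is not None:
        return cached
    answer = call_llm()          # cache miss: route the request to the model
    cache.store(embedding, answer)
    return answer

cache = SemanticCache()
# The first call misses and hits the (stubbed) LLM;
# a near-identical query afterwards is served from the cache
first = answer_query([1.0, 0.0], cache, call_llm=lambda: "LLMs are large language models.")
second = answer_query([0.99, 0.01], cache, call_llm=lambda: "expensive second call")
print(second)  # reuses the cached answer
```

GPTCache does the same thing, but with real embedding models and your preferred vector store handling the similarity lookup.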

For more information, visit the official documentation page.

Star the GPTCache repository ⭐


10. Mem0 (EmbedChain) - Build Personalized LLM Apps

Mem0 provides a smart, self-improving memory layer for Large Language Models.

It lets you add persistent memory for users, agents, and sessions.
If you are building a chatbot or Q&A system on custom data, consider Mem0.

Get started with Mem0 using pip.

pip install mem0ai

Here is how to use Mem0 to add a memory layer to Large Language Models.

from mem0 import Memory

# Initialize Mem0
m = Memory()

# Store a memory from any unstructured text
result = m.add("I am working on improving my tennis skills. Suggest some online courses.", user_id="alice", metadata={"category": "hobbies"})
print(result)
# Created memory: Improving her tennis skills. Looking for online suggestions.

# Retrieve memories
all_memories = m.get_all()
print(all_memories)

# Search memories
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")
print(related_memories)

# Update a memory
result = m.update(memory_id="m1", data="Likes to play tennis on weekends")
print(result)

# Get memory history
history = m.history(memory_id="m1")
print(history)

Refer to the official documentation for more.

Star the Mem0 (Embedchain) repository ⭐


11. FastEmbed - Embed Documents Faster

Speed of execution is paramount in software development, and it is even more important when building an AI application.

Usually, embedding generation can take a long time, slowing down the entire pipeline. However, this should not be the case.

FastEmbed from Qdrant is a fast, lightweight Python library built for embedding generation.

It uses ONNX Runtime instead of PyTorch, making it faster. It also supports most of the state-of-the-art open-source embedding models.

To get started with FastEmbed, install it using pip.

pip install fastembed

# or with GPU support

pip install fastembed-gpu

Here is how you can create embeddings of documents.

from fastembed import TextEmbedding
from typing import List

# Example list of documents
# Example list of documents
documents: List[str] = [
    "This is built to be faster and lighter than other embedding libraries, e.g. Transformers, Sentence-Transformers, etc.",
    "FastEmbed is supported by and maintained by Qdrant."
]

# This will trigger the model download and initialization
embedding_model = TextEmbedding()
print("The model BAAI/bge-small-en-v1.5 is ready to use.")

# embed() returns a generator; convert it to a list (or a NumPy array) to materialize
embeddings_list = list(embedding_model.embed(documents))
len(embeddings_list[0])  # Vector of 384 dimensions

Check out their repository for more information.

Star the FastEmbed repository ⭐


12. Instructor - Structured Data Extraction from LLMs

If you have played with LLM outputs, you know it can be challenging to validate structured responses.

Instructor is an open-source tool that streamlines the validation, retry, and streaming of LLM outputs.

It uses Pydantic for Python and Zod for JS/TS for data validation, and it supports various model providers beyond OpenAI.

Get started with Instructor using the following command.

npm i @instructor-ai/instructor zod openai

Now, here is how you can extract structured data from LLM responses.


import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai"
import { z } from "zod"

const oai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? undefined,
  organization: process.env.OPENAI_ORG_ID ?? undefined
})

const client = Instructor({
  client: oai,
  mode: "TOOLS"
})

const UserSchema = z.object({
  // Description will be used in the prompt
  age: z.number().describe("The age of the user"), 
  name: z.string()
})

// User will be of type z.infer<typeof UserSchema>
const user = await client.chat.completions.create({
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  model: "gpt-3.5-turbo",
  response_model: { 
    schema: UserSchema, 
    name: "User"
  }
})

console.log(user)
// { age: 30, name: "Jason Liu" }
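
On the Python side, Instructor leans on Pydantic models for the same job. The snippet below shows just the validation half with plain Pydantic (no LLM call), assuming pydantic v2 is installed; the real Instructor client patches your OpenAI client so that completions come back as validated model instances, retrying on validation failures.

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# Pretend this JSON came back from an LLM tool call
raw = '{"name": "Jason Liu", "age": 30}'
user = User.model_validate_json(raw)
print(user)  # name='Jason Liu' age=30

# Malformed output fails validation; this is what triggers Instructor's retry logic
try:
    User.model_validate_json('{"name": "Jason Liu", "age": "thirty"}')
except ValidationError as e:
    print("validation failed with", e.error_count(), "error(s)")
```

The key idea is that the schema drives both the prompt (field descriptions) and the validation of the response.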

For more information, visit the official documentation page.

Star the Instructor repository ⭐


13. LiteLLM - Drop-in Replacement for LLMs in OpenAI Format

Let's be honest: we have all screamed at some point because a new model provider did not follow the OpenAI SDK format for text, image, or embedding generation.

However, with LiteLLM, using the same implementation format, you can use any model provider (Claude, Gemini, Groq, Mistral, Azure AI, Bedrock, etc.) as a drop-in replacement for OpenAI models.

They also support load-balancing, fallbacks, and spend tracking across 100+ LLMs.
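
The fallback idea itself is simple to picture. The sketch below is a conceptual toy in plain Python with stubbed provider calls, not LiteLLM's API; in practice, LiteLLM's Router handles this (plus load balancing and spend tracking) for you.

```python
def complete_with_fallbacks(messages, providers):
    """Toy fallback chain: try each provider callable in order and
    return the first successful response."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(messages)
        except Exception as err:  # a real implementation would be more selective
            last_error = err
    raise RuntimeError("all providers failed") from last_error

# Stub providers standing in for real model calls
def flaky_primary(messages):
    raise TimeoutError("primary provider timed out")

def stable_backup(messages):
    return "Hello from the backup model!"

provider_used, reply = complete_with_fallbacks(
    [{"role": "user", "content": "Hello"}],
    providers=[("primary", flaky_primary), ("backup", stable_backup)],
)
print(provider_used, "->", reply)
```

Because every provider behind LiteLLM speaks the same completion interface, swapping or chaining models becomes a configuration detail rather than a code change.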

Install LiteLLM using pip.

pip install litellm

Here is how you can use the Claude-2 model as a drop-in replacement for GPT models.

from litellm import completion
import os

# LiteLLM with OpenAI Models

os.environ["OPENAI_API_KEY"] = "your-API-key"

response = completion(
  model="gpt-3.5-turbo",
  messages=[{ "content": "Hello, how are you?","role": "user"}]
)

# LiteLLM with Claude Models
os.environ["ANTHROPIC_API_KEY"] = "your-API-key"

response = completion(
  model="claude-2",
  messages=[{ "content": "Hello, how are you?","role": "user"}]
)

For more, refer to their official documentation.

Star the LiteLLM repository ⭐


Do you use, or have you built, some other cool tool or framework?

Let me know about them in the comments :)

Top comments (41)

Marco Lamina

Amazing list! Had never heard of E2B, will check it out.

I've been working on PR Pilot, a CLI / API / lib that interacts with repositories, chat platforms and ticketing systems to help devs avoid context switching.

Sunil Kumar Dash

It looks fantastic, and I will check it for sure.

Piero Savastano

In my agency, we have lately been using Cheshire Cat AI a lot; it has a WordPress-like plugin system and is already Dockerized.

For the rest, we are planning to ditch LangChain in favour of LlamaIndex.

Sunil Kumar Dash

Sounds interesting. Is there any specific reason for favouring LlamaIndex over LangChain?

Piero Savastano

LangChain has become a little messy: too many nested classes, and the documentation is really hard to search.

Sunil Kumar Dash

That makes sense. It's getting messier; too many abstractions.

Shrijal Acharya

This cover image is the best one I have seen on Dev so far! 😻 BTW, what did you use for this?

Sunil Kumar Dash

Thanks, Shrijal. It was done in Luma AI by an awesome designer.

David

Good list, composio is pretty cool also. I gave you a star!

Sunil Kumar Dash

thanks, David. It sure is.

ijindal1

Thanks for mentioning Julep.

Julep is actually more than a framework - it's a managed backend. Kind of like Firebase or Supabase for AI. It ships with a few main pieces -
i. memory (user management)
ii. knowledge (built-in RAG, and context management)
iii. tools (integration with Composio & others)
iv. tasks (Coming soon)

Really excited to see more feedback on what everyone thinks :)

Sunil Kumar Dash

Thanks for mentioning the additional details, @ijindal1.

Jens

Once again, such a special list of possibilities that I have to ask myself:
"Where do I begin this journey?" 🙄😁
Thank you for sharing this post! 💖

Jens

Maybe I've found my first choice: 🤔
Retrieval-Augmented Generation with "7. Haystack" and the Gutenberg text looks very interesting! 😎

Sunil Kumar Dash

Haystack is pretty good, check their blogs and examples to get started.

Nevo David

Great list!

Sunil Kumar Dash

Thanks, Nevo.

uliyahoo

What an awesome list, thanks for mentioning CopilotKit!

Sunil Kumar Dash

Thanks, @uliyahoo; CopilotKit is a great tool.

Martin Baun

I love the piece!
I am curious about setting up agentic workflow with instructor.

Have you set up agentic workflows?

Sunil Kumar Dash

I think Instructor uses the OpenAI SDK, so it should be possible. You can see a bunch of agentic workflow examples with OpenAI here: github.com/ComposioHQ/composio/tre.... By the way, is there any specific use case in your mind?

Martin Baun

Not necessarily, had just been on my mind of late :)

Zane

dev.to/zand/discover-the-magic-of-... It would be better to combine it with SearXNG.

Michelle Duke

Great content.

Sunil Kumar Dash

Thanks, Michelle.

Abhishek

Please add Fabric

Sunil Kumar Dash

It looks interesting, for sure.

Jayant Bhawal

Nice GIFs!

mohan garadi

Nice list! Genius tools! Thanks

Sai Ram

Is there anyone out there who can teach prompting strategies for noobs like me

Sunil Kumar Dash

Just start talking to LLMs like a human. 😄
