Ayush Thakur for Composio


9 cutting-edge open-source tools to build next-gen AI apps 🔮💡

I have been building and working with AI applications for years. While building any AI application, I use multiple third-party tools and libraries to simplify the development process.

In this blog, I have curated 9 super-handy but lesser-known tools that I use to ease my work.

Feel free to explore and try out these tools for your projects.



1. Composio 👑 - AI integrations and tooling platform


Imagine having the power to build your own AI agents and seamlessly integrate them into tools like Discord, Trello, Jira, or Slack. That's what Composio is!

It's an open-source platform that makes adding AI functionality to your applications easy and accessible. Whether you're customizing an AI agent or plugging it into your favourite tools, Composio has you covered.

Some use cases of Composio:

  • Build coding agents that optimize code in a GitHub repository
  • Build AI bots for your Slack channels and Discord servers that autonomously interact with users and respond to their queries
  • Build AI agents that summarize reports or documents

Get started with Composio.

pip install composio-core

Add a GitHub integration.

composio add github

Composio manages user authentication and authorization for you.
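Running composio add github walks you through connecting your GitHub account. If your CLI session itself isn't authenticated yet, log in first (a minimal sketch, assuming the composio CLI installed with composio-core):

composio login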

Here's an example that automatically stars a GitHub repository using Composio's GitHub integration:

import os

from openai import OpenAI
from composio_openai import ComposioToolSet, Action

# Read the API keys from environment variables
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Initialise the Composio tool set
composio_toolset = ComposioToolSet(api_key=os.environ["COMPOSIO_API_KEY"])

# Get the pre-configured GitHub action for starring a repository
actions = composio_toolset.get_actions(
    actions=[Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER]
)

my_task = "Star a repo ComposioHQ/composio on GitHub"

# Create a chat completion request to decide on the action
response = openai_client.chat.completions.create(
    model="gpt-4-turbo",
    tools=actions,  # Passing the actions we fetched earlier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": my_task},
    ],
)

# Execute the tool call the model chose
composio_toolset.handle_tool_calls(response=response)

Use this Python code to create an AI agent that automatically stars a GitHub repository.

Check out the Composio docs to learn more. Explore more advanced examples built using Composio.

Star the Composio Repo ⭐


2. Letta - Build stateful LLM Applications


Letta is your go-to platform for building smart, stateful applications powered by large language models (LLMs). It's like giving AI systems memory so they can deliver long-term, personalized, and context-aware results.

Plus, it's open-source and model-agnostic, which means you can integrate any LLM and enhance it with custom tools, long-term memory, and external data sources. It's like giving your AI a brain that remembers!

You can build:

  • Personalized chatbots that require long-term memory
  • AI agents connected to custom tools
  • AI agents that automate workflows, like managing emails

Get started with Letta.

Install the Letta library.

pip install letta

You can use the Letta CLI to create an agent.

letta run

This launches an interactive flow that creates an AI agent.

Run the letta server to access this newly created AI agent.

letta server

Let's send a message to this agent.

from letta import create_client 

# connect to the server
client = create_client(base_url="http://localhost:8283")

# send a message to the agent
response = client.send_message(
    agent_id="agent-09950586-6313-421c-bf7e-2bba03c30826", 
    role="user", 
    message="hey!! how are you?"
)
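The returned response object contains the agent's reply; printing it is a quick way to inspect what came back:

print(response)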

With the Letta server running, you can also use the Agent Development Environment (ADE) to create and manage agents.

Check out the Letta docs to learn more.

Star the Letta Repo ⭐


3. Rasa - Build Conversational AI Experiences


Rasa is an open-source platform that makes building advanced conversational AI applications a breeze. With Rasa, you can create intelligent chatbots and virtual assistants that truly understand natural language.

It gives you all the tools and infrastructure you need to develop, train, and deploy smart, context-aware AI assistants that feel genuinely human.

It's super easy to get started with Rasa.

Use this repo to create a GitHub Codespace.

Add your API keys to a .env file.

RASA_PRO_LICENSE='your_rasa_pro_license_key_here'
OPENAI_API_KEY='your_openai_api_key_here'

Load these environment variables from your .env file.

source .env

Activate your Python virtual environment.

source .venv/bin/activate

The codespace is ready; you can now use Rasa. To build your first CALM assistant, check out the tutorial.
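Prefer to experiment locally instead of in a codespace? Rasa's standard CLI flow is short (a minimal sketch, assuming rasa is installed in your active environment):

rasa init    # scaffold a starter project with sample training data
rasa train   # train the assistant on that data
rasa shell   # chat with the trained assistant in your terminal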
Check out the Rasa docs to learn more.

Star the Rasa Repo ⭐


4. Taipy - Build AI Web applications


Meet Taipy, the open-source Python library designed to simplify development and keep things user-friendly. Taipy offers tools to build data-driven web applications, simplifying the development of workflows, dashboards, and other interactive interfaces.

It is primarily used to create end-to-end solutions that pair machine learning models with interactive user interfaces, making it easier to deploy AI applications.

Here's how you can start with Taipy.

Install the Taipy library.

pip install taipy

Now, create the Graphical User Interface (GUI) using Taipy.

Import the Taipy library along with other necessary libraries.

from taipy.gui import Gui
import taipy.gui.builder as tgb
from math import cos, exp

Create the utility functions.

value = 10

def compute_data(decay: int) -> list:
    # damped cosine curve; larger decay values flatten it faster
    return [cos(i / 6) * exp(-i * decay / 600) for i in range(100)]

def slider_moved(state):
    # recompute the chart data whenever the slider value changes
    state.data = compute_data(state.value)

Build the GUI using Taipy methods.

with tgb.Page() as page:
    tgb.text(value="# Taipy Getting Started", mode="md")
    tgb.text(value="Value: {value}")
    tgb.slider(value="{value}", on_change=slider_moved)
    tgb.chart(data="{data}")

data = compute_data(value)

if __name__ == "__main__":
    Gui(page=page).run(title="Dynamic chart")

Finally, save the file as main.py and run it with the following command.

taipy run main.py
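Since the script only calls Gui(page=page).run() under the __main__ guard, you can also launch it directly with Python:

python main.py

Either way, Taipy serves the app locally; open the printed URL in your browser to see the dynamic chart.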

Check out the Taipy docs to explore more, and visit the other Taipy tutorials.

Star the Taipy Repo ⭐


5. Flowise - Simplify the creation of AI-driven workflows


Meet FlowiseAI, the open-source platform that makes creating and deploying AI-driven workflows a whole lot simpler. It's built with data scientists and developers in mind, helping you manage machine learning models, automate processes, and integrate AI seamlessly into business applications. From designing to scaling workflows, FlowiseAI has you covered.

It allows users to easily automate and manage workflows that involve model training, deployment, and inference.

Get started with Flowise.

Clone the Flowise repository.

git clone https://github.com/FlowiseAI/Flowise.git

Navigate to the Flowise directory.

cd Flowise

Install the necessary dependencies.

pnpm install

Build the code.

pnpm build

Finally, run the app.

pnpm start
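If you'd rather skip building from source, Flowise is also published as an npm package (assuming Node.js is installed):

npm install -g flowise
npx flowise start

Once it's running, open http://localhost:3000 to use the visual workflow builder.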

To learn more about Flowise, visit their documentation.

Star the Flowise Repo ⭐


6. WandB - Track and manage your ML experiments


Say hello to Weights & Biases (WandB), your go-to tool for tracking and managing machine learning experiments. It makes monitoring your models, comparing versions, and analyzing performance super easy. With WandB, you can create reports, share results with your team, and organise your work in one place.

It's perfect for building better AI models faster and collaborating more effectively.

Here's how you can use WandB in your projects.

Install the WandB library.

pip install wandb

Log in to WandB.

import wandb

wandb.login()

You'll need to provide an API key to log in successfully. Generate an API key here.
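If you prefer authenticating from the terminal instead of inside Python, the CLI stores your key locally after a one-time login:

wandb login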

Initialize the WandB object in your Python script.

run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": 0.01,
        "epochs": 10,
    },
)

Here's the whole code:

# train.py
import wandb
import random  # for demo script

wandb.login()

epochs = 10
lr = 0.01

run = wandb.init(
    # Set the project where this run will be logged
    project="my-awesome-project",
    # Track hyperparameters and run metadata
    config={
        "learning_rate": lr,
        "epochs": epochs,
    },
)

offset = random.random() / 5
print(f"lr: {lr}")

# simulating a training run
for epoch in range(2, epochs):
    acc = 1 - 2**-epoch - random.random() / epoch - offset
    loss = 2**-epoch + random.random() / epoch + offset
    print(f"epoch={epoch}, accuracy={acc}, loss={loss}")
    wandb.log({"accuracy": acc, "loss": loss})

# run.log_code()

Now, you can navigate to the WandB dashboard and see the metrics.

Learn more about WandB at their documentation.

Star the WandB Repo ⭐


7. Ludwig - Build custom AI models


Meet Ludwig, the open-source deep learning tool built in Python to make machine learning accessible to everyone, even if you're not a coding expert. With its simple interface, you can build, train, and deploy models effortlessly, so you can focus on results without worrying about the complex details under the hood.

Get started with Ludwig.

Install the Ludwig library.

pip install ludwig

Import the necessary libraries, define the model configuration, and train on your dataset.

import pandas as pd
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {
            "name": "sepal_length_cm",
            "type": "number"
        },
        {
            "name": "sepal_width_cm",
            "type": "number"
        },
        {
            "name": "petal_length_cm",
            "type": "number"
        },
        {
            "name": "petal_width_cm",
            "type": "number"
        }
    ],
    "output_features": [
        {
            "name": "class",
            "type": "category"
        }
    ]
}
model = LudwigModel(config)
data = pd.read_csv("data.csv")  # a dataset with the columns declared above
train_stats, _, model_dir = model.train(data)

Load the trained model.

model = LudwigModel.load(model_dir)

Capture the predictions.

predictions = model.predict(data)

And your model is trained successfully.
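Ludwig also ships a CLI if you prefer declarative configs over Python. A sketch, assuming a config.yaml that mirrors the dict above and your dataset in data.csv:

ludwig train --config config.yaml --dataset data.csv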

Want to learn more? Check out the Ludwig docs.

Star the Ludwig Repo ⭐


8. Feast - Feature Store for Production ML


Meet Feast, the open-source feature store built specifically for machine learning applications. It acts as a platform for storing, managing, and serving features (data attributes) to your ML models. With Feast, you get efficient access to data, whether you're training or serving models.

It can be integrated seamlessly into ML workflows, allowing data scientists and engineers to access features in both training and production environments.

Get started with Feast.

pip install feast

Create a Feast repository for your project.

feast init my_project
cd my_project/feature_repo

Generate the training data your model will be trained on.

from datetime import datetime
import pandas as pd

from feast import FeatureStore

# Note: see https://docs.feast.dev/getting-started/concepts/feature-retrieval for 
# more details on how to retrieve for all entities in the offline store instead
entity_df = pd.DataFrame.from_dict(
    {
        # entity's join key -> entity values
        "driver_id": [1001, 1002, 1003],
        # "event_timestamp" (reserved key) -> timestamps
        "event_timestamp": [
            datetime(2021, 4, 12, 10, 59, 42),
            datetime(2021, 4, 12, 8, 12, 10),
            datetime(2021, 4, 12, 16, 40, 26),
        ],
        # (optional) label name -> label values. Feast does not process these
        "label_driver_reported_satisfaction": [1, 5, 3],
        # values we're using for an on-demand transformation
        "val_to_add": [1, 2, 3],
        "val_to_add_2": [10, 20, 30],
    }
)

store = FeatureStore(repo_path=".")

training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
        "transformed_conv_rate:conv_rate_plus_val1",
        "transformed_conv_rate:conv_rate_plus_val2",
    ],
).to_df()

print("----- Feature schema -----\n")
print(training_df.info())

print()
print("----- Example features -----\n")
print(training_df.head())

Ingest batch features into your online store.

CURRENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S")
# For mac
LAST_YEAR=$(date -u -v -1y +"%Y-%m-%dT%H:%M:%S")
# For Linux
# LAST_YEAR=$(date -u -d "last year" +"%Y-%m-%dT%H:%M:%S")

feast materialize-incremental $LAST_YEAR $CURRENT_TIME 

Fetch feature vectors for inference.

from pprint import pprint
from feast import FeatureStore

store = FeatureStore(repo_path=".")

feature_vector = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:acc_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[
        # {join_key: entity_value}
        {"driver_id": 1004},
        {"driver_id": 1005},
    ],
).to_dict()

pprint(feature_vector)

Use a feature service to fetch online features instead.

from feast import FeatureService

# driver_stats_fv is the feature view defined in your feature repo
driver_stats_fs = FeatureService(
    name="driver_activity_v1", features=[driver_stats_fv]
)
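Once it's registered with feast apply, you can pass the feature service in place of the feature list when fetching online features; a sketch reusing the quickstart's objects:

from feast import FeatureStore

store = FeatureStore(repo_path=".")

feature_vector = store.get_online_features(
    features=store.get_feature_service("driver_activity_v1"),
    entity_rows=[{"driver_id": 1001}],
).to_dict()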

Your project is ready.

Feast has a lot more to offer. Check out their docs for more.

Star the Feast Repo ⭐


9. ONNX Runtime - Production-grade AI engine to speed up training and inference


Meet ONNX Runtime, the open-source, high-performance engine built to supercharge machine learning models in the Open Neural Network Exchange (ONNX) format. Developed by Microsoft, it's your go-to tool for deploying ML models with high efficiency and speed across different platforms.

ONNX Runtime runs on multiple platforms, including Windows, Linux, and macOS, as well as edge devices and mobile platforms.

To use ONNX Runtime, simply install its library.

pip install onnxruntime

ONNX Runtime supports models from PyTorch, TensorFlow, and scikit-learn.

Let's use PyTorch for this demo.

Export the model using torch.onnx.export.

# `model` is a trained PyTorch model and `device` is the device it lives on
torch.onnx.export(model,                                # model being run
                  torch.randn(1, 28, 28).to(device),    # model input (or a tuple for multiple inputs)
                  "fashion_mnist_model.onnx",           # where to save the model (can be a file or file-like object)
                  input_names = ['input'],              # the model's input names
                  output_names = ['output'])            # the model's output names

Load the ONNX model with onnx.load.

import onnx
onnx_model = onnx.load("fashion_mnist_model.onnx")
onnx.checker.check_model(onnx_model)
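The inference snippet below assumes test_data and classes from the PyTorch FashionMNIST tutorial. If you're following along standalone, they can be recreated like this (a sketch, assuming torchvision is installed):

from torchvision import datasets
from torchvision.transforms import ToTensor

# FashionMNIST test split, loaded as tensors
test_data = datasets.FashionMNIST(
    root="data", train=False, download=True, transform=ToTensor()
)
classes = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]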

Create an inference session using ort.InferenceSession.

import onnxruntime as ort
import numpy as np
x, y = test_data[0][0], test_data[0][1]
ort_sess = ort.InferenceSession('fashion_mnist_model.onnx')
outputs = ort_sess.run(None, {'input': x.numpy()})

# Print Result
predicted, actual = classes[outputs[0][0].argmax(0)], classes[y]
print(f'Predicted: "{predicted}", Actual: "{actual}"')

Similarly, you can use TensorFlow or scikit-learn.

Check out the quickstart guide to learn more.

Star the ONNX Runtime Repo ⭐

Thank you for reading. Do you know any other useful tools? Let us know in the comments.
