AI is changing the world as we know it, and for developers, embracing it can significantly boost productivity. It can help you ship new features faster, write test cases, and even find vulnerabilities in your code.
The internet offers many tools, but finding the right one can take time and effort. So, I have compiled a list of AI tools to help you become a better developer.
1. SWE-Kit: Open-source headless IDE for coding agents
As a developer, I have always wanted to build customized AI tools to let me chat with the codebase, automate pushing changes to GitHub, and ship new features automatically. Honestly, I couldn't find a single tool until this.
SWE-Kit is a headless IDE with features like LSPs, Code Indexing, and Code RAG. It offers a flexible runtime, which can run on any Docker host or remote server alongside specialized coding toolkits.
These toolkits include integrations with platforms like GitHub, Jira, and Slack, as well as tools such as file search and code indexing, which Composio powers.
The coding agent built with SWE-Kit scored an impressive 48.60% on SWE-bench Verified.
This comprehensive benchmark includes some real-world GitHub issues from popular libraries like Django, Scikit-learn, Flask, Sympy, etc.
It is compatible with all the major agent frameworks, such as LangChain, CrewAI, AutoGen, and LlamaIndex.
You can build and deploy your own agents, for example:
- GitHub PR Agent: This is used to automate the review of GitHub PRs.
- SWE Agent: You can build an SWE agent to write features, unit tests, documentation, etc., automatically.
- Chat with Codebase: You can build a tool for chatting with any remote or local codebase using the code indexing tool.
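To make the "chat with codebase" idea concrete, here is a deliberately simplified sketch of a keyword index over source files. Everything below is illustrative only; SWE-Kit's real indexing (LSP-backed, with Code RAG over embeddings) is far more sophisticated, and none of these function names come from its API.

```python
# Toy code index: map tokens to the files that contain them, then answer
# queries by intersecting the matching file sets. Hypothetical helper names.
from collections import defaultdict

def build_index(files: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercased token to the set of file paths containing it."""
    index = defaultdict(set)
    for path, text in files.items():
        for token in text.lower().split():
            index[token].add(path)
    return index

def search(index: dict[str, set[str]], query: str) -> list[str]:
    """Return files that contain every token in the query."""
    hits = [index.get(tok, set()) for tok in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

files = {
    "auth.py": "login checks user credentials and opens a session",
    "db.py": "connect opens a database session",
}
index = build_index(files)
print(search(index, "session"))        # ['auth.py', 'db.py']
print(search(index, "login session"))  # ['auth.py']
```

A real code-RAG pipeline replaces the keyword lookup with embedding similarity and feeds the retrieved files to an LLM as chat context, but the retrieve-then-answer shape is the same.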
Install swekit and composio-core to get started quickly.
pip install composio-core swekit
Install any framework of your choice.
pip install crewai composio-crewai
Now, let's create a coding agent with GitHub access.
composio add github
Generate a new agent scaffolding.
swekit scaffold crewai -o swe_agent
Run the agent.
cd swe_agent/agent
python main.py
This uses Docker as the default workspace environment. For more, see the documentation.
Visit the site and show support on Product Hunt.
2. Aider - The AI Pair-programmer
This is the perfect choice if you're looking for a pair programmer to help you ship code faster.
Aider lets you pair-program with LLMs to edit code in your local git repository. You can start a new project or work with an existing GitHub repo.
You can get started quickly like this:
pip install aider-chat
# Change the directory into a git repo
cd /to/your/git/repo
# Work with Claude 3.5 Sonnet on your repo
export ANTHROPIC_API_KEY=your-key-goes-here
aider
# Work with GPT-4o on your repo
export OPENAI_API_KEY=your-key-goes-here
aider
For more details, see the installation instructions and other documentation.
3. Mentat - A GitHub native coding agent
Mentat is an AI tool built to help you tackle any coding task from your command line.
Unlike Copilot, Mentat can coordinate edits across multiple files and locations. And unlike ChatGPT, Mentat understands the context of your project from the start, so there's no need to copy and paste!
It has a dedicated CLI tool to communicate directly with codebases and can generate and execute Python code from prompts in the terminal.
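Conceptually, "understanding the project from the start" means the tool gathers repository context before the first prompt instead of relying on pasted snippets. The sketch below is an illustration of that step under simple assumptions, not Mentat's actual implementation (which chunks and ranks files rather than naively concatenating them):

```python
# Illustrative context gathering: walk a project directory and concatenate
# source files into one prompt-context block, capped at a character budget.
from pathlib import Path

def gather_context(root: str, suffixes=(".py",), max_chars=8000) -> str:
    """Concatenate matching source files under `root` into one context string."""
    parts = []
    total = 0
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(encoding="utf-8", errors="replace")
            chunk = f"### {path}\n{text}\n"
            if total + len(chunk) > max_chars:
                break  # real tools rank and chunk instead of truncating
            parts.append(chunk)
            total += len(chunk)
    return "".join(parts)

# Example: build context from the current directory's Python files
# print(gather_context("."))
```

Passing whole-project context like this is what lets a CLI agent propose coordinated edits across several files at once.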
Follow the steps to run Mentat. First, create a Python virtual environment.
# Python 3.10 or higher is required
python3 -m venv .venv
source .venv/bin/activate
Clone the Mentat repository.
git clone https://github.com/AbanteAI/mentat.git
cd mentat
# install with pip in editable mode:
pip install -e .
Add OpenAI or any LLM provider's API key.
export OPENAI_API_KEY=<your key here>
Run Mentat from within your project directory. Mentat uses git, so if your project doesn't already have git set up, run git init. Then you can run Mentat with:
mentat <paths to files or directories>
For more information on Mentat, check the documentation.
Star the Mentat repository ⭐
4. AutoCodeRover - Autonomous Program Improvement
AutoCodeRover offers a fully automated solution for resolving GitHub issues, including bug fixes and feature additions.
By combining LLMs with advanced analysis and debugging capabilities, AutoCodeRover prioritizes patch locations to create and implement patches efficiently.
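To give a feel for what "prioritizing patch locations" means, here is a toy ranking that scores candidate files by keyword overlap with the issue text. This is a conceptual stand-in only; AutoCodeRover's real approach works on program structure (classes, methods, stack traces), not a bag-of-words match.

```python
# Toy fault localization: rank files by how many issue keywords they contain.
def rank_patch_locations(issue: str, files: dict[str, str]) -> list[tuple[str, int]]:
    """Return (path, score) pairs, highest-scoring candidate first."""
    keywords = {w for w in issue.lower().split() if len(w) > 3}
    scores = {
        path: sum(1 for w in keywords if w in text.lower())
        for path, text in files.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

issue = "Fix crash when parsing empty config file"
files = {
    "parser.py": "def parse(config): handle config file contents",
    "cli.py": "command line entry point",
}
print(rank_patch_locations(issue, files)[0][0])  # parser.py ranks first
```

Once a location is ranked highest, the agent can focus its LLM-generated patch there instead of editing the whole repository blindly.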
To get started, set the OPENAI_KEY environment variable (or the equivalent key for whichever LLM provider you use):
export OPENAI_KEY=sk-YOUR-OPENAI-API-KEY-HERE
Build and start the Docker image:
docker build -f Dockerfile -t acr .
docker run -it -e OPENAI_KEY="${OPENAI_KEY:-OPENAI_API_KEY}" -p 3000:3000 -p 5000:5000 acr
Check out their official repository for more information.
Star the AutoCodeRover repository ⭐
5. Continue - Leading AI-powered code assistant
You have probably heard of Cursor, the popular AI-powered IDE; Continue is similar but open source under the Apache 2.0 license.
It is highly customizable and lets you add any language model for auto-completion or chat. This can immensely improve your productivity. You can add Continue to VS Code and JetBrains.
Key features
- Chat to understand and iterate on code in the sidebar
- Autocomplete to receive inline code suggestions as you type
- Edit to modify code without leaving your current file
- Actions to establish shortcuts for everyday use cases
For more, check the documentation.
6. Qodo Merge: Tool for automated pull request analysis
This open-source tool from CodiumAI automates GitHub pull request review, analysis, feedback, and suggestions. It can help you become more productive with pull requests and is compatible with other version control systems like GitLab and Bitbucket.
It has both self-hosted and cloud-hosted solutions.
You will need an OpenAI API key and a GitHub or GitLab access token for a self-hosted solution.
To use it locally, install the library.
pip install pr-agent
Then, run the relevant tool with the script below.
Make sure to fill in the required parameters (user_token, openai_key, pr_url, command):
from pr_agent import cli
from pr_agent.config_loader import get_settings

def main():
    # Fill in the following values
    provider = "github"  # GitHub provider
    user_token = "..."   # GitHub user token
    openai_key = "..."   # OpenAI key
    pr_url = "..."       # PR URL, for example 'https://github.com/Codium-ai/pr-agent/pull/809'
    command = "/review"  # Command to run (e.g. '/review', '/describe', '/ask="What is the purpose of this PR?"', ...)

    # Set the configurations
    get_settings().set("CONFIG.git_provider", provider)
    get_settings().set("openai.key", openai_key)
    get_settings().set("github.user_token", user_token)

    # Run the command. Feedback will appear in GitHub PR comments
    cli.run_command(pr_url, command)

if __name__ == '__main__':
    main()
You can also use Docker images or run it from source. The documentation has more on Qodo Merge.
Star the Qodo Merge repository ā
7. OpenHands: Platform for AI software developer agents
OpenHands is one of the leading open-source platforms for AI agents and a direct competitor of Devin. An OpenHands agent can build new greenfield projects, add features to existing codebases, debug issues, and more.
Recently, their agent also topped the SWE-bench leaderboard with 53%.
To start with OpenHands, you need Docker 26.0.0+ (or Docker Desktop 4.31.0+) running on Linux, macOS, or WSL.
Pull the Docker image and run the container.
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.12-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.12-nikolaik \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.12
After running the command above, you'll find OpenHands running at http://localhost:3000.
Upon launching OpenHands, you'll see a settings modal. Select an LLM Provider and LLM Model, and enter the corresponding API Key. You can change these anytime from the Settings button in the UI.
If your model is not listed, toggle the advanced mode and enter it manually.
They provide four methods for working with agents: an interactive GUI, a command-line interface (CLI), and non-interactive use through headless mode or GitHub Actions. Each has its strengths; for more, refer to the documentation.
Star the OpenHands repository ⭐
8. Cody from Sourcegraph: Coding assistant for IDEs
Cody is an open-source project from Sourcegraph designed to supercharge your coding workflow directly within your IDE, whether it's VS Code, JetBrains, or others. Cody leverages advanced search as a coding assistant to pull context from local and remote codebases. This enables seamless access to details about APIs, symbols, and usage patterns at any scale, right from your IDE.
With Cody, you can chat with your codebase, make inline edits, get code suggestions, and enjoy features like auto-completion, all tailored to help you code faster and more effectively.
You can simply install it in your IDE and get started. For more, check the documentation.
9. VannaAI: Chat with SQL database
I dread writing SQL queries, yet SQL remains one of the most critical technologies in modern software development: almost every company relies on it to interact with relational databases. As they say, there is an AI tool for everything, and for SQL databases it is Vanna AI.
It is an open-source tool that lets you chat with SQL databases using natural language.
Vanna works in two easy steps: train a RAG "model" on your data, then ask questions in natural language to get back SQL queries that can be set up to run against your database automatically.
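The train-then-ask flow can be sketched with toy stand-ins. Nothing below is Vanna's actual API: the real library stores trained DDL, documentation, and SQL pairs in a vector database and passes the retrieved matches to an LLM, while this sketch uses a plain list and simple word overlap for retrieval.

```python
# Toy retrieve-then-generate flow for text-to-SQL, with hypothetical names.
class ToySQLRag:
    def __init__(self):
        self.examples = []  # (question, sql) pairs added by "training"

    def train(self, question: str, sql: str) -> None:
        self.examples.append((question, sql))

    def ask(self, question: str) -> str:
        # Retrieval: pick the stored example sharing the most words.
        qwords = set(question.lower().split())
        best = max(self.examples,
                   key=lambda ex: len(qwords & set(ex[0].lower().split())))
        # A real system would send `best` as context to an LLM here;
        # this sketch just returns the retrieved SQL directly.
        return best[1]

rag = ToySQLRag()
rag.train("top customers by sales", "SELECT name FROM customers ORDER BY sales DESC;")
rag.train("count of orders", "SELECT COUNT(*) FROM orders;")
print(rag.ask("who are the top customers by sales?"))
```

The point is that "training" in Vanna's sense is indexing examples for retrieval, not fine-tuning model weights, which is why it is cheap to do on your own schema.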
Getting started with Vanna is easy. Install it using pip:
pip install vanna
# The import statement will vary depending on your LLM and vector database. This is an example for OpenAI + ChromaDB
from vanna.openai.openai_chat import OpenAI_Chat
from vanna.chromadb.chromadb_vector import ChromaDB_VectorStore

class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, config=config)

vn = MyVanna(config={'api_key': 'sk-...', 'model': 'gpt-4-...'})
# See the documentation for other options
You can train models using your custom data.
Depending on your use case, you may or may not need to run these vn.train commands.
Train with DDL statements.
vn.train(ddl="""
CREATE TABLE IF NOT EXISTS my_table (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    age INT
)
""")
Asking Questions
vn.ask("What are the top 10 customers by sales?")
You will get SQL output like this:
SELECT c.c_name AS customer_name,
       SUM(l.l_extendedprice * (1 - l.l_discount)) AS total_sales
FROM snowflake_sample_data.tpch_sf1.lineitem l
JOIN snowflake_sample_data.tpch_sf1.orders o
  ON l.l_orderkey = o.o_orderkey
JOIN snowflake_sample_data.tpch_sf1.customer c
  ON o.o_custkey = c.c_custkey
GROUP BY customer_name
ORDER BY total_sales DESC
LIMIT 10;
See the documentation for more details.
Thanks for reading. If you use any other AI tool that has helped you, comment below.