In the artificial intelligence (AI) era, nearly every sector of life can be assisted by a machine, including software engineering. In the past, when we ran into programming problems, we would browse solutions on Stack Overflow or similar sources. With the advent of large language models (LLMs), the way we seek solutions has shifted: discussing and brainstorming with an AI is now often the first approach when we encounter a problem, whether it's related to programming or not.
There are pros and cons to using AI to assist with programming tasks, but I won't discuss them here; it's your right to choose based on your own considerations. Instead, I want to share a step-by-step guide to setting up a local AI model and using it to boost our productivity.
Installing Ollama
- Go to the Ollama download page (https://ollama.com/download)
- Download the installer for our operating system (on Linux, we can fetch and run the install script directly):
curl -fsSL https://ollama.com/install.sh | sh
- Install it and wait until the service is active

NB: after installation, Ollama might suggest pulling a first model. We can skip this step if we want a different one; in my case, I skip the suggested model and pull another one instead.
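Before moving on, it's worth checking that the service is actually up. A minimal check, assuming the default setup where the Ollama server listens on localhost port 11434:

```sh
# Print the installed CLI/server version
ollama --version

# The server's root endpoint should reply with "Ollama is running"
curl http://localhost:11434
```

If the curl call fails, the service isn't running yet; on Linux the install script registers Ollama as a systemd service, so `systemctl status ollama` can help diagnose it.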
Installing a Model
We can find open-source models in the Ollama library (https://ollama.com/library).
After we decide which model we want to install, just pull it from the terminal. Example:
ollama pull dolphin-mistral:latest
(Here I pull the dolphin-mistral model at its latest version.) Note: the download might take a while depending on our bandwidth. If an error like this appears:
Error: max retries exceeded: ...
it means our connection dropped. Just run the pull again; it will resume from the last percentage instead of starting from zero. After it succeeds, try to run it with
ollama run dolphin-mistral
(adjust the model name according to the model we pulled before)
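If we end up pulling several models, it helps to see what is stored locally and to clean up the ones we no longer use. A short sketch with two standard Ollama commands:

```sh
# List every model on this machine, with tags and sizes
ollama list

# Remove a model we no longer need (frees disk space)
ollama rm dolphin-mistral:latest
```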
At this point, we can already chat with the AI model inside our terminal. Even when we go offline, it can still be accessed, because it lives on our local machine. But to really boost our productivity, let's connect it to our IDE or code editor. Because I want to connect it to a JetBrains editor (IntelliJ, PyCharm, WebStorm), I'll use the Continue plugin. As far as I know, it can also be integrated with VS Code, but you could use another plugin such as Twinny instead.
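Under the hood, editor plugins like Continue talk to the local HTTP API that Ollama exposes. A minimal sketch of that API, assuming the default port and the model pulled above:

```sh
# Ask the local Ollama server for a completion;
# "stream": false returns the whole answer as a single JSON object
curl http://localhost:11434/api/generate -d '{
  "model": "dolphin-mistral:latest",
  "prompt": "Explain what a linked list is in one sentence.",
  "stream": false
}'
```

Nothing leaves the machine here, which is exactly why the setup keeps working offline.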
Installing the Plugin and Connecting It to the IDE
- Follow the download steps on the plugin's page (in my case: the Continue repo)
- Open the IDE (I tried using IntelliJ)
- Open the Settings and go to Plugins
- Find Install Plugin from Disk
- Choose the file we downloaded before
- Wait until it's installed, then apply
- After restarting the IDE, the feature will appear in the right sidebar (it might be somewhere else in a different editor)
The first time, the default model might be a free-trial version of some LLM (GPT, Claude, Mixtral, etc.). To use the model we pulled before, we can edit `config.json` to add it: click the `+` button, find Open Config in the models options, and add our model at the first index of the models list (we can also remove the other models if we want).
Example:
{
  "models": [
    {
      "title": "Ollama Dolphin",
      "provider": "ollama",
      "model": "dolphin-mistral:latest"
    },
    ...
  ],
  ...
}
- Save it and restart the IDE. Now we can chat with our model inside the IDE and even ask it to modify our code (an optional tab-completion tweak is sketched below).
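Continue can also route inline tab completion through a local model. A hedged sketch of that extra section in `config.json`: the `tabAutocompleteModel` key comes from Continue's configuration options, and reusing dolphin-mistral here is just an example (a small code-oriented model would usually respond faster):

```json
{
  "tabAutocompleteModel": {
    "title": "Local Autocomplete",
    "provider": "ollama",
    "model": "dolphin-mistral:latest"
  }
}
```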
- Muhammad Azis Husein