DEV Community

Muhammad Azis Husein

Enabling AI to Boost Coding Productivity

In the artificial intelligence (AI) era, nearly every sector of life can be assisted by a machine, including software engineering. In the past, when we ran into programming problems, we would browse for solutions on Stack Overflow or similar sites. With the advent of large language models (LLMs), the way we seek solutions has shifted: discussing and brainstorming with an AI is often the first approach to any problem, programming-related or otherwise.

There are pros and cons to using AI for programming tasks, but I'd rather not debate them here; it's your right to choose based on your own considerations. Instead, I want to share a step-by-step guide to enabling and using AI to boost our productivity.

Installing Ollama

  1. Go to the Ollama download page here

  2. Download the installer for our operating system (on Linux, we can fetch it with curl -fsSL https://ollama.com/install.sh | sh)

  3. Install it and wait until the service is active

Ollama Starter Page

NB: Once it's running, it may suggest pulling a first model. We can skip this step if we want a different one; in my case, I don't use the suggested model and pull another model instead.

Installing Model

  1. We can find open-source models in the Ollama library here

  2. After deciding which model to install, just pull it in the terminal. Example: ollama pull dolphin-mistral:latest (here I pull the Dolphin Mistral model at its latest version) -- note: it may take a while depending on our bandwidth

  3. If an error like Error: max retries exceeded: ... appears, it means our connection dropped. Just pull again; it will resume from the last percentage instead of starting from zero

  4. After it succeeds, try running it with ollama run dolphin-mistral (adjust the model name to the one we pulled before)
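Besides the interactive terminal session, the Ollama service also exposes a local REST API on port 11434, which is what editor plugins talk to under the hood. Here is a minimal Python sketch of calling it directly; the helper names (build_payload, ask_ollama) are my own, and it assumes the server is running locally with the dolphin-mistral model already pulled:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }

def ask_ollama(prompt: str, model: str = "dolphin-mistral:latest") -> str:
    """Send a prompt to the local Ollama server and return its response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local address
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server running locally):
# print(ask_ollama("Explain list comprehensions in one sentence."))
```

Because this is plain HTTP on localhost, any script or tool on our machine can reuse the model the same way the IDE plugin below does.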

At this point, we can already chat with the AI model inside our terminal. Even when we go offline, it remains accessible because it lives on our local machine. But to optimize our productivity, let's connect it to our IDE or code editor. Since I want to connect it to a JetBrains editor (IntelliJ, PyCharm, WebStorm), I'll use the Continue plugin. As far as I know, it can also be integrated with VSCode, but you could use another plugin such as Twinny instead.

Installing Plugin and Connect to IDE

  1. Follow the download steps on the plugin's page (in my case: Continue Repo)

  2. Open the IDE (I tried using IntelliJ)

  3. Open the Settings and go to Plugins

  4. Find Install plugin from disk

  5. Choose the file we downloaded before

  6. Wait until it's installed, then apply

  7. After we restart our IDE, the feature will appear in the right sidebar (or somewhere else in a different editor)

  8. The first time, the default models may be free-trial versions of some LLMs (GPT, Claude, Mixtral, etc.). To use the model we pulled before, we can open config.json to add it (click the plus + button and find Open Config in the models options)

  9. Add our model at the first index of the models list (we can also remove the other models if we want)

Example:

{
    "models": [
        {
          "title": "Ollama Dolphin",
          "provider": "ollama",
          "model": "dolphin-mistral:latest"
        },
        ...
    ],
    ...
}
  10. Save it and restart our IDE. Now we can chat with our model inside the IDE and even ask it to modify our code
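Beyond chat, Continue can also use a local model for inline tab autocomplete through a separate entry in the same config.json. Treat the exact key name (tabAutocompleteModel) as an assumption to verify against the plugin's current docs, since its config format evolves; a small, fast model is usually recommended here, but the model we already pulled works as a starting point:

```json
{
    "models": [
        {
          "title": "Ollama Dolphin",
          "provider": "ollama",
          "model": "dolphin-mistral:latest"
        }
    ],
    "tabAutocompleteModel": {
        "title": "Ollama Dolphin (autocomplete)",
        "provider": "ollama",
        "model": "dolphin-mistral:latest"
    }
}
```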
