
Workshop: make your first AI app in a few clicks with Python+Ollama+llama3

Hey there, you've probably heard that we're living in the AI era, so it's a good time to play with it on your local machine.

DevOps Pass AI has pretty good integration with Ollama, so you can install Ollama, pull the necessary model (the fresh llama3 in our case) and build a small app to play with it.

Installing Ollama and pulling a model

First of all, you have to add the Ollama app in DevOps Pass AI:

Ollama integration app

Now you have to click “Install Ollama” in the right panel of application actions:

Install Ollama action

After that you can refresh the list of docs below, switch to “Ollama remote models”, find “llama3” there (the freshest Llama model from Meta) and pull it:

Pull llama3 model

The pull will take a bit of time, as the model is about 4.7 GB:

Llama3 model
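
If you'd rather script the pull instead of using the UI, the ollama Python package (which we install later in this workshop) exposes the same operation. A minimal sketch, assuming the package's pull helper and a running Ollama server:

# pull_model.py — sketch: pull llama3 programmatically instead of via the UI
import ollama

# Stream progress updates while the ~4.7 GB model downloads;
# each chunk is assumed to be a dict with a 'status' field
for progress in ollama.pull('llama3', stream=True):
    print(progress.get('status', ''), end='\r', flush=True)
print()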

OK, now you're ready to play with LLM models, so let's write an app.

Start the Ollama endpoint

Before you write anything, don't forget to start Ollama so its API endpoint is available:

ollama serve


Keep that console running while you're playing with LLMs.
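
To verify the endpoint is up, you can hit Ollama's API on its default port (11434). A quick check using only the Python standard library:

# check_endpoint.py — sanity check that the Ollama API answers
import urllib.request

# /api/tags returns the models available locally; 11434 is Ollama's default port
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    print(resp.read().decode())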

Create a new Python env for Ollama

Let's keep our system Python clean and create a new Conda env. For that, add the Python app in DevOps Pass AI and click the “Create Conda environment” action in the right actions pane:

Create Conda env

Call the environment “ollama” and specify the following requirements.txt:

ollama==0.1.8
notebook==7.1.3

After that, activate the Conda Shell (an action in the right pane; it needs to be run only once). Now you're ready for app development.
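
To confirm the environment and the client can reach the server, here is a quick smoke test (a sketch; ollama.list() queries the running endpoint for locally pulled models):

# smoke_test.py — confirm the ollama client can talk to `ollama serve`
import ollama

models = ollama.list()  # asks the local Ollama server for available models
print(models)           # should include the llama3 model pulled earlier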

Simple app: DevOps helper

Let's write a really simple script which receives a prompt and returns code.

We can use it, for example, to generate simple Bash scripts, Ansible playbooks or whatever. Create the file app1.py:

import ollama
import sys

# Everything after the script name becomes the prompt
prompt = " ".join(sys.argv[1:])

stream = ollama.chat(
    model='llama3',
    messages=[
      # System prompt keeps answers code-only
      {'role': 'system', 'content': "You're a DevOps assistant tool, return only code and explain only if asked"},
      {'role': 'user', 'content': prompt}
    ],
    stream=True,
)

# Print the response token-by-token as it streams in
for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)

Now you can activate the Conda env and run the script. Let's say we want to create a new plugin for DevOps Pass AI which lists Pods in the current Kubernetes context and namespace:

# Activate Conda env ollama
micromamba activate ollama

# Run app
python app1.py "generate python function list() to list Kubernetes pods, return list of dicts, with name, status and age."

Not ideal, but it could be a good starting point!

List Kubernetes Pods function
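
For reference, a hand-written version of what we asked for might look like this. A sketch assuming the official kubernetes Python client; llama3's output will differ, and the function is named list_pods here (hypothetically) to avoid shadowing the built-in list:

# pods.py — sketch of the requested helper, using the kubernetes client package
from datetime import datetime, timezone
from kubernetes import client, config

def list_pods() -> list[dict]:
    """Return pods in the current context/namespace as dicts with name, status and age."""
    config.load_kube_config()  # uses the current kubeconfig context
    # namespace of the active context, falling back to 'default'
    namespace = config.list_kube_config_contexts()[1]['context'].get('namespace', 'default')
    pods = client.CoreV1Api().list_namespaced_pod(namespace)
    now = datetime.now(timezone.utc)
    return [
        {
            'name': p.metadata.name,
            'status': p.status.phase,
            'age': str(now - p.metadata.creation_timestamp),
        }
        for p in pods.items
    ]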

App two: Chat

OK, the first example is nice, but it doesn't support history; it's a one-shot script.

Let's improve it: we'll write a script which starts a chat by name (say, a JIRA story ID or anything else), keeps the history for context, and lets you return to the chat at any time.

# Init Ollama:
import ollama
import os, json, sys

# Message function: send a prompt, stream the answer and persist the history
def ask(msg: str):
    file_path = sys.argv[1] + '.json'
    # Load the previous history for this chat, or start a fresh one
    if os.path.exists(file_path):
        with open(file_path, 'r') as f:
            messages = json.load(f)
    else:
        messages = [{'role': 'system', 'content': "You're a DevOps assistant tool, return only code and explain only if asked"}]

    messages.append({"role": "user", "content": msg})

    stream = ollama.chat(
      model='llama3',
      messages=messages,
      stream=True,
    )
    resp = ""
    for chunk in stream:
      print(chunk['message']['content'], end='', flush=True)
      resp = resp + chunk['message']['content']
    print()
    # Store the assistant reply so the next question has the full context
    messages.append({"role": "assistant", "content": resp})
    with open(file_path, 'w') as f:
        json.dump(messages, f)

# Check for arguments
if len(sys.argv) == 1:
    print(f"ERROR: Specify chat name as argument: '{sys.argv[0]} CHAT_NAME'")
    exit(1)

# Main loop: read prompts until EOF (Ctrl+D) or Ctrl+C
while True:
    print(">>> ", end="", flush=True)
    req = sys.stdin.readline()
    if not req:
        break
    ask(req.strip())

Now you can start a chat for a new story:

Ollama chat

You can interrupt it with Ctrl+C and return to the chat later by starting the script again with the same argument, “STORY-1”.
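
Since the history is plain JSON, you can inspect or prune it easily. A quick sketch, assuming you ran the chat with the name STORY-1 as above (so the history lives in STORY-1.json):

# show_history.py — print a saved chat history
import json

with open('STORY-1.json') as f:
    for m in json.load(f):
        # first 80 characters of each message, prefixed with its role
        print(f"[{m['role']}] {m['content'][:80]}")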

Support Us, Contact Us

Give us a star, we're kitties ;)

Give us a star on GitHub or join our community on Slack.
