
DJ Johnson
I made a Discord bot with lmstudio.js!

Demo as a gif

I have been using LM Studio as my main driver for local text generation and thought it would be cool to integrate it into a Discord bot. The post is a bit long due to all of the setup, so feel free to skip straight to the code: lmstudio-bot github

  • LM Studio https://lmstudio.ai/ allows you to run LLMs on your personal computer, entirely offline by default, which makes it completely private.

  • Discord https://discord.com/ is used by a lot of different communities, including gamers and developers! It gives users a place to communicate about different topics.

Note: For clarity I will be using "bot" when referencing the Discord bot and "model" when referencing the Large Language Model from LM Studio.

Setting up LM Studio

Navigate over to https://lmstudio.ai/ and install the application for your machine.

Installing an lmstudio-community model

Search for "lmstudio-community gemma" (Hugging Face model card) and you'll find really small models that should fit on most computers! If you're not too worried about memory, "lmstudio-community llama 3" (Hugging Face model card) is also a good start.

lmcommunity gemma example

Turning the server on

Go to the server section and select a model from the dropdown

LM Studio models

Installing lms cli

We are going to use this to help scaffold our project.

Run this in your terminal as described in the instructions here:



```shell
npx lmstudio install-cli
```

Be sure to open a new terminal window after installation

Setting up the Discord bot

This is probably the most difficult part of the whole process.

Create a new bot here

New bot creation dialog

Find client id

After creating the bot (application) you'll see the bot ID and the bot token (if you miss the token you can reset it later in the bot section)

Application ID example

Create a server

If you do not have a server yet, click the plus button on the left side in Discord and create a new server.

Create a server image example

Turn on dev mode

This was moved within the last two months to the "Advanced" section; it used to be under Appearance > Advanced.

Dev mode location example

Grab the guildId (aka the server Id)

(You'll need to turn dev mode on to see this)

Server Id location

Add the bot to the server

Under "Installation" there is an option to create an install link (save the page after the link shows up):

Installation link example

Once you go to the url it shows, you should see a dropdown to add the bot to your new server!

Writing the code

Project setup

Now we're ready to start our project! Going back to our terminal:



```shell
lms create
```

lms create example

Then give your project a name like lmstudio-bot, after which you can cd lmstudio-bot and open up src/index.ts in your preferred editor.

Note: @lmstudio/sdk comes pre-installed with this process; if you're not using lms create then you will need to install it:



```shell
npm install @lmstudio/sdk
```


If everything went well, we should see something like this:

initial editor example

Installing discord.js and setting up dotenv

Let's go ahead and add discord.js now.



```shell
npm install discord.js dotenv
```

At the top of our index.ts we will want to import dotenv's config:



```typescript
import 'dotenv/config';
import { LMStudioClient } from '@lmstudio/sdk';
```

You will want a new file at the root of your project: .env



```shell
touch .env
```

Your .env file should contain three variables: CLIENT_TOKEN, CLIENT_ID, GUILD_ID

It should look something like this:



```
CLIENT_TOKEN=token_found_in_the_bot_section
CLIENT_ID=application_id_number_from_earlier
GUILD_ID=server_id
```

Note: If you misplaced your bot token you can always reset it and get a new one in the bot section

At the top of the index.ts file, let's go ahead and declare these as variables:



```typescript
import 'dotenv/config';
import { LMStudioClient } from '@lmstudio/sdk';

// `!` is TypeScript's non-null assertion: it tells the compiler these values are not null or undefined
const CLIENT_TOKEN = process.env.CLIENT_TOKEN!;
const CLIENT_ID = process.env.CLIENT_ID!;
const GUILD_ID = process.env.GUILD_ID!;
```

After that, we will want to clear out the main function; your entire index.ts file should look like this now:



```typescript
import 'dotenv/config';
import { LMStudioClient } from '@lmstudio/sdk';

// `!` is TypeScript's non-null assertion: it tells the compiler these values are not null or undefined
const CLIENT_TOKEN = process.env.CLIENT_TOKEN!;
const CLIENT_ID = process.env.CLIENT_ID!;
const GUILD_ID = process.env.GUILD_ID!;

async function main() {
}

main();
```
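The `!` assertions work, but they will happily pass undefined along if a variable is missing from .env, and you only find out later when the Discord login fails. If you'd like a louder failure, here's a small optional helper (my own addition; the name requireEnv is made up and not part of the scaffold):

```typescript
// Hypothetical helper: fails fast with a clear message instead of
// letting an undefined token surface later as a confusing login error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage, replacing the `!` assertions:
// const CLIENT_TOKEN = requireEnv('CLIENT_TOKEN');
// const CLIENT_ID = requireEnv('CLIENT_ID');
// const GUILD_ID = requireEnv('GUILD_ID');
```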

Setting up Discord commands

Logging our bot into Discord



```typescript
...
import { LMStudioClient, LLMSpecificModel } from '@lmstudio/sdk';
import { Client, GatewayIntentBits } from 'discord.js';

...

async function main() {
  const client = new Client({ intents: [GatewayIntentBits.Guilds] });

  client.on('ready', () => {
    console.log(`Logged in as ${client.user?.tag}!`);
  });

  client.login(CLIENT_TOKEN);
}

main();
```

Logged in bot to discord

Adding a 'Ping' command to Discord

Note: To keep things simple, I am going to keep this a bit more minimal than what is shown in the Discord docs.



```typescript
...
import { Client, GatewayIntentBits, REST, Routes, SlashCommandBuilder } from 'discord.js';

const GUILD_ID = ...

function createDiscordSlashCommands() {
  const pingCommand = new SlashCommandBuilder()
    .setName('ping')
    .setDescription('A simple check to see if I am available')
    .toJSON();

  const allCommands = [
    pingCommand
  ];

  // Gives a pretty-print view of the commands
  console.log();
  console.log(JSON.stringify(allCommands, null, 2));
  console.log();

  return allCommands;
}

// We send our commands to Discord so it knows what to look for
async function activateDiscordSlashCommands() {
  const rest = new REST({ version: '10' }).setToken(CLIENT_TOKEN);

  try {
    console.log('Started refreshing bot (/) commands.');

    await rest.put(Routes.applicationGuildCommands(CLIENT_ID, GUILD_ID), {
      body: createDiscordSlashCommands()
    });

    console.log('Successfully reloaded bot (/) commands.');
  } catch (error) {
    console.error(error);

    return false;
  }

  console.log();

  return true;
}

...

// // uncomment this if you want to test the slash command activation
// activateDiscordSlashCommands().then(() => {
//   console.log('Finished activating Discord / Commands');
// });
```

Active discord slash command working

We can also go to our server now and see that our new command exists!

Ping command appearing in discord

Sending our slash commands to the server

We should go ahead and activate the slash commands in our main function now.



```typescript
...

async function main() {
  const slashCommandsActivated = await activateDiscordSlashCommands();

  if (!slashCommandsActivated) throw new Error('Unable to create or refresh bot (/) commands.');

  const client = ...
}

...
```

Responding to the ping command

Currently /ping does not do anything; let's fix that!



```typescript
async function main() {
  ...

  client.on('ready', ... );

  // this is for responding to slash commands, not individual messages
  client.on('interactionCreate', async interaction => {
    // if we did not receive a command, ignore it
    if (!interaction.isChatInputCommand()) return;

    if (interaction.commandName === 'ping') {
      await interaction.reply('Pong!');
    }
  });

  client.login(CLIENT_TOKEN);
}
```

Pong response from Discord bot

That wraps up our Discord intro!

You are doing fine

Setting up LM Studio responses

Getting a model to handle the responses

There are a few different ways to get a model through the SDK. If a model is not in memory yet we could grab it manually; in this case, let's just find the first available model:



```typescript
const GUILD_ID = ...

async function getLLMSpecificModel() {
  // create the client
  const client = new LMStudioClient();

  // get all the pre-loaded models
  const loadedModels = await client.llm.listLoaded();

  if (loadedModels.length === 0) {
    throw new Error('No models loaded');
  }

  console.log('Using model: %s to respond!', loadedModels[0].identifier);

  // grab the first available model
  const model = await client.llm.get({ identifier: loadedModels[0].identifier });

  // alternative
  // const specificModel = await client.llm.get('lmstudio-community/gemma-1.1-2b-it-GGUF/gemma-1.1-2b-it-Q2_K.gguf')

  return model;
}
```
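The "grab it manually" route mentioned above would look something like this. This is a hedged sketch: it assumes the SDK's client.llm.load call and an example model path, so check your SDK version's docs for the exact signature, and substitute a model you have actually downloaded. It also won't run without LM Studio's server up:

```typescript
import { LMStudioClient } from '@lmstudio/sdk';

// Sketch: reuse an already-loaded model if one exists, otherwise load one ourselves.
async function getOrLoadModel() {
  const client = new LMStudioClient();
  const loadedModels = await client.llm.listLoaded();

  if (loadedModels.length > 0) {
    return client.llm.get({ identifier: loadedModels[0].identifier });
  }

  // Example path; substitute a model you have downloaded in LM Studio.
  return client.llm.load('lmstudio-community/gemma-1.1-2b-it-GGUF/gemma-1.1-2b-it-Q2_K.gguf');
}
```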

Getting a response with lmstudio.js

Now we can set up a function that actually returns a response with that model!



```typescript
import { LMStudioClient, LLMSpecificModel } from '@lmstudio/sdk';

async function getLLMSpecificModel ...

async function getModelResponse(userMessage: string, model: LLMSpecificModel) {
  // send a system prompt (telling the model how it should "act") and the message we want the model to respond to
  const prediction = await model.respond([
    { role: 'system', content: 'You are a helpful discord bot responding with short and useful answers. Your name is lmstudio-bot' },
    { role: 'user', content: userMessage },
  ]);

  // return what the model responded with
  return prediction.content;
}

// // uncomment this if you want to test the response
// getLLMSpecificModel().then(async model => {
//   const response = await getModelResponse('Hello how are you today', model);
//   console.log('responded with %s', response);
// });
```

Note: The system message is not required, but it helps the model be more specific in its actions.

Example response from LM Studio
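Since the system prompt is the main knob you'll tweak, one option is pulling the message array out into a small pure helper (my own refactor; the helper and type names are made up, but the role/content shape matches what model.respond receives above):

```typescript
// The shape of each chat message passed to model.respond()
type ChatMessage = { role: 'system' | 'user'; content: string };

const SYSTEM_PROMPT =
  'You are a helpful discord bot responding with short and useful answers. Your name is lmstudio-bot';

// Build the full message array for a single user question
function buildMessages(userMessage: string): ChatMessage[] {
  return [
    { role: 'system', content: SYSTEM_PROMPT },
    { role: 'user', content: userMessage },
  ];
}
```

Then getModelResponse can call model.respond(buildMessages(userMessage)), and experimenting with different personas is a one-line change.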

Tying it all together!

Adding an 'Ask' command

Here is what you've been waiting for: now that we're all set up, let's create a new command that responds to a user's question with LM Studio!

Like we did with ping, let's create the command first:



```typescript
function createDiscordSlashCommands() {
  const pingCommand = ...

  const askCommand = new SlashCommandBuilder()
    .setName('ask')
    .setDescription('Ask LM Studio Bot a question.')
    // create a specific field to look for our question
    .addStringOption(option => (
      option.setName('question')
        .setDescription('What is your question?')
        .setRequired(true)
    ))
    .toJSON();

  const allCommands = [
    pingCommand,
    askCommand
  ];

  // pretty-print
  ...

  return allCommands;
}
```

The addStringOption call allows us to specify the structure of the Discord command.

Note: Our new askCommand is going to be sent with activateDiscordSlashCommands in our main function, so we do not need to do anything extra there!

If you run the code so far you'll already see /ask!

First look at ask command

Responding to the ask command with LM Studio!

Let's start off by adding our model:



```typescript
async function main() {
  const model = await getLLMSpecificModel();

  if (!model) throw new Error('No models found');

  const slashCommandsActivated = ...
}
```

And then responding to the command with our model:



```typescript
client.on('interactionCreate', async interaction => {
  ...

  if (interaction.commandName === 'ask') {
    // this might take a while, put the bot into a "thinking" state
    await interaction.deferReply();

    // we can assume `.getString('question')` has a value because we marked it as required on Discord
    const question = interaction.options.getString('question')!;
    console.log('User asked: "%s"', question);

    try {
      const response = await getModelResponse(question, model);

      // replace our "deferred response" with an actual message
      await interaction.editReply(response);
    } catch (e) {
      await interaction.editReply('Unable to answer that question');
    }
  }
});

client.login(CLIENT_TOKEN);
```

Notes:

  • interaction.deferReply() is needed for responses that might take a while; it also gives the bot a "thinking" state.
  • interaction.editReply() is required when using deferReply; it tells the bot to stop "thinking" and finally respond.
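One caveat the happy path above skips: Discord rejects messages longer than 2000 characters, and a chatty model can blow past that, which makes editReply throw. A minimal guard (my own addition; the 2000-character limit is Discord's, the helper name is made up):

```typescript
const DISCORD_MESSAGE_LIMIT = 2000;

// Trim model output so interaction.editReply() stays under Discord's message limit
function truncateForDiscord(text: string): string {
  if (text.length <= DISCORD_MESSAGE_LIMIT) return text;
  return text.slice(0, DISCORD_MESSAGE_LIMIT - 1) + '…';
}

// Usage inside the 'ask' handler:
// await interaction.editReply(truncateForDiscord(response));
```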

sending an ask

getting an ask response back

Final note: Language models are NOT to be taken as a source of truth! In this case, the acceptable answers would have been Ryan Reynolds, Chris Hemsworth, or even myself (honorable mention). Maybe a model will get trained on this article someday and give a better answer.

Annndddd We are done!

Congrats! Now we have a working bot that reads and responds to our messages!

Recap: We installed LM Studio, downloaded a model, turned on the server, enabled developer mode on Discord, created a server and gathered its information, learned how to return responses from our model, and set up and responded to slash commands!

There are multiple avenues to take from here like responding to direct messages but I'll leave those for you to explore.

Happy Coding!

Be happy for me

Top comments (2)

Ein

when I try to load my model, I get this error, and yes lm-studio and sdk is up to date

`Using model:l3.1-8b-dark-planet-slush to respond!
W [LMStudioClient][LLM][ClientPort] Produced communication warning: Received invalid result for rpc, endpointName = getModelInfo, result = {"descriptor":{"identifier":"l3.1-8b-dark-planet-slush","path":"Triangle104/L3.1-8B-Dark-Planet-Slush-Q4_K_S-GGUF/l3.1-8b-dark-planet-slush-q4_k_s.gguf"},"instanceReference":"pI9KTVW/SiRxrnw7tBHbhcSs"}. Zod error:

  • result.sessionIdentifier: Required

This is usually caused by communication protocol incompatibility. Please make sure you are using the up-to-date versions of the SDK and LM Studio.`

DJ Johnson

This happened with someone on GitHub as well; it sounds like the scaffolding code changed, maybe? Doing a new lms create and moving this code into the new scaffolding seems to work.