
Lolo Code

Originally published at Medium

Create a free OpenAI GPT-3 Bot to Have Someone to Talk to at Work via a Slack app with Serverless

OpenAI invites everyone to test ChatGPT, its new AI-powered chatbot, so let’s use it to quickly build a Slack app that talks to it for free. Yes, we created a library function for this, so it’ll be quick and you won’t need much programming skill.

Check out a preview of the finished work below.

(Screenshot)

Here are the steps required to do this:

  1. We need to create a new Slack app and enable slash commands (I’ve picked /talkhere, as you can see in the screen above).
  2. When this command is called via Slack we will call OpenAI via a library function in Lolo. You’ll be able to set temperature and max tokens if you want.
  3. Then we’ll route this data to Slack via another Library function in Lolo allowing us to override the message with the OpenAI answer.

The result will be an answer in Slack from OpenAI based on our prompt.

Check out our serverless Lolo app below as well.

(Screenshot)

You have $18 worth of free credit in OpenAI that you can use, and it will last a long time. The Lolo app is free, and creating a Slack bot in Slack is free as well, so you don’t need to pay anything for this to work.

We tested a bunch of things: cleaning up code, doing translations, asking basic questions, talking to it, creating Python code, correcting messages. You name it.

Getting an OpenAI API Key

You’ll need an OpenAI API key, which you can quickly get by signing up here. Then navigate to API Keys to create a new key. Save the key somewhere; we’ll need it.

(Screenshot)

Setting up a Lolo Application

Create a Lolo account here if you don’t have one already. Then create a new app and find an HTTP trigger in the menu to the left of the graph.

(Screenshot)

Add it to your application and then make sure to set a path and rename it to Slack Webhook.

Psst! We forgot to show this, but you should also set the method to POST and not GET. Slack will send us a POST request, so if the method isn’t set to POST it won’t work.

Navigate to the HTTP trigger again to copy the external URL; we’ll need it in a bit when we set up our Slack command.

(Screenshot)

We’ll also set up another node right away to send back the acknowledgment Slack requires within 3 seconds. So, create a new function in the bottom right corner and add this code block to it.

exports.handler = async (ev, ctx) => {
  const { route, emit } = ctx;

  // Emit the response right away to Slack
  emit('response', { statusCode: '200' });

  ev.responseURL = ev.body.response_url;
  ev.message = ev.body.text;

  // Route the event object to the next node
  route(ev);
};

You’ll see in the code above that we are also getting the responseURL from the payload. We’ll need this one later to send back a message to Slack.

We’re also collecting the message that we’ll use as the prompt when calling OpenAI and setting it to the event object (so it can be used in another node later).
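For reference, Slack delivers the slash command as a form-encoded POST, which the HTTP trigger parses into ev.body for us. A rough sketch of the fields we actually use is below (the values are made up, and Slack sends more fields than shown here):

// Illustrative shape of ev.body for a slash command
// (made-up values; Slack sends more fields than this)
const exampleBody = {
  command: '/talkhere',                                  // the slash command that was invoked
  text: 'write a haiku about standups',                  // everything typed after the command -> our prompt
  response_url: 'https://hooks.slack.com/commands/...',  // where we can POST our delayed reply
  user_id: 'U0XXXXXXX',
  channel_id: 'C0XXXXXXX'
};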

(Screenshot)

Now let’s create a Slack app so we can use this webhook we’ve set up.

Creating a Slack Application

Go here and press Create New App, unless you want to use an existing application.

Then navigate to Slash Commands and create a new command. You can name it whatever you want, but you need to set the external URL that we got from Lolo as the request URL. Slack will send a POST request to it when the command is invoked, so we can process that request (make the OpenAI call and so on).

After this make sure to install your application to your workspace.

(Screenshot)

We’re done here now so let’s make the OpenAI API call.

Calling OpenAI with the Slack Message as the Prompt

Now we’re heading back to the Lolo app and looking for the Lolo/OpenAI library function in the palette, where you found the HTTP trigger earlier.

Open it up to set the required values. Use the OpenAI API key that we got earlier. You can create another one if you’ve lost the first one.

(Screenshot)

Here we’ve set the temperature to 0.3, which controls how much creativity you want in the answer. We’ve also set max_tokens to 4000 so the message doesn’t get cut off. There are several more options you can set here, but make sure you use the right wording for them. The first time around, use these two options or none at all; after that you can fine-tune it as you want.

The model is set to davinci, which is the more expensive one; you can change this to ada or another cheaper model. The prompt is set to a dynamic value from the event object that we have access to in each node. Remember that we set ev.message to the text we received from the Slack payload earlier.
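If you’re curious what the library function is doing for you, it boils down to a request against OpenAI’s completions endpoint. Here is a minimal sketch of a roughly equivalent call (assuming Node 18+ for the global fetch and an OPENAI_API_KEY environment variable; the Lolo function may differ in its details):

// Rough sketch of the underlying completions request
// (assumes Node 18+ and an OPENAI_API_KEY environment variable)
async function askOpenAI(prompt) {
  const res = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: 'text-davinci-003', // "davinci" in the Lolo config; ada and friends are cheaper
      prompt,                    // ev.message, i.e. the text from the Slack command
      temperature: 0.3,
      max_tokens: 4000,          // note: prompt + completion must fit in the model's context window
    })
  });
  const data = await res.json();
  return data.choices[0].text;
}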

That is it here. Go back to your graph.

Send back a Message to Slack with the Response URL

Now we need to look for the Slack library function to send back the response. So again open up the function palette in your graph in Lolo and add the Slack Msg (beta) from the list.

Open it up to configure it. We need to set operation to Slack Response.

(Screenshot)

We’ll need to configure this one with a few dynamic values. We have the response URL set on the event object, and OpenAI gives us back its response in {event.response}, so to access the text of the response we need to set {event.response.data.choices[0].text}.
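For context, event.response.data follows the shape of an OpenAI completions response, roughly like the sketch below (abbreviated, with illustrative values), which is why choices[0].text is the path to the generated answer:

// Abbreviated, illustrative shape of event.response.data
const exampleResponse = {
  id: 'cmpl-...',
  object: 'text_completion',
  model: 'davinci',
  choices: [
    { text: '\n\nHello! How can I help you today?', index: 0, finish_reason: 'stop' }
  ],
  usage: { prompt_tokens: 5, completion_tokens: 12, total_tokens: 17 }
};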

See the exact values you want to set below.

(Screenshot)

To understand this: we are getting a payload with the Slack slash command and saving the response URL so we can send an HTTP request to it when we are ready (i.e. when we’ve made the OpenAI request and gotten back the correct data). This library function handles that request for us, but it does need somewhere to post to, i.e. this response URL.
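Under the hood, “Slack Response” just means an HTTP POST with a small JSON body to that response URL. A minimal sketch of roughly what the library function does for us (again assuming Node 18+ for fetch):

// Rough sketch of replying to Slack via the response_url (assumes Node 18+)
async function replyToSlack(responseUrl, text) {
  await fetch(responseUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      response_type: 'in_channel', // show the answer in the channel instead of only to the caller
      text                         // e.g. event.response.data.choices[0].text
    })
  });
}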

We’re done here, so go back, make sure all the nodes are connected, and then Save and Run your application.

(Screenshot)

Go to your Logs to see what’s happening in your Lolo app. To make sure it has been deployed, look for “Listen to port 4000”. Give it at least a minute.

Now we can test it out. Go to Slack and try out your command with some text you want to ask OpenAI.

(Screenshot)

Try whatever you like with it. We haven’t tested everything yet, and it goes without saying that you should be able to do more advanced stuff with this.

❤ Lolo
