Pol Milian
Building a Twitter bot with Go, GPT-3 and AWS

I've recently been learning Go, as you can read in my previous post. What better way to learn a programming language than with a real project?

Everyone is talking about Twitter and its bots. I thought: how difficult would it be to build one? Is it even legal?

The bot I made is like a "wise Indian guru" that tweets love advice every 2 hours. This advice is generated using the GPT-3 API.

How did I do it?

  • We call the GPT-3 API and get the generated response body. We parse it and trim the surrounding whitespace; the resulting string becomes the bot's tweet.
  • If that succeeds, we make a POST request to the Twitter API.

How can we trigger this every 2h?

  • I decided to use an AWS Lambda for this. Once the code is uploaded, it's very easy to trigger it every X amount of time using AWS EventBridge + a cron or rate expression.

What do we need?

All the code for this project is open-source, and is available at: https://github.com/el-pol/lovebot. Feel free to fork it & submit PRs πŸ˜‡

Get the Twitter credentials to tweet from the bot

  1. First of all, you need to apply for a Twitter developer account from your main account: https://developer.twitter.com/en. Then, apply for Elevated Access so that your applications can read & write. Here is a good step-by-step: https://www.extly.com/docs/perfect_publisher/user_guide/tutorials/how-to-auto-post-from-joomla-to-twitter/apply-for-a-twitter-developer-account/#apply-for-a-developer-account
  2. Then, create a new regular Twitter account for the bot. I suggest doing this in another browser so you don't confuse your main account with your bot account.
  3. In the bot account settings, find the Managed by section and point it to your main account. This adds an Automated badge and a "Managed by @yourhandle" label to the bot account.
  4. Once you’ve done steps 1-2-3, follow this guide: https://medium.com/geekculture/how-to-create-multiple-bots-with-a-single-twitter-developer-account-529eaba6a576
    1. What we do here is authorize our main account to control our bot account. That's why we need another authentication PIN. This uses OAuth 1.0a.
    2. The final step is different. As of today, you will need to make a POST request instead of visiting the URL: curl --request POST --url 'https://twitter.com/oauth/access_token?oauth_token=<TOKEN>&oauth_verifier=<PIN>'
    3. After this, you should get the response with the key & the token.
    4. There is a better way to do this with OAuth 2.0; it's more secure but a bit more complicated, since you will need to re-authenticate with this flow every time you make a new tweet. If you have the time and energy, I suggest doing it this way. Here is a good example: https://developer.twitter.com/en/docs/authentication/oauth-2-0/user-access-token
  5. By the end, you should have 4 different secret keys:
    1. Main account consumer key
    2. Main account consumer secret
    3. Bot access token (the one you just generated)
    4. Bot access token secret (also just generated)
  6. You will use all of those: the first two to tell Twitter you have a developer account authorized to read/write, and the last two to tell Twitter that you will be tweeting from your bot account. You are really authenticating in two different steps. If you only wanted to tweet from your main account, you would just need your own consumer key & secret.

Get the OpenAI API credentials

OpenAI is a paid API, so you will need to go to https://beta.openai.com/overview and add your billing details.

Then, you will get an API key that you can authenticate with. You can see the documentation here: https://beta.openai.com/docs/api-reference/making-requests

As you can see, we need to pass our key in the Authorization header.

The code

Believe it or not - the hardest part is over. If you have some experience in programming, what we are going to do next is two sequential POST requests: first to OpenAI to get the text, and next to Twitter to post it from our bot account.

This is the file structure I used:

File structure

  • The fetch package contains our logic for the OpenAI request. We will then import that into main.go
  • In main.go we do the Twitter POST request. This could be refactored into another package for cleaner code.
  • Your .env file should look like this:

Env vars
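The screenshot of the env file isn't reproduced here, but based on the variables the code reads, the .env should look something like this (every value is a placeholder, and the PROMPT text is just an illustrative example):

```
CONSUMER_KEY=your-main-account-consumer-key
CONSUMER_SECRET=your-main-account-consumer-secret
ACCESS_TOKEN=your-bot-access-token
TOKEN_SECRET=your-bot-access-token-secret
OPENAI_API_KEY=your-openai-api-key
PROMPT=Give me a short piece of love advice
```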

AWS Lambda-specific warning

Usually, you would put all the code into func main() and it would just work. But with AWS Lambda, we need a small modification: we have to put our code in a handler function and register it in func main(), as explained here: https://docs.aws.amazon.com/lambda/latest/dg/golang-handler.html

So for a Lambda, all the code you would normally put in main has to go into another function, which you then pass like this:

func main() {
    lambda.Start(HandleRequest)
}

Where HandleRequest is the function where we will execute our main code.

This is important because it tells the Lambda runtime when to start. I made the mistake of not doing that: the code then runs before the Lambda event fires, so it keeps erroring.

With this Lambda handler, we can pass information through the context, so we can do fun things like passing arguments to our code from HTTP query params. For example: https://aws.amazon.com/premiumsupport/knowledge-center/pass-api-gateway-rest-api-parameters/

Code review

In main.go:

package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "strings"
    "time"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/dghubble/oauth1"
    "github.com/el-pol/lovebot/fetch"
    "github.com/joho/godotenv"
)

func HandleRequest(ctx context.Context) (string, error) {
    godotenv.Load()

    consumerKey := os.Getenv("CONSUMER_KEY")
    consumerSecret := os.Getenv("CONSUMER_SECRET")
    accessToken := os.Getenv("ACCESS_TOKEN")
    accessSecret := os.Getenv("TOKEN_SECRET")
    prompt := os.Getenv("PROMPT")

    if consumerKey == "" || consumerSecret == "" || accessToken == "" || accessSecret == "" {
        panic("Missing required environment variable")
    }

    // Will return a trimmed string
    fetched := fetch.GetGenerated(prompt)

    // From here on, Twitter POST API
    config := oauth1.NewConfig(consumerKey, consumerSecret)
    token := oauth1.NewToken(accessToken, accessSecret)

    httpClient := config.Client(oauth1.NoContext, token)

    // Necessary - you don't want to be charged for long Lambdas timing out.
    httpClient.Timeout = time.Second * 10

    path := "https://api.twitter.com/2/tweets"

    body := fmt.Sprintf(`{"text": "%s"}`, fetched)

    bodyReader := strings.NewReader(body)

    response, err := httpClient.Post(path, "application/json", bodyReader)

    if err != nil {
        log.Fatalf("Error when posting to twitter: %v", err)
    }

    defer response.Body.Close()

    // A transport-level success can still be an API error (e.g. 403),
    // so check the status code as well.
    if response.StatusCode < 200 || response.StatusCode >= 300 {
        log.Fatalf("Twitter API returned status: %v", response.Status)
    }

    log.Printf("Response was OK: %v", response)
    return "finished", nil
}

func main() {
    lambda.Start(HandleRequest)
}

The code is very basic, as this is just a fun project for me to learn Go. By the way, if you see any errors or improvements, please let me know.

And then in fetch.go we have:

package fetch

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
    "strings"
    "time"

    "github.com/joho/godotenv"
)

func GetGenerated(prompt string) string {
    godotenv.Load()

    apiKey := os.Getenv("OPENAI_API_KEY")

    if apiKey == "" {
        panic("Missing required environment variable")
    }

    // Create the request body
    jsonBody := fmt.Sprintf(`{"prompt": "%s", "max_tokens": 256, "model": "text-davinci-003"}`, prompt)

    // Create the request
    req, err := http.NewRequest("POST", "https://api.openai.com/v1/completions", bytes.NewBuffer([]byte(jsonBody)))

    if err != nil {
        log.Fatalf("Error when creating request: %v", err)
    }

    // Add the headers
    req.Header.Add("Authorization", fmt.Sprintf("Bearer %s",
        apiKey))
    req.Header.Add("Content-Type", "application/json")

    client := &http.Client{
        Timeout: time.Second * 10,
    }

    resp, err := client.Do(req)

    if err != nil {
        log.Fatalf("Error when sending request: %v", err)
    }

    defer resp.Body.Close()

    // Check the response
    if resp.StatusCode != 200 {
        log.Fatalf("Response was not OK: %v", resp)
    }

    // Parse the response
    var respBody struct {
        Choices []struct {
            Text string `json:"text"`
        } `json:"choices"`
    }

    err = json.NewDecoder(resp.Body).Decode(&respBody)

    if err != nil {
        log.Fatalf("Error when decoding response: %v", err)
    }

    trimmed := strings.TrimSpace(respBody.Choices[0].Text)

    if trimmed == "" {
        log.Fatalln("Result is empty")
    }

    if len(trimmed) >= 280 {
        log.Fatalln("Result is too long")
    }

    return trimmed
}

Packages used

  • github.com/aws/aws-lambda-go/lambda (the Lambda handler runtime)
  • github.com/dghubble/oauth1 (OAuth 1.0a signing for the Twitter request)
  • github.com/joho/godotenv (loading the .env file)

Packaging the code & uploading it to AWS Lambda

For this part, I followed this amazing guide written by Toul: https://dev.to/toul_codes/infrahamburglar-a-cybersecurity-news-bot-built-with-aws-lambda-golang-and-twitter-api-v2-198e

First, we need to build our application and then zip it. This zipped file is what we will upload to AWS Lambda.

To build in Go, write in your terminal:

GOOS=linux GOARCH=amd64 go build -o your-twitter-bot main.go

Once this is done, do:

zip your-twitter-bot.zip your-twitter-bot 

This zip file is what you will upload as the code for your Lambda. Any further modification will force you to repeat these two steps and upload again.

Adding the trigger

Every time this Lambda runs, you will post a new tweet. The way to schedule it is with a cron or rate expression in AWS EventBridge (formerly CloudWatch Events). The reference for EventBridge is here: https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html

Make sure to refer to the docs; AWS cron expressions are a bit different from the standard: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
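For example, either of these EventBridge schedule expressions fires every 2 hours (AWS cron fields are minutes, hours, day-of-month, month, day-of-week, year, and one of the two day fields must be ?):

```
rate(2 hours)
cron(0 */2 * * ? *)
```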

Before adding the trigger, you should test that your Lambda works. If you go into the Test section of your Lambda, you can trigger it manually. We don't care about the content of the event, since our Lambda does not depend on any parameters or arguments, so any test event should fire and work.

If it worked, you should see a new tweet in your bot account.

If everything is OK, you can add the trigger so that the bot tweets every X amount of time with a cron expression.

Finished!

If you’ve reached the end, you should have a working bot.

This is the one I made, give him a follow: https://twitter.com/LoveAdviceWiseG

View of the bot timeline

Top comments (6)

Tarush Nagpal

Awesome! I had a very bad experience with lambda and a node project, I strictly stick to my digitalocean and linode these days.

Also, how are you making sure the responses received from chat gpt are unique? Would it not repeat after a few iterations?

Pol Milian

Hi Tarush! Yes, DigitalOcean > AWS Lambda, for sure. It was my first choice, actually. I just did not want to pay $3-4/month for a 'fun' project.

The responses are very similar if you check the bot's tweets... A further improvement would be to change the prompt in every call; or to store the responses in a DB and make sure they are not equal...

Tarush Nagpal

What if you query Chat GPT for a prompt and then use that prompt to query again? Haha

Andrew Baisden

Wow, what a very enlightening project.

Mladen Stankovic

Nice project! πŸ‘πŸ½

Nikki Eke

Quite enlightening. Thank you for sharing