DEV Community

Joran Quinten

Automated Moderation using OpenAI

To get started, you need to sign up at OpenAI and generate an API key. Next, we'll set up a simple Nuxt project. Use the following commands to scaffold a starter project and add the openai package (the Nuxt docs prefer yarn, but feel free to follow the npm or pnpm steps):

npx nuxi init openai.moderation
cd openai.moderation
yarn install
yarn add openai
yarn dev -o

This should result in the starter project running on (typically) http://localhost:3000. Now open the project in your favourite IDE, and let's get started!

Configure the key

Create an .env file in the root of the project containing this line (replace with your personal key):

OPENAI_API_KEY=ALWAYSKEEPSECRETSTOYOURSELF

Next, open the nuxt.config.ts and make sure it looks like this:

export default defineNuxtConfig({
    runtimeConfig: {
        OPENAI_API_KEY: process.env.OPENAI_API_KEY,
    },
})

Setting up the API

In order to communicate with the OpenAI endpoint, we'll need a server of our own. In Nuxt, adding an API endpoint is as easy as adding a file in a server/api folder. So first create that folder structure and place the following in a file called moderate.post.ts:

export default defineEventHandler(async (event) => {
    const body = await readBody(event)
    return body?.message
})

This will just return whatever we post to the /api/moderate endpoint (Nuxt will set up the routing for us).

The input component

We're going to create a small component that takes text input and hits the endpoint we've set up when submitting, so that we can inspect the response.

Create a Moderate.vue component in a components folder in the root of the project.

Let's start by defining the scripts using the script setup notation:

<script setup lang="ts">
const input = ref("");
const result = ref<any[]>([]);

const onSubmit = async () => {
  const response = await $fetch("/api/moderate", {
    method: "post",
    body: { message: input.value },
  });
  result.value.unshift(response);
  input.value = "";
};
</script>

First, we're setting up refs to hold the input and the result, and we're defining a handler that calls the endpoint we've already set up, appending the input as a message property on the body. (The .value refers to the mutable, reactive reference.)

Now we'll add a template with:

  • A small form containing an input;
  • A submit button that will call the onSubmit handler;
  • A place to display the output of the endpoint

You can style it however you want; styling isn't really the purpose of this tutorial, though. Just go ahead and paste this below the script tag:

<template>
  <div>
    <div class="input">
      <input type="text" v-model="input" />
      <button type="submit" @click="onSubmit">Validate moderation</button>
    </div>
    <div class="output">
      <ul>
        <li v-for="i in result" :key="i.id">
          {{ i.results }}
        </li>
      </ul>
    </div>
  </div>
</template>

Now save this file and load the component in app.vue by replacing its contents with this:

<template>
  <div>
    <Moderate />
  </div>
</template>

You should now see the component running on your localhost. Once you insert some text and hit submit, it should be returned by our own endpoint and show up in the component as a list item.

Adding intelligence

Finally, we'll update the moderate.post.ts file to make use of the OpenAI capabilities. The moderation API is one of the more straightforward ones, so it's a good one to get started with. Instead of returning body.message immediately, we'll first configure the OpenAI client by instantiating it with the key, and then query the moderation endpoint with the contents of the message. Since we're awaiting the OpenAI call, the handler needs to be an async function. The file should look like this:

import { Configuration, OpenAIApi } from 'openai';

// it's an async function now!
export default defineEventHandler(async (event) => {
    const body = await readBody(event)

    // read the key from the runtime config we set up earlier
    const { OPENAI_API_KEY } = useRuntimeConfig()

    // set up the configuration
    const configuration = new Configuration({
        apiKey: OPENAI_API_KEY,
    });

    // instantiate the OpenAI client
    const openaiClient = new OpenAIApi(configuration);

    // make the call to the moderation endpoint
    const res = await openaiClient.createModeration({
        input: body?.message,
    });

    // return the result
    return res.data
})

That's it! You now have the opportunity to test this out by being very aggressive towards the input field. You should see an assessment of your input across various categories and scores, similar to this example:

{
  "id": "modr-XXXXX",
  "model": "text-moderation-001",
  "results": [
    {
      "categories": {
        "hate": false,
        "hate/threatening": false,
        "self-harm": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "hate": 0.18805529177188873,
        "hate/threatening": 0.0001250059431185946,
        "self-harm": 0.0003706029092427343,
        "sexual": 0.0008735615410842001,
        "sexual/minors": 0.0007470346172340214,
        "violence": 0.0041268812492489815,
        "violence/graphic": 0.00023186142789199948
      },
      "flagged": false
    }
  ]
}
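In an application, you'd typically act on the flagged field, or apply your own thresholds to the category_scores. Here's a minimal sketch of that idea; the ModerationResult type mirrors the response shape above, and the 0.5 threshold is an arbitrary assumption for illustration, not an OpenAI recommendation:

```typescript
// Decide whether a moderation result should block publishing.
// The 0.5 threshold is an arbitrary assumption, not an official value.
type ModerationResult = {
  flagged: boolean;
  category_scores: Record<string, number>;
};

const shouldBlock = (result: ModerationResult, threshold = 0.5): boolean =>
  result.flagged ||
  Object.values(result.category_scores).some((score) => score > threshold);

// The scores from the example response stay well below the threshold
const example: ModerationResult = {
  flagged: false,
  category_scores: { hate: 0.188, violence: 0.004 },
};

console.log(shouldBlock(example)); // false
```

Lowering the threshold makes the check stricter than OpenAI's own flagged verdict, which can be useful if your platform wants a more conservative cut-off.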

If you're done with this example, one of the fun ways to play around with OpenAI is the image generation API. With the basis we've laid, you should be able to either modify the existing code or build your own integration in a framework you prefer.

Using these sorts of tools can help you a lot when dealing with publishing user-generated content. Bear in mind, though, that this is just an example and not a real-world implementation. Also, as OpenAI suggests, always keep some human eyes on hand when dealing with these sorts of things. A valid use case for this example would be to preemptively flag submissions before publishing.

Using AI to reduce the load on humans without completely removing them is sensible and a good use of current capabilities. AI, just like humans, still has flaws, but we can utilise it to assist us in simple tasks.
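A human-in-the-loop flow could, for instance, publish confidently clean submissions automatically, reject clear violations, and queue everything in between for review. A hedged sketch of that triage, where both bounds (0.2 and 0.8) are arbitrary assumptions for illustration:

```typescript
// Sketch of a human-in-the-loop triage based on moderation category scores.
// The bounds (0.2 and 0.8) are arbitrary assumptions, tune them per platform.
type Verdict = "publish" | "review" | "reject";

const triage = (scores: number[], lower = 0.2, upper = 0.8): Verdict => {
  const max = Math.max(...scores, 0);
  if (max >= upper) return "reject"; // clearly over the line
  if (max >= lower) return "review"; // ambiguous: human eyes needed
  return "publish"; // confidently clean
};

console.log(triage([0.01, 0.05])); // "publish"
console.log(triage([0.45]));       // "review"
console.log(triage([0.93]));       // "reject"
```

Only the middle band reaches a human, which is exactly the load reduction described above.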

Top comments (4)


Konstantin BIFERT

Nice article!
Maybe consider adding a bit of highlight to the code for it to be even more readable! πŸ‘ŒπŸ»

Also, I was wondering if you got any kind of false positives? Some things being moderated while they are totally fine?

Joran Quinten

To be fair, I didn't do enough tries to validate it across any meaningful number of attempts. That's why you keep the human in the loop. Thanks for the feedback!
