
Ricardo Čerljenko for Lloyds digital


Use Laravel with OpenAI to validate inappropriate content

Recently I stumbled across the OpenAI Moderations API, which lets you query OpenAI to detect whether input text contains inappropriate content such as hate, violence, etc.

The API is completely free - you just need to create an OpenAI account and issue a fresh API token.
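
Under the hood it's a single HTTP endpoint. Here's a minimal sketch of a raw call using Laravel's Http client (assuming your token is in $apiKey - the input text is just a placeholder):

use Illuminate\Support\Facades\Http;

// Ask the Moderations API to score a piece of text.
$response = Http::withToken($apiKey)
    ->post('https://api.openai.com/v1/moderations', [
        'input' => 'Some user-submitted text...',
    ]);

// Each result carries a boolean "flagged" plus per-category scores
// (hate, violence, sexual content, self-harm, ...).
$flagged = $response->json('results.0.flagged');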

Since I'm primarily a Laravel PHP Framework developer, I decided to make a package that provides a way to validate request payload fields against the OpenAI Moderations API.

Installation

Standard Composer package installation:

composer require rcerljenko/laravel-openai-moderation

Usage

  1. Publish config and translation files.
php artisan vendor:publish --provider="RCerljenko\LaravelOpenAIModeration\LaravelOpenAIModerationServiceProvider"
  2. Set your OpenAI API key and enable the package via the newly created config file => config/openai.php (see the sketch below).
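
The exact option names live in the published file; purely as an illustration (the keys below are assumptions, not the package's documented config), it typically boils down to something like:

<?php

// config/openai.php - illustrative sketch only; the key names here are
// assumptions, so check the published file for the real options.
return [
    'enabled' => env('OPENAI_ENABLED', true),
    'api_key' => env('OPENAI_API_KEY'),
];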

  3. Use the provided rule with your validation rules:

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;
use RCerljenko\LaravelOpenAIModeration\Rules\OpenAIModeration;

class StoreText extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     */
    public function authorize(): bool
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     */
    public function rules(): array
    {
        return [
            'text' => ['required', 'string', new OpenAIModeration],
        ];
    }
}
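
From there it behaves like any other Laravel validation rule: type-hint the form request and validation (including the moderation check) runs before your controller method is ever called. A minimal usage sketch - the TextController below is hypothetical:

<?php

namespace App\Http\Controllers;

use App\Http\Requests\StoreText;
use Illuminate\Http\RedirectResponse;

class TextController extends Controller
{
    /**
     * Validation (including the moderation check) has already
     * passed by the time this method runs.
     */
    public function store(StoreText $request): RedirectResponse
    {
        $text = $request->validated('text');

        // ... persist the already-moderated text.

        return back()->with('status', 'Saved!');
    }
}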

And that's it! Your content can now be validated with the powerful (and yet free) OpenAI Moderations API.

Thank you for reading this! If you've found this interesting, consider leaving a ❤️, 🦄, and of course, share it and comment with your thoughts!

Lloyds is available for partnerships and open to new projects. If you want to know more about us, check us out.

Also, don’t forget to follow us on Instagram and Facebook!
