Seena Khan
Build a Responsible AI Career Counselor Copilot with Azure Content Safety Service

Hello tech enthusiasts! Here is another blog of mine, on building a Responsible AI Career Counselor Copilot with the Azure Content Safety service.

In today’s rapidly evolving digital landscape, the integration of artificial intelligence (AI) into various sectors is transforming the way we live and work. One of the most promising applications of AI is in career counseling, where intelligent systems can provide personalized guidance and support to individuals navigating their professional journeys. However, with great power comes great responsibility. Ensuring that AI-driven career counseling tools are ethical, unbiased, and safe is paramount.

By leveraging the Azure Content Safety Service, we can build a Responsible AI Career Counselor Copilot that not only offers valuable career advice but also adheres to the highest standards of content safety and ethical AI practices. This blog explores how Azure’s robust content safety features can be employed to create an AI career counselor that is both effective and responsible, ensuring a positive and secure user experience.

The objectives of this blog are as follows:

Tasks:
  • Create an Azure Content Safety resource.
  • Create a copilot by using Microsoft Copilot Studio.
  • Enable generative AI.
  • Create topics.
  • Add knowledge.
  • Test and publish in a demo website.
Prerequisites:
  • Access to Microsoft Azure.
  • Access to Microsoft Copilot Studio.
  • Basic understanding of Microsoft Power Platform.
  • Experience in administering solutions in Microsoft Azure is preferred.

Here’s a step-by-step guide to help you build a Responsible AI Career Counselor Copilot.

Task 1: Create an Azure Content Safety resource.

In this task, you create an Azure Content Safety resource.

  • Head over to the Microsoft Azure portal.
  • In the Azure global search, look for Content safety, then select the Content Safety service from the list.

  • Select + Create to create the content safety resource.

  • Enter the following details and select Review + create.

  • Once the resource is created, select Go to resource and expand Resource Management, then select Keys and Endpoint. Copy one of the keys and the endpoint URL of the resource and paste them in a notepad for later use.

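Before wiring this into Copilot Studio, it can help to see the raw shape of the request the copilot will eventually send. The Python sketch below builds (but does not send) the analyze-text request using the key and endpoint copied above; the resource name, the key placeholder, and the api-version are assumptions and may differ for your resource.

```python
import json
from urllib.request import Request

# Hypothetical placeholders: substitute the endpoint and key you copied
# from Keys and Endpoint in the Azure portal.
CS_ENDPOINT = "https://my-content-safety.cognitiveservices.azure.com"
CS_KEY = "<your-content-safety-key>"

# The analyze-text route; the api-version shown is an assumption and may
# differ for your resource.
url = f"{CS_ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01"

# The same body and headers the Send HTTP request node will use later.
body = json.dumps({"text": "How do I move into a data science career?"}).encode("utf-8")

req = Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": CS_KEY,
    },
    method="POST",
)

print(req.method, req.full_url)
```

Sending this request with a real key returns per-category severity scores, which is exactly what the copilot will branch on in the later steps.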
Task 2: Create Copilot by using Microsoft Copilot Studio.

  • Go to Microsoft Copilot Studio, select Agents, and then select + New agent.

  • Enter Name, Description, Instructions and Knowledge then select Create.

  • Select Settings from the top right corner.

  • Select Generative AI, then select Generative (preview) - Use generative AI to respond with the best combination of actions, topics, and knowledge. Then select Save.

  • Close the window and select Topics, then select + Add topic and choose Create from description with Copilot.

  • Fill in Name your topic and Create a topic to ...., then select Create.

  • To add a variable to the career topic, select Add node, then Variable management, then Set a variable value.

  • Under Custom, click Create new.

  • Select Variable properties and enter the variable name varUserQtn. Under Usage, select Topic (limited scope).

  • Click the To value field and, under the System tab, select Activity.Text.

  • Add the following step to call the Azure Content Safety API, which validates the user query and checks for harmful content:

    • Select Advanced and then select Send HTTP request.
  • After adding the HTTP request node, enter the following details:

    URL: Enter the Content Safety endpoint URL that you copied earlier, followed by the text-analysis route (for example, <your-endpoint>/contentsafety/text:analyze?api-version=2023-10-01).

    Method: POST.

  • Click Edit under Headers and Body, then click + Add.

  • Enter the key Content-Type and the value application/json, then click + Add.

  • Set the key to Ocp-Apim-Subscription-Key and set the value to the Content Safety key that you obtained from Azure.

  • Scroll down, select JSON content under the Body section, and select Edit formula. Enter the following code as the body:
{
    text: Topic.varUserQtn
}

  • Set Response data to From sample data, click Get schema from sample JSON, and enter the following JSON code:
{
      "dangerouscontent": [],
      "categories": [
          {
              "category": "Hate",
              "severity": 2
          },
          {
              "category": "SelfHarm",
              "severity": 0
          },
          {
              "category": "Sexual",
              "severity": 0
          },
          {
              "category": "Violence",
              "severity": 0
          }
      ]
  }

Then select Confirm.
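To see how this sample schema maps onto per-category severities, here is a short Python sketch that loads the exact JSON above and flattens the categories array into a name-to-severity lookup; the Power Fx formulas in the following steps perform the same per-category lookup.

```python
import json

# The sample response schema from the step above, verbatim.
sample = json.loads("""
{
  "dangerouscontent": [],
  "categories": [
    {"category": "Hate", "severity": 2},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 0}
  ]
}
""")

# Flatten the categories array into a name -> severity lookup table.
severity = {c["category"]: c["severity"] for c in sample["categories"]}
print(severity)  # {'Hate': 2, 'SelfHarm': 0, 'Sexual': 0, 'Violence': 0}
```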

  • Create a new variable called varQtnOutput to store the content safety API output by clicking on Select a variable under Save response as.

  • Select Create new variable and name it varQtnOutput.

  • Create a variable named varSeverity and add the below formula under formula tab:

First(Filter(Topic.varQtnOutput.categories, category = "Hate")).severity

Then click Insert.

  • Create a variable named varSeverityself and add the below formula under formula tab:
First(Filter(Topic.varQtnOutput.categories, category = "SelfHarm")).severity
  • Create a variable named varSeveritysexual and add the below formula under formula tab:
First(Filter(Topic.varQtnOutput.categories, category = "Sexual")).severity
  • Create another variable named varSeverityviolence and add the below formula:
First(Filter(Topic.varQtnOutput.categories, category = "Violence")).severity
  • To find out whether there is a content safety problem of any kind, create a new variable called varSafe and enter the below formula:
 If(
     Topic.varSeverity = 0 && 
     Topic.varSeverityself = 0 && 
     Topic.varSeveritysexual = 0 && 
     Topic.varSeverityviolence = 0,
     "Safe",
     "Unsafe"
 )
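As a sanity check, the varSafe logic can be mirrored in a few lines of Python, assuming the four severity values have already been extracted as in the previous steps:

```python
def classify(severities: dict) -> str:
    """Return 'Safe' only when every tracked category reports severity 0,
    mirroring the varSafe Power Fx formula."""
    tracked = ("Hate", "SelfHarm", "Sexual", "Violence")
    return "Safe" if all(severities.get(c, 0) == 0 for c in tracked) else "Unsafe"

print(classify({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 0}))  # Safe
print(classify({"Hate": 2, "SelfHarm": 0, "Sexual": 0, "Violence": 0}))  # Unsafe
```

A single non-zero severity in any category is enough to mark the query Unsafe, which is exactly the behavior the condition node below relies on.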
  • Add a condition action to check the varSafe variable.

  • On the condition action, select the variable varSafe and, for Is equal to, enter Safe.

  • Add the generative answers node to the positive branch by selecting Generative Answers from the Advanced section.

  • Select the variable varUserQtn and click Edit under Data sources, then click Add knowledge.

  • In the negative flow branch, add a message block.

  • Then enter the below expression to check the four severity variables and show the appropriate message to the user:
If(
      Topic.varSeverity > 0,
      "There is hate speech in the question. Kindly rephrase your query.",
      ""
  ) & 
  If(
      Topic.varSeverityself > 0,
      "Self-harm is indicated by the question. Please get help right away or get in touch with an expert.",
      ""
  ) &
  If(
      Topic.varSeveritysexual > 0,
      "There is improper sexual content in the query. Kindly reword your query.",
      ""
  ) &
  If(
      Topic.varSeverityviolence > 0,
      "There are references to violence in the question. Kindly rephrase your query.",
      ""
  )
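The same branching can be sketched in Python; this hypothetical helper concatenates one warning per flagged category, just as the & operator joins the four If(...) results in the Power Fx expression above.

```python
# One warning per category, using the same sentences as the Power Fx
# expression above.
MESSAGES = {
    "Hate": "There is hate speech in the question. Kindly rephrase your query.",
    "SelfHarm": "Self-harm is indicated by the question. Please get help right away or get in touch with an expert.",
    "Sexual": "There is improper sexual content in the query. Kindly reword your query.",
    "Violence": "There are references to violence in the question. Kindly rephrase your query.",
}

def build_warning(severities: dict) -> str:
    # Emit a sentence for every category whose severity is above zero;
    # a fully safe query yields an empty string.
    return "".join(MESSAGES[c] for c in MESSAGES if severities.get(c, 0) > 0)

print(build_warning({"Hate": 2, "SelfHarm": 1, "Sexual": 0, "Violence": 0}))
```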

With that, we have finished configuring the copilot and ensured that content safety is checked before contextual responses are produced from the career website.

Task 3: Test and publish in a demo website.
