
Ryan Nazareth for AWS Community Builders

What movie to watch next? Amazon Personalize to the rescue - Part 2

In the first part of this blog, we used AWS Step Functions to orchestrate a workflow that ran a Glue job on data from S3, triggered an import job in Personalize, and trained a model (recipe). In this section, we will focus on deploying the model and getting batch and real-time recommendations for movies. The architecture is shown in the screenshot below. As before, all scripts referenced in the code snippets in this blog can be found in my GitHub repository.

personalize-recommendation-workflow

With the User-Personalization recipe, Amazon Personalize generates scores for items based on a user's interaction data and metadata. These scores represent the relative certainty that Amazon Personalize has in whether the user will interact with the item next; higher scores represent greater certainty, as described in the documentation. Amazon Personalize scores all the items in your catalog relative to each other on a scale from 0 to 1 (both inclusive), so that the total of all scores equals 1. For example, if you're getting movie recommendations for a user and there are three movies in the Items dataset, their scores might be 0.6, 0.3, and 0.1. Similarly, if you have 1,000 movies in your inventory, the highest-scoring movies might have very small scores (the average score would be 0.001) but, because scoring is relative, the recommendations are still valid. Please refer to the docs for further details.
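As a toy illustration of this relative scoring (not the actual Personalize algorithm), normalising a set of raw affinity scores so they sum to 1 reproduces the three-movie example above:

```python
# Toy sketch of relative scoring: raw affinities are normalised so the
# scores sum to 1; with N items the average score is therefore 1/N.
raw_scores = [6.0, 3.0, 1.0]  # hypothetical raw affinities for three movies

total = sum(raw_scores)
normalised = [s / total for s in raw_scores]

print(normalised)   # [0.6, 0.3, 0.1]
print(1 / 1000)     # with 1,000 items the average score is 0.001
```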

Personalize Batch Inference Job

The CloudFormation stack created in part 1 should have deployed the necessary resources to run the batch job, i.e. a Lambda function which is triggered when input data is added to S3 and creates the batch job in Personalize. The Lambda function will automatically trigger either a batch segment job or a batch inference job depending on the filename. A users.json file is assumed to contain the user ids for which we require item recommendations:

{"userId": "4638"}
{"userId": "663"}
{"userId": "94"}
{"userId": "3384"}
{"userId": "1030"}
{"userId": "162540"}
{"userId": "15000"}
{"userId": "13"}
{"userId": "50"}
{"userId": "80000"}
{"userId": "20000"}
{"userId": "110000"}
{"userId": "5000"}
{"userId": "9000"}
{"userId": "34567"}

This will trigger a batch inference job using the solution version ARN specified as a Lambda environment variable (defined through the CloudFormation stack parameters). We will be using the solution version trained with the USER_PERSONALIZATION recipe. An items.json file, on the other hand, will trigger a batch segment job, and should be in the following format:

{"itemId": "1240"}
{"itemId": "33794"}
{"itemId": "89745"}
{"itemId": "89747"}
{"itemId": "89753"}
{"itemId": "1732"}
{"itemId": "8807"}
{"itemId": "7153"}
{"itemId": "44"}
{"itemId": "165"}
{"itemId": "307"}
{"itemId": "306"}
{"itemId": "457"}
{"itemId": "586"}
{"itemId": "588"}
{"itemId": "589"}
{"itemId": "596"}

This will return, for each item, the list of users with the highest probabilities of interacting with it. Note that a batch segment job requires the solution to be trained with a USER_SEGMENTATION recipe and will throw an error if another recipe is used. This would require training a new solution with that recipe and is beyond the scope of this tutorial. The Lambda config should look as below, with the event trigger set as S3.
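The dispatch logic in that first Lambda can be sketched roughly as below. This is an illustrative reconstruction, not the exact code from the repository: the environment variable names and handler structure are assumptions.

```python
import os
import time

def job_kind(key):
    """Decide which Personalize batch job an S3 key should trigger."""
    if key.endswith("users.json"):
        return "inference"   # item recommendations per user
    if key.endswith("items.json"):
        return "segment"     # user segments per item (USER_SEGMENTATION recipe)
    return None

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime

    personalize = boto3.client("personalize")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    job_name = f"batch-job-{int(time.time())}"  # Unix timestamp suffix
    args = dict(
        jobName=job_name,
        solutionVersionArn=os.environ["SOLUTION_VERSION_ARN"],
        roleArn=os.environ["PERSONALIZE_ROLE_ARN"],
        jobInput={"s3DataSource": {"path": f"s3://{bucket}/{key}"}},
        jobOutput={"s3DataDestination": {"path": f"s3://{bucket}/movie-lens/batch/results/"}},
    )
    if job_kind(key) == "inference":
        personalize.create_batch_inference_job(**args)
    elif job_kind(key) == "segment":
        personalize.create_batch_segment_job(**args)
```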

A second Lambda function runs a transform operation when the results from the batch job are added to S3 by Personalize. If successful, a notification is sent to an SNS topic configured with email as the endpoint, to alert us when the workflow completes. The output of the batch job from Personalize is JSON Lines in the following format:

{"input":{"userId":"1"},"output":{"recommendedItems":[....],"scores":[....]},"error":null}
{"input":{"userId":"2"},"output":{"recommendedItems":[....],"scores":[...]},"error":null}
......
.....

With the transformation, we intend to return a structured dataset serialised in parquet format (with snappy compression), with the following schema:

  • userID: integer
  • Recommendations: string

The movie id is mapped to the title, associated genre, and release year for each user, as below. Each recommendation is separated by a | delimiter.

   userId                           Recommendations
0    1       Movie Title (year) (genre) | Movie Title (year) (genre) | ....
1    2       Movie Title (year) (genre) | Movie Title (year) (genre) | ....
......

This function also uses the AWS-managed Data Wrangler Lambda layer, so that the pandas and numpy libraries are available. The configuration should look like below, with the Lambda layer attached and SNS as the destination.
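The core of the transformation can be sketched as below, assuming a movies lookup that maps each item id to its "Title (year) (genre)" string built from the items metadata; the helper name and signature are illustrative, not the repository's exact code.

```python
import json

import pandas as pd

def transform(lines, movies):
    """Flatten Personalize batch output (JSON Lines) into the
    userId/Recommendations schema, joining titles with ' | '."""
    rows = []
    for line in lines:
        rec = json.loads(line)
        items = rec["output"]["recommendedItems"]
        rows.append({
            "userId": int(rec["input"]["userId"]),
            # fall back to the raw id if a movie is missing from the lookup
            "Recommendations": " | ".join(movies.get(i, i) for i in items),
        })
    return pd.DataFrame(rows, columns=["userId", "Recommendations"])
```

In the Lambda itself, the Data Wrangler layer's awswrangler package can then write the resulting DataFrame to S3 as snappy-compressed parquet.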

To trigger the batch inference job workflow, copy the sample users.json batch data to the S3 path below:

aws s3 cp datasets/personalize/ml-25m/batch/input/users.json s3://recommendation-sample-data/movie-lens/batch/input/users.json

This creates a batch inference job with the Unix timestamp appended to the end of the job name. We should receive a notification via email when the entire workflow completes. The outputs of the batch job and subsequent transformation should be visible in the bucket under the keys movie-lens/batch/results/inference/users.json.out and
movie-lens/batch/results/inference/transformed.parquet respectively. These have also been copied and stored here.

    userId                                    Recommendations
0    15000  Kiss the Girls (1997) (Crime) | Scream (1996) ...
1   162540  Ice Age 2: The Meltdown (2006) (Adventure) | I...
2     5000  Godfather, The (1972) (Crime) | Star Wars: Epi...
3       94  Jumanji (1995) (Adventure) | Nell (1994) (Dram...
4     4638  Inglourious Basterds (2009) (Action) | Watchme...
5     9000  Die Hard 2 (1990) (Action) | Lethal Weapon 2 (...
6      663  Crow, The (1994) (Action) | Nightmare Before C...
7     1030  Sister Act (1992) (Comedy) | Lethal Weapon 4 (...
8     3384  Ocean's Eleven (2001) (Crime) | Matrix, The (1...
9    34567  Lord of the Rings: The Fellowship of the Ring,...
10      50  Grand Budapest Hotel, The (2014) (Comedy) | He...
11   80000  Godfather: Part II, The (1974) (Crime) | One F...
12  110000  Manhattan (1979) (Comedy) | Raging Bull (1980)...
13      13  Knocked Up (2007) (Comedy) | Other Guys, The (...
14   20000  Sleepless in Seattle (1993) (Comedy) | Four We...

Creating a Campaign for realtime recommendations

A campaign is a deployed solution version (trained model) with provisioned dedicated transaction capacity for creating real-time recommendations for your application users. After you complete preparing and importing data and creating a solution, you are ready to deploy your solution version by creating an Amazon Personalize campaign. If you are only getting batch recommendations, you don't need to create a campaign.

$ python projects/personalize/deploy_solution.py --campaign_name MoviesCampaign --sol_version_arn <solution_version_arn> --mode create

2022-07-09 21:12:08,412 - deploy - INFO - Name: MoviesCampaign
2022-07-09 21:12:08,412 - deploy - INFO - ARN: arn:aws:personalize:........:campaign/MoviesCampaign
2022-07-09 21:12:08,412 - deploy - INFO - Status: CREATE PENDING

An additional arg --config can be passed to set the explorationWeight and explorationItemAgeCutOff parameters for the User-Personalization recipe. These parameters default to 0.3 and 30.0 respectively if not passed (as in the previous example).
To set explorationWeight and explorationItemAgeCutOff to 0.6 and 100 respectively, run the script as below:

$ python projects/personalize/deploy_solution.py --campaign_name MoviesCampaign --sol_version_arn <solution_version_arn> \
--config "{\"itemExplorationConfig\":{\"explorationWeight\":\"0.6\",\"explorationItemAgeCutOff\":\"100\"}}" --mode create

2022-07-09 21:12:08,412 - deploy - INFO - Name: MoviesCampaign
2022-07-09 21:12:08,412 - deploy - INFO - ARN: arn:aws:personalize:........:campaign/MoviesCampaign
2022-07-09 21:12:08,412 - deploy - INFO - Status: CREATE PENDING
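Under the hood, deploy_solution.py presumably ends up calling the CreateCampaign API. A minimal boto3 sketch of that call is below; the parameter values are illustrative, and note that the exploration settings are passed as strings:

```python
# Exploration settings for the User-Personalization recipe; the API
# expects the numeric values as strings.
campaign_config = {
    "itemExplorationConfig": {
        "explorationWeight": "0.6",
        "explorationItemAgeCutOff": "100",
    }
}

def create_campaign(personalize, name, solution_version_arn):
    """Create a campaign; `personalize` is a boto3.client('personalize')."""
    return personalize.create_campaign(
        name=name,
        solutionVersionArn=solution_version_arn,
        minProvisionedTPS=1,  # minimum provisioned throughput
        campaignConfig=campaign_config,
    )
```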

Setting up API Gateway with Lambda Proxy Integration

You can also get real-time recommendations from Amazon Personalize using the campaign created earlier to serve movie recommendations. To increase recommendation relevance, include contextual metadata for a user, such as their device type or the time of day, when you get recommendations or get a personalized ranking. The API Gateway integration with the Lambda backend should already be configured if CloudFormation ran successfully. We have configured the method request to accept a query string parameter user_id and defined a model schema. An API method can be integrated with Lambda using one of two integration methods: Lambda proxy integration or Lambda non-proxy (custom) integration.

By default, we use Lambda Proxy Integration when creating the resource in CloudFormation, which allows the client to call a single lambda function in the backend. When a client submits a request, API Gateway sends the raw request to lambda without necessarily preserving the order of the parameters. This request data includes the request headers, query string parameters, URL path variables, payload, and API configuration data as detailed here.

We could also use Lambda non-proxy integration by setting the template parameter APIGatewayIntegrationType to AWS. The difference from the proxy integration method is that we also need to configure a mapping template to map the incoming request data to the integration request, as required by the backend Lambda function. In the CloudFormation template personalize_predict.yaml, this is already predefined in the RequestTemplates property of the ApiGatewayRootMethod resource, which translates the user_id query string parameter to the user_id property of the JSON payload. This is necessary because, with a non-proxy integration, input to the Lambda function must be supplied in the request body. However, as the default type is set to AWS_PROXY, the mapping template is ignored as it is not required.
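With the proxy integration, the backend Lambda reads user_id straight from the raw event and must return a statusCode/body envelope. A sketch of such a handler is below; the environment variable name and the contextual-metadata key are assumptions, and the context keys must match the metadata columns in your own Interactions schema.

```python
import json
import os

def proxy_response(payload, status=200):
    """Wrap a payload in the envelope API Gateway proxy integration expects."""
    return {"statusCode": status, "body": json.dumps(payload)}

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime

    user_id = event["queryStringParameters"]["user_id"]
    runtime = boto3.client("personalize-runtime")
    response = runtime.get_recommendations(
        campaignArn=os.environ["CAMPAIGN_ARN"],
        userId=user_id,
        numResults=10,
        # optional contextual metadata to improve relevance (assumed key)
        context={"DEVICE": "mobile"},
    )
    return proxy_response(response["itemList"])
```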

api-gateway-get-method-execution

The API endpoint URL to be invoked should be visible from the console, under the stage tab.

api-gateway-dev-stage-console

Invocation and Monitoring with AWS X-Ray

The API can be tested by opening a browser and typing the URL, along with the query string parameters, into the address bar. For example, https://knmel67a1g.execute-api.us-east-1.amazonaws.com/dev?user_id=5 will generate recommendations for the user with id 5.
For monitoring, we have also configured API Gateway to send traces to X-Ray and logs to CloudWatch. Since the API is integrated with a single Lambda function, you will see nodes in the service map containing information about the overall time spent and other performance metrics in the API Gateway service, the Lambda service, and the Lambda function. The timeline shows the hierarchy of segments and subsegments. Further details on request/response times and faults/errors can be found by clicking on each segment/subsegment in the timeline. For more information, refer to the AWS documentation on using AWS X-Ray service maps and trace views with API Gateway.
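The same GET request can also be issued programmatically. The small helper below just builds the invoke URL; the base URL shown is the example from the console screenshot and must be replaced with your own stage URL before fetching it:

```python
from urllib.parse import urlencode

def invoke_url(base, user_id):
    """Build the stage invoke URL with the user_id query string parameter."""
    return f"{base}?{urlencode({'user_id': user_id})}"

url = invoke_url("https://knmel67a1g.execute-api.us-east-1.amazonaws.com/dev", 5)
# Fetching it would then look like:
#   import json, urllib.request
#   recommendations = json.load(urllib.request.urlopen(url))
```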

Xrayconsole-APIGateway-lambda
Xrayconsole-trace-timeline
