prasanth mathesh for AWS Community Builders

Machine Learning Predictions using AWS Redshift ML

Introduction

In the previous article, we saw how to train models and infer predictions with the bring-your-own-algorithm (BYOA) approach. In a standard ML pipeline, the features used for inference are either in raw format or the output of a feature engineering pipeline stored in a feature store. Redshift ML enables creating, training, and deploying models using SQL, and predictions can be inferred in SQL as well. This makes it possible to build a feature store in the Redshift database and to infer and share predictions without much overhead or additional orchestration services.

Redshift ML

Redshift ML provides the options below using SQL; a sketch of the corresponding statements follows the list.

  1. Create, train, and deploy a model
  2. Localize the model in the Redshift database
  3. Infer predictions from the deployed model
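
To make this concrete, here is a minimal sketch of the create-train-deploy and inference steps. The table, column, function, role, and bucket names are all hypothetical placeholders.

```sql
-- Create, train, and deploy in one statement: Redshift ML exports the
-- training data to S3, trains in SageMaker, and compiles the resulting
-- model back into the cluster as a SQL function.
CREATE MODEL customer_churn_model
FROM (SELECT age, tenure, monthly_charges, churn
      FROM customer_activity)                -- training set
TARGET churn                                 -- label column to predict
FUNCTION predict_customer_churn              -- inference function to create
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');

-- Once training completes, inference is plain SQL against the local model.
SELECT customer_id,
       predict_customer_churn(age, tenure, monthly_charges) AS churn_prediction
FROM customer_activity;
```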

Additionally, users can bring their own model (BYOM) trained in Amazon SageMaker. Inference can be local, running inside the Redshift cluster, or remote through a SageMaker endpoint.
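
Both BYOM modes are sketched below, assuming a completed SageMaker training job and a live endpoint; the job, endpoint, function, and role names are hypothetical.

```sql
-- BYOM with local inference: import the artifacts of a completed
-- SageMaker training job and run the model inside the cluster.
CREATE MODEL customer_churn_byom
FROM 'sagemaker-training-job-name'               -- completed training job
FUNCTION predict_churn_byom (INT, INT, DECIMAL)  -- input column types
RETURNS INT
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');

-- BYOM with remote inference: queries are forwarded to a SageMaker endpoint.
CREATE MODEL customer_churn_remote
FUNCTION predict_churn_remote (INT, INT, DECIMAL)
RETURNS INT
SAGEMAKER 'customer-churn-endpoint'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole';
```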

The reference architecture for BYOM with local inference is shown below.

[Figure: Reference architecture for BYOM with local inference]

Local inference saves the infrastructure cost of batch transform jobs and removes the overhead of standing up endpoints for the models, especially when they would otherwise be served in real-time mode. The Redshift cluster can scale, and predictions can be shared via the Redshift Data API. A materialized view or table of predictions can be created and queried from web applications.
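
As a sketch of that serving pattern (reusing the hypothetical predict_customer_churn function from earlier), predictions can be precomputed into a table that web applications read through the Redshift Data API:

```sql
-- Precompute predictions so downstream readers never invoke the model.
CREATE TABLE churn_predictions AS
SELECT customer_id,
       predict_customer_churn(age, tenure, monthly_charges) AS churn_flag
FROM customer_activity;

-- A web app can then fetch results through the Redshift Data API,
-- e.g. an ExecuteStatement call running:
SELECT * FROM churn_predictions WHERE churn_flag = 1;
```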
