Using Runway's "Gen-3 Alpha Turbo" API to Generate AI Videos

Introduction

Runway is a platform offering the video generation AI Gen-3 Alpha, which I previously covered in an article titled "Converting Images to Video Using the Video Generation AI 'Gen-3 Alpha': The Results Were So Natural, It Was Almost Scary."

Now, Runway has introduced a more cost-effective version called Gen-3 Alpha Turbo, which is accessible through a web API. In this post, I'll explore how to use it.

We are excited to announce the launch of our new API, providing developers with access to our Gen-3 Alpha Turbo model for integration into various applications and products. This release represents a significant step forward in making advanced AI capabilities more accessible to a broader range of developers, businesses and creatives.

Reference: Runway News | Introducing the Runway API for Gen-3 Alpha Turbo

Waitlist Registration

To access the API, you currently need to register on the waitlist.

Fill out the Google Form with your email, name, company, intended use case, and estimated number of videos you'll generate monthly.

After submitting, wait for authorization—I received access about 10 days after applying. Once approved, you'll be asked to enter your organization name when logging in with your registered email.


Purchasing Credits

Video generation costs $0.25 for a 5-second video and $0.50 for a 10-second video.


You can purchase credits on the Billing page, with a minimum purchase of $10 (1,000 credits). For $10, you can create either 40 five-second videos or 20 ten-second videos.

For the latest pricing, check the Price page.
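To put the pricing in concrete terms, here is a tiny sketch of the credit math. The per-video credit counts are derived from the dollar prices above ($10 = 1,000 credits, so $0.25 = 25 credits and $0.50 = 50 credits):

# Rough cost math based on the rates above
CREDITS_PER_DOLLAR = 100
CREDITS_5S = 25   # $0.25 per 5-second video
CREDITS_10S = 50  # $0.50 per 10-second video

budget_dollars = 10
credits = budget_dollars * CREDITS_PER_DOLLAR
print(credits // CREDITS_5S, "five-second videos")   # 40
print(credits // CREDITS_10S, "ten-second videos")   # 20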

Creating an API Key


To create an API key, navigate to the API Keys page.

Securely copy the key above and store it in a safe place. Once you close this modal, the key will not be displayed again.

The key is only visible once, so make sure to save it before closing the modal.

Environment

I am using macOS 14 Sonoma for this setup and will use Python for the implementation.

$ python --version
Python 3.12.2

Following the quickstart guide, I installed the SDK:

$ pip install runwayml
Collecting runwayml
  Downloading runwayml-2.0.0-py3-none-any.whl (71 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.2/71.2 kB 3.5 MB/s eta 0:00:00

...

Installing collected packages: runwayml
Successfully installed runwayml-2.0.0

Next, I saved the API key as an environment variable in my ~/.zshrc file:

$ open ~/.zshrc

I set the environment variable as RUNWAYML_API_SECRET:

export RUNWAYML_API_SECRET=<Your API Key Here>

Be careful with the variable name; getting it wrong results in this error:

The api_key client option must be set either by passing api_key to the client or by setting the RUNWAYML_API_SECRET environment variable
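As the error message itself suggests, you can also pass the key to the client directly instead of relying on the environment variable. A minimal sketch (MY_RUNWAY_KEY is just an illustrative variable name of your own choosing, not something the SDK expects):

import os
from runwayml import RunwayML

# Pass the key explicitly instead of relying on RUNWAYML_API_SECRET
client = RunwayML(api_key=os.environ["MY_RUNWAY_KEY"])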

Original Image

The video will be based on the following image:

[Original image]

This image was generated using FLUX 1.1 [pro] with dimensions of 1280x768, matching Gen-3 Alpha Turbo's default 16:9 aspect ratio.

For more details about FLUX 1.1 [pro], check out my previous article, "Using the Web API for FLUX 1.1 [pro]: The Latest Image Generation AI Model by the Original Team of Stable Diffusion".

Ensure the image is uploaded to an accessible URL for the API call.
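Before calling the API, it can help to confirm the image URL is actually reachable from outside. A quick sketch using only the standard library:

import urllib.request

IMAGE_URL = "<Image URL Here>"  # same placeholder used in the script below

# HEAD request to confirm the image URL is publicly reachable
req = urllib.request.Request(IMAGE_URL, method="HEAD")
with urllib.request.urlopen(req) as resp:
    # Expect a 200 status and an image/* content type
    print(resp.status, resp.headers.get("Content-Type"))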

Video Generation

Let’s generate the video.

The code below is based on the quickstart guide, using the prompt "A Japanese woman is smiling happily."

I've added comments for extra parameters.

For more details, refer to the API Reference.

import time
from runwayml import RunwayML

client = RunwayML()

# Create a new image-to-video task using the "gen3a_turbo" model
task = client.image_to_video.create(
    model='gen3a_turbo',
    prompt_image='<Image URL Here>',
    prompt_text='A Japanese woman is smiling happily',  # Must be under 512 characters
    # seed=0,  # Default: random
    # watermark=True,  # Default: False
    # duration=5,  # Default: 10
    # ratio="9:16"  # Default: "16:9"
)
task_id = task.id

# Check for completion
time.sleep(10)  # Wait
task = client.tasks.retrieve(task_id)
while task.status not in ['SUCCEEDED', 'FAILED']:
    time.sleep(10)  # Wait
    task = client.tasks.retrieve(task_id)

print('Task complete:', task)

Once the task completes, the printed task object includes the URL of the generated video.
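If you want to save the result locally, here is a hedged sketch: it assumes the succeeded task exposes the video URL(s) as task.output, so verify the field name against your printed task object and the API Reference.

import urllib.request

# Assumes `task` is the SUCCEEDED task from the script above and that
# task.output holds the generated video URL(s)
if task.status == 'SUCCEEDED' and task.output:
    video_url = task.output[0]
    urllib.request.urlretrieve(video_url, 'output.mp4')
    print('Saved to output.mp4')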

Here is the result:

The video was generated as expected, although the movement looks slightly uncanny.

Conclusion

The rapid development and competition in the generative AI space are remarkable.

How long will this trend continue?

Original Japanese Article

Runwayの「Gen-3 Alpha Turbo」のAPIを呼んで動画をAI生成してみた (Calling Runway's "Gen-3 Alpha Turbo" API to Generate Videos with AI)
