Video Generation · Stability AI · New · Open Source

Stable Video Diffusion API

Open-source video generation from text or images with smooth motion and temporal consistency.

Parameters

1.5B

Context

N/A

Organization

Stability AI

Pricing

$0.15

per second of video

Try Stable Video Diffusion for Free

Quick Start

Start using Stable Video Diffusion in minutes. VoltageGPU exposes the model through a simple JSON-over-HTTPS endpoint; the examples below use Python's requests library and cURL.

Python (requests)
pip install requests
import requests

url = "https://api.voltagegpu.com/v1/video/generations"
headers = {
    "Authorization": "Bearer YOUR_VOLTAGE_API_KEY",
    "Content-Type": "application/json"
}

response = requests.post(url, headers=headers, json={
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
})

video_url = response.json()["data"][0]["url"]
print(f"Generated video: {video_url}")
cURL
Terminal
curl -X POST https://api.voltagegpu.com/v1/video/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
  }'
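Once a request like the ones above succeeds, the clip location is read from `data[0].url` in the JSON response. A small helper for extracting that URL and saving the file to disk (the response shape follows the quick-start example; the URL below is a stand-in, not a real endpoint):

```python
import urllib.request

def extract_video_url(resp: dict) -> str:
    # The generation response carries the clip location at data[0].url.
    return resp["data"][0]["url"]

def download_video(url: str, dest: str) -> None:
    # Save the finished clip to disk (SVD output is typically MP4).
    urllib.request.urlretrieve(url, dest)

sample = {"data": [{"url": "https://cdn.example.com/clip.mp4"}]}
print(extract_video_url(sample))  # https://cdn.example.com/clip.mp4
```

In a real script you would pass `response.json()` to `extract_video_url` and then call `download_video` on the result.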

Pricing

| Component | Price | Unit |
| --- | --- | --- |
| Video Generation | $0.15 | per second of video |

New accounts receive $5 free credit. No credit card required to start.


Capabilities & Benchmarks

Stable Video Diffusion generates 14-25 frame video clips at resolutions up to 576x1024. It supports image-to-video generation (animate a still image), text-to-video generation, motion control parameters, and frame rate adjustment. The model produces temporally coherent videos with realistic motion, camera pans, and subject consistency across frames.
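Since clip length is simply `num_frames / fps`, you can estimate duration and per-clip cost before submitting a job. A quick sketch using the $0.15-per-second rate from the pricing table:

```python
def clip_duration_seconds(num_frames: int, fps: int) -> float:
    # Playback length of the generated clip.
    return num_frames / fps

def clip_cost_usd(num_frames: int, fps: int, rate: float = 0.15) -> float:
    # Billed per second of output video at the listed rate.
    return clip_duration_seconds(num_frames, fps) * rate

# The quick-start request (25 frames at 8 fps) yields about 3.1 s of video.
print(clip_duration_seconds(25, 8))    # 3.125
print(round(clip_cost_usd(25, 8), 4))  # 0.4688
```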


About Stable Video Diffusion

Stable Video Diffusion (SVD) is Stability AI's open-source video generation model that creates high-quality short video clips from text prompts or reference images. It generates 14-25 frame videos at up to 576x1024 resolution with smooth motion, consistent subjects, and temporal coherence. SVD uses a latent video diffusion architecture trained on a large-scale video dataset, producing realistic motion and camera movements. It is ideal for creating product animations, social media clips, and visual effects.


Use Cases

🎬

Product Animations

Create rotating product views, unboxing animations, and product showcase videos.

📱

Social Media Video

Generate short-form video content for TikTok, Instagram Reels, and YouTube Shorts.

✨

Visual Effects

Create motion graphics, animated backgrounds, and visual effect elements.

🎥

Concept Visualization

Bring static concepts to life with animated storyboards and mood videos.

🎮

Game Asset Animation

Generate animated sprites, environment effects, and cinematic sequences.


API Reference

Endpoint

POST https://api.voltagegpu.com/v1/video/generations

Headers

| Header | Value | Required |
| --- | --- | --- |
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Yes |
| Content-Type | application/json | Yes |

Model ID

stabilityai/stable-video-diffusion-img2vid-xt-1-1

Use this value as the model parameter in your API requests.

Example Request

curl -X POST https://api.voltagegpu.com/v1/video/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
  }'



Frequently Asked Questions

How long are the generated videos?

Stable Video Diffusion generates video clips of 14-25 frames, typically resulting in 1-4 seconds of video depending on the frame rate. For longer videos, you can generate multiple clips and stitch them together.
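If you do stitch clips together, ffmpeg's concat demuxer is a common route. A sketch that builds the file list it expects (the clip filenames here are placeholders):

```python
def make_concat_list(clip_paths: list[str]) -> str:
    # ffmpeg's concat demuxer expects one "file '<path>'" line per clip.
    return "".join(f"file '{p}'\n" for p in clip_paths)

clips = ["clip_001.mp4", "clip_002.mp4", "clip_003.mp4"]
listing = make_concat_list(clips)
print(listing, end="")
```

Write the listing to `clips.txt`, then run `ffmpeg -f concat -safe 0 -i clips.txt -c copy combined.mp4`. Replace `-c copy` with a re-encode if the clips differ in resolution or frame rate.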

Can I control the motion in generated videos?

Yes, you can influence the motion through the motion bucket parameter, which controls the amount of motion in the video. Higher values produce more dynamic motion, while lower values create subtle, gentle movements.
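As a sketch, a request payload with an explicit motion setting might look like the following. The `motion_bucket_id` name comes from the SVD reference implementation (roughly 1 for nearly still to 255 for very dynamic, with 127 a common default); whether this endpoint accepts it under that exact name is an assumption, so verify against the API reference:

```python
def build_payload(prompt: str, motion_bucket_id: int = 127,
                  num_frames: int = 25, fps: int = 8) -> dict:
    # NOTE: "motion_bucket_id" follows the SVD reference implementation;
    # confirm the exact field name this endpoint expects before relying on it.
    return {
        "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
        "prompt": prompt,
        "num_frames": num_frames,
        "fps": fps,
        "motion_bucket_id": motion_bucket_id,
    }

# Low value for subtle, gentle movement.
payload = build_payload("Ocean waves at sunset", motion_bucket_id=40)
print(payload["motion_bucket_id"])  # 40
```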

What resolution are the output videos?

Stable Video Diffusion generates videos at up to 576x1024 resolution. The model works best at its native resolution and may produce artifacts at significantly different aspect ratios.

Can I animate an existing image?

Yes, Stable Video Diffusion supports image-to-video generation. You can provide a reference image and the model will animate it with natural motion while preserving the original subject and style.
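A common pattern for HTTP image-to-video APIs is to send the reference frame base64-encoded in the request body. The `image` field name below is an assumption for illustration, not a documented parameter of this endpoint:

```python
import base64

def encode_image(image_bytes: bytes) -> str:
    # Base64-encode raw image bytes for embedding in a JSON payload.
    return base64.b64encode(image_bytes).decode("ascii")

fake_png = b"\x89PNG\r\n\x1a\n"  # stand-in header bytes, not a real image
payload = {
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    # HYPOTHETICAL field name -- check the API reference for the real one.
    "image": encode_image(fake_png),
    "num_frames": 25,
    "fps": 8,
}
print(payload["image"])  # iVBORw0KGgo=
```

In practice you would read the bytes with `open("photo.png", "rb").read()` and POST the payload to the same `/v1/video/generations` endpoint.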


Start using Stable Video Diffusion today

Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.