Stable Video Diffusion API
Open-source video generation from text or images with smooth motion and temporal consistency.
| Parameters | Context | Organization |
|---|---|---|
| 1.5B | N/A | Stability AI |
Quick Start
Start using Stable Video Diffusion in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
```python
import requests

url = "https://api.voltagegpu.com/v1/video/generations"
headers = {
    "Authorization": "Bearer YOUR_VOLTAGE_API_KEY",
    "Content-Type": "application/json"
}

response = requests.post(url, headers=headers, json={
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
})

video_url = response.json()["data"][0]["url"]
print(f"Generated video: {video_url}")
```

```bash
curl -X POST https://api.voltagegpu.com/v1/video/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
  }'
```

Pricing
| Component | Price | Unit |
|---|---|---|
| Video Generation | $0.15 | per second of video |
New accounts receive $5 free credit. No credit card required to start.
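Because billing is per second of output, cost follows directly from `num_frames / fps`. A quick sketch using the $0.15/s rate from the table above (`estimate_cost` is an illustrative helper, not part of the API):

```python
def estimate_cost(num_frames: int, fps: int, price_per_second: float = 0.15) -> float:
    """Duration in seconds times the per-second rate from the pricing table."""
    return (num_frames / fps) * price_per_second

# The quick-start request: 25 frames at 8 fps = 3.125 s of video, about $0.47.
print(f"${estimate_cost(25, 8):.2f}")
```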
Capabilities & Benchmarks
Stable Video Diffusion generates 14-25 frame video clips at resolutions up to 576x1024. It supports image-to-video generation (animate a still image), text-to-video generation, motion control parameters, and frame rate adjustment. The model produces temporally coherent videos with realistic motion, camera pans, and subject consistency across frames.
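For image-to-video, the request carries a reference image in the body. A minimal sketch of building such a request; the `image` field name (a base64-encoded input frame) is an assumption here, so check the API reference for the exact parameter:

```python
import base64

def build_img2vid_payload(image_path: str, num_frames: int = 25, fps: int = 8) -> dict:
    """Build an image-to-video request body for /v1/video/generations.

    NOTE: the `image` field (base64-encoded reference frame) is an assumed
    parameter name; consult the VoltageGPU API reference for the exact field.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
        "image": image_b64,
        "num_frames": num_frames,
        "fps": fps,
        "width": 1024,
        "height": 576,
    }

# POST this dict as JSON exactly like the text-to-video quick-start request.
```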
About Stable Video Diffusion
Stable Video Diffusion (SVD) is Stability AI's open-source video generation model that creates high-quality short video clips from text prompts or reference images. It generates 14-25 frame videos at up to 576x1024 resolution with smooth motion, consistent subjects, and temporal coherence. SVD uses a latent video diffusion architecture trained on a large-scale video dataset, producing realistic motion and camera movements. It is ideal for creating product animations, social media clips, and visual effects.
Use Cases
Product Animations
Create rotating product views, unboxing animations, and product showcase videos.
Social Media Video
Generate short-form video content for TikTok, Instagram Reels, and YouTube Shorts.
Visual Effects
Create motion graphics, animated backgrounds, and visual effect elements.
Concept Visualization
Bring static concepts to life with animated storyboards and mood videos.
Game Asset Animation
Generate animated sprites, environment effects, and cinematic sequences.
API Reference
Endpoint
https://api.voltagegpu.com/v1/video/generations

Headers
| Header | Value | Required? |
|---|---|---|
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |
Model ID
stabilityai/stable-video-diffusion-img2vid-xt-1-1

Use this value as the model parameter in your API requests.
Example Request
```bash
curl -X POST https://api.voltagegpu.com/v1/video/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
  }'
```

Related Models
Stable Video Diffusion — Related Resources
Confidential Compute
Run this model on hardware-sealed GPUs with Intel TDX attestation.
Confidential AI Inference
OpenAI-compatible API with TEE-attested model serving.
Pricing
Confidential Compute and AI Inference pricing with no hidden fees.
Browse Confidential GPUs
H200, H100, B200 with hardware-sealed Intel TDX compute.
Frequently Asked Questions
How long are the generated videos?
Stable Video Diffusion generates video clips of 14-25 frames, typically resulting in 1-4 seconds of video depending on the frame rate. For longer videos, you can generate multiple clips and stitch them together.
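Stitching downloaded clips can be done with ffmpeg's concat demuxer. A sketch, assuming ffmpeg is on your PATH and the clips were saved locally (the helper names are illustrative):

```python
import os

def write_concat_list(clip_paths: list[str], list_path: str) -> None:
    """Write the file list that ffmpeg's concat demuxer reads."""
    with open(list_path, "w") as f:
        for path in clip_paths:
            f.write(f"file '{os.path.abspath(path)}'\n")

def concat_command(list_path: str, out_path: str) -> list[str]:
    """ffmpeg invocation that joins the listed clips without re-encoding;
    clips from the same model share codec and resolution, so `-c copy` is safe."""
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", out_path]

# Usage:
#   write_concat_list(["clip1.mp4", "clip2.mp4"], "clips.txt")
#   subprocess.run(concat_command("clips.txt", "full.mp4"), check=True)
```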
Can I control the motion in generated videos?
Yes, you can influence the motion through the motion bucket parameter, which controls the amount of motion in the video. Higher values produce more dynamic motion, while lower values create subtle, gentle movements.
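`motion_bucket_id` is SVD's motion-conditioning input (roughly 1-255, with ~127 a common default); whether this API exposes it under that exact name is an assumption, so verify against the parameter docs. A sketch:

```python
def with_motion(payload: dict, motion_bucket_id: int) -> dict:
    """Return a copy of a generation payload with the motion strength set.
    Assumes the API accepts SVD's `motion_bucket_id` parameter name (unverified)."""
    if not 1 <= motion_bucket_id <= 255:
        raise ValueError("motion_bucket_id should be in [1, 255]")
    return {**payload, "motion_bucket_id": motion_bucket_id}

base = {
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "ocean waves rolling onto a beach at sunset",
    "num_frames": 25,
    "fps": 8,
}
subtle = with_motion(base, 40)    # low value: gentle, subtle movement
dynamic = with_motion(base, 180)  # high value: pronounced, dynamic motion
```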
What resolution are the output videos?
Stable Video Diffusion generates videos at up to 576x1024 resolution. The model works best at its native resolution and may produce artifacts at significantly different aspect ratios.
Can I animate an existing image?
Yes, Stable Video Diffusion supports image-to-video generation. You can provide a reference image and the model will animate it with natural motion while preserving the original subject and style.
Start using Stable Video Diffusion today
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.