Open-source video generation from text or images with smooth motion and temporal consistency.
| Parameters | Context | Organization |
|---|---|---|
| 1.5B | N/A | Stability AI |
Start using Stable Video Diffusion in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.
```python
import requests

url = "https://api.voltagegpu.com/v1/video/generations"
headers = {
    "Authorization": "Bearer YOUR_VOLTAGE_API_KEY",
    "Content-Type": "application/json"
}
response = requests.post(url, headers=headers, json={
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
})
response.raise_for_status()  # fail fast on auth or validation errors

video_url = response.json()["data"][0]["url"]
print(f"Generated video: {video_url}")
```

```shell
curl -X POST https://api.voltagegpu.com/v1/video/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    "prompt": "A cat sitting on a windowsill watching rain fall outside, gentle motion",
    "num_frames": 25,
    "fps": 8,
    "width": 1024,
    "height": 576
  }'
```

| Component | Price | Unit |
|---|---|---|
| Video Generation | $0.15 | per second of video |
New accounts receive $5 free credit. No credit card required to start.
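Because billing is per second of output and a clip's duration is just `num_frames / fps`, the cost of a request can be estimated up front. A minimal sketch:

```python
def estimate_cost(num_frames: int, fps: int, price_per_second: float = 0.15) -> float:
    """Estimate the price of one clip at $0.15 per second of generated video."""
    duration_seconds = num_frames / fps
    return round(duration_seconds * price_per_second, 2)

# The 25-frame, 8 fps example above runs about 3.1 seconds
print(estimate_cost(25, 8))  # 0.47
```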
Stable Video Diffusion generates 14-25 frame video clips at resolutions up to 576x1024. It supports image-to-video generation (animate a still image), text-to-video generation, motion control parameters, and frame rate adjustment. The model produces temporally coherent videos with realistic motion, camera pans, and subject consistency across frames.
Stable Video Diffusion (SVD) is Stability AI's open-source video generation model that creates high-quality short video clips from text prompts or reference images. It generates 14-25 frame videos at up to 576x1024 resolution with smooth motion, consistent subjects, and temporal coherence. SVD uses a latent video diffusion architecture trained on a large-scale video dataset, producing realistic motion and camera movements. It is ideal for creating product animations, social media clips, and visual effects.
Create rotating product views, unboxing animations, and product showcase videos.
Generate short-form video content for TikTok, Instagram Reels, and YouTube Shorts.
Create motion graphics, animated backgrounds, and visual effect elements.
Bring static concepts to life with animated storyboards and mood videos.
Generate animated sprites, environment effects, and cinematic sequences.
https://api.voltagegpu.com/v1/video/generations

| Header | Value | Status |
|---|---|---|
| Authorization | Bearer YOUR_VOLTAGE_API_KEY | Required |
| Content-Type | application/json | Required |

`stabilityai/stable-video-diffusion-img2vid-xt-1-1`

Use this value as the `model` parameter in your API requests.
Great price-performance for smaller models with 24GB VRAM.
Enterprise-grade GPU for production inference at scale.
Access this model and 140+ others through our OpenAI-compatible API.
Compare GPU cloud pricing and model hosting features.
View GPU compute and AI inference pricing with no hidden fees.
Deploy a GPU pod in under 60 seconds to run models locally.
**How long are the generated videos?**
Stable Video Diffusion generates video clips of 14-25 frames, which typically works out to 1-4 seconds of video depending on the frame rate. For longer videos, you can generate multiple clips and stitch them together.
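Stitching happens outside the API. One common approach is ffmpeg's concat demuxer; the sketch below assumes the generated clips have already been downloaded as local MP4 files with matching codecs:

```python
import os
import subprocess
import tempfile

def build_concat_list(clip_paths):
    """Build the file-list text that ffmpeg's concat demuxer expects."""
    return "".join(f"file '{path}'\n" for path in clip_paths)

def stitch_clips(clip_paths, output_path):
    """Losslessly concatenate MP4 clips into one file (requires ffmpeg on PATH)."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(build_concat_list(os.path.abspath(p) for p in clip_paths))
        list_path = f.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", output_path],
            check=True,
        )
    finally:
        os.remove(list_path)
```

Using `-c copy` avoids re-encoding, so the join is fast and quality-preserving as long as all clips share the same codec, resolution, and frame rate.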
**Can I control the amount of motion?**
Yes, you can influence the motion through the motion bucket parameter, which controls the amount of motion in the video. Higher values produce more dynamic motion, while lower values create subtle, gentle movements.
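The docs above don't pin down the parameter's exact name or range. In SVD itself the control is `motion_bucket_id`, an integer roughly in 1-255; whether VoltageGPU forwards it under that name is an assumption. A sketch that clamps the value and passes it through:

```python
import requests

def clamp_motion(value: int) -> int:
    """Clamp a motion setting into SVD's usual 1-255 motion_bucket_id range."""
    return max(1, min(255, value))

def generate_with_motion(prompt: str, motion: int = 127) -> str:
    # NOTE: "motion_bucket_id" is SVD's native control; VoltageGPU accepting it
    # under this exact name is an assumption, not confirmed by the docs above.
    response = requests.post(
        "https://api.voltagegpu.com/v1/video/generations",
        headers={"Authorization": "Bearer YOUR_VOLTAGE_API_KEY"},
        json={
            "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
            "prompt": prompt,
            "num_frames": 25,
            "fps": 8,
            "motion_bucket_id": clamp_motion(motion),
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["data"][0]["url"]
```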
**What resolutions are supported?**
Stable Video Diffusion generates videos at up to 576x1024 resolution. The model works best at its native resolution and may produce artifacts at significantly different aspect ratios.
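To stay at the native resolution and avoid artifacts, it can help to validate dimensions before submitting a job. A small sketch, assuming 1024x576 (landscape) and 576x1024 (portrait) are the two native layouts:

```python
NATIVE_RESOLUTIONS = {(1024, 576), (576, 1024)}  # landscape, portrait

def is_native_resolution(width: int, height: int) -> bool:
    """Return True if the requested size matches an assumed SVD native layout."""
    return (width, height) in NATIVE_RESOLUTIONS

print(is_native_resolution(1024, 576))  # True
print(is_native_resolution(512, 512))   # False
```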
**Can I animate an existing image?**
Yes, Stable Video Diffusion supports image-to-video generation. You can provide a reference image and the model will animate it with natural motion while preserving the original subject and style.
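For image-to-video, the reference image has to travel in the request body. One common pattern is a base64 data URL; the `"image"` field name below is an assumption, since the docs above don't show the exact payload:

```python
import base64
import requests

def image_to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Inline an image as a base64 data URL for a JSON payload."""
    return f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")

def animate_image(image_path: str) -> str:
    with open(image_path, "rb") as f:
        data_url = image_to_data_url(f.read())
    response = requests.post(
        "https://api.voltagegpu.com/v1/video/generations",
        headers={"Authorization": "Bearer YOUR_VOLTAGE_API_KEY"},
        json={
            "model": "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
            "image": data_url,  # ASSUMPTION: field name not confirmed by the docs above
            "num_frames": 25,
            "fps": 8,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["data"][0]["url"]
```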
Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.