Mistral Small 3.1 24B Instruct 2503

by Chutesai

Mistral Small 3.1 is a 24B parameter instruction-tuned model with vision and long-context (128k tokens) capabilities, ideal for fast local inference, function calling, and multimodal understanding across dozens of languages.
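Since the model is served for conversational use, a request to it typically takes the OpenAI-style chat-completions shape. The sketch below only assembles such a payload locally; the endpoint URL, authentication, and exact serving API are assumptions not confirmed by this page.

```python
import json

# Hypothetical sketch: building a chat-completions request body for this
# model. The model identifier comes from this page; everything else
# (field names follow the common OpenAI-compatible convention) is assumed.
def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completions payload for the model."""
    return {
        "model": "chutesai/Mistral-Small-3.1-24B-Instruct-2503",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize this model card in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever chat-completions endpoint the host exposes, with an API key in the request headers.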

Hot ⸱ Public ⸱ LLM
290.4K runs in the last 7 days
Created 8 months ago

Pricing

Input: $0.06 / M tokens
Output: $0.21 / M tokens
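At the listed per-million-token rates, the cost of a single request is straightforward to estimate. A minimal sketch (the example token counts are illustrative, not from this page):

```python
# Per-token rates derived from the listed prices ($0.06/M input, $0.21/M output).
INPUT_RATE = 0.06 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.21 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a request with 10k input tokens and 1k output tokens:
print(f"${request_cost(10_000, 1_000):.5f}")  # → $0.00081
```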
