Mistral Nemo Instruct 2407

by Unsloth

Unsloth is a fine-tuning framework that trains open-source LLMs (Llama, Mistral, Gemma) 2-5x faster with up to 70% less memory, making it practical to adapt pre-trained models to custom datasets efficiently.

Hot · Public · LLM
444.0K Runs · 7D
Created 9 months ago

Pricing

Input: $0.04/M tokens
Output: $0.08/M tokens
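A minimal sketch of how these per-million-token rates translate into a per-request cost; the helper function and the example token counts are illustrative, not part of any official API.

```python
# Rates from the listing: $0.04 per 1M input tokens, $0.08 per 1M output tokens.
INPUT_PRICE_PER_M = 0.04
OUTPUT_PRICE_PER_M = 0.08

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # $0.000120
```

Output tokens cost twice as much as input tokens at these rates, so long completions dominate the bill.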

Model: unsloth/Mistral-Nemo-Instruct-2407