Mistral Small 3.1
Categories:
LLM models
Price: Open Source
This open-source multimodal model competes with larger proprietary models while offering a 128k-token context window, an inference speed of roughly 150 tokens per second, and strong image-understanding performance, even on a single GPU.
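As a quick, hedged illustration of trying the model, the sketch below calls Mistral's hosted chat-completions endpoint. The `mistral-small-latest` alias and the `MISTRAL_API_KEY` environment variable are assumptions for the example, not details taken from this listing.

```python
# Minimal sketch: querying Mistral's hosted chat-completions API.
# The model alias "mistral-small-latest" and the MISTRAL_API_KEY variable
# are assumptions for illustration, not details from this listing.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"

def ask(prompt: str) -> str:
    """Send a single user prompt and return the assistant's reply text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "mistral-small-latest",  # assumed alias for Small 3.1
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the key features of Mistral Small 3.1 in one sentence."))
```

Because the weights are open, the same prompt could instead be run locally with a self-hosted inference server; the hosted API is used here only to keep the example short.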