
Introducing Math Mini & Code Mini: Efficient Models for Everyone

Posted by Enosis Labs Team on May 20, 2025


We are pleased to introduce Math Mini and share an update on Code Mini, our new families of compact, fast, and efficient models. Math Mini (0.6B and 1.7B versions) is available now, while Code Mini and the 4B versions of Math Mini are currently in development, with Code Mini scheduled for release in mid-June.

These models are derived from advanced architectures like Qwen3 and further refined with Unsloth optimization. Our goal is to make AI capabilities in mathematics and programming more accessible, offering reliable and versatile tools for students, developers, educators, and businesses.

A Focus on Efficiency and Specialization

Math Mini and Code Mini are developed based on established architectures and have been optimized with Unsloth to enhance efficiency and reduce resource consumption. Our fine-tuning process utilizes carefully selected datasets, covering a range of mathematical problems and programming challenges to build specialized capabilities.

The result is a family of models designed to respond effectively, understand context, and address specialized tasks with good precision.

Key Features

  • Compact model architecture: The series starts at 0.6B and 1.7B parameters (currently available for Math Mini), with versions up to 4B planned (Math Mini 4B is in development). This range is designed for efficient deployment, even on modest hardware.
  • Optimization with Unsloth: The models are fine-tuned using Unsloth, a tool known for improving training speed and reducing memory usage. This contributes to faster model responses and lower latency in applications.
  • Specialized training: Math Mini focuses on mathematics (algebra, calculus, logic, competition-level problems). Code Mini, currently in development for a mid-June release, will target programming (Python, JavaScript, algorithms, debugging).
  • Versatile formats: Available Math Mini models are offered in 16-bit and GGUF (4-bit, 5-bit, 8-bit) on Hugging Face for broad compatibility, letting users choose the format best suited to their environment.
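To make the format choice above concrete, here is a back-of-envelope sketch of how the quantization level affects model size: weight storage is roughly parameters times bits per weight. This is an illustrative estimate only; it ignores GGUF metadata, quantization block overhead, and runtime KV-cache memory, so real files will be somewhat larger.

```python
# Rough size estimate for a quantized model: params * bits / 8 bytes.
# Illustrative only -- ignores file metadata and runtime overhead.

def est_size_gb(params: float, bits: int) -> float:
    """Approximate weight-storage size in GB at a given bit width."""
    return params * bits / 8 / 1e9

for params, name in [(0.6e9, "Math Mini 0.6B"), (1.7e9, "Math Mini 1.7B")]:
    for bits in (4, 5, 8, 16):
        print(f"{name} @ {bits}-bit: ~{est_size_gb(params, bits):.2f} GB")
```

By this estimate, the 0.6B model drops from roughly 1.2 GB at 16-bit to about 0.3 GB at 4-bit, which is what makes the GGUF builds practical on personal devices.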

Availability and Getting Started

16-bit versions: Suitable for servers and environments with moderate resources that require high precision. (Math Mini 0.6B and 1.7B available now; Code Mini and Math Mini 4B in development.)

GGUF formats: Designed for personal devices, edge computing, and applications requiring high efficiency. Compatible with Llama.cpp and Ollama. (Math Mini 0.6B and 1.7B available now; Code Mini and Math Mini 4B in development.)
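As a sketch of what running a GGUF build locally might look like, the commands below show the two paths mentioned above. The file and model names here are placeholders, not confirmed release names; check the Enosis Labs page on Hugging Face for the actual repository and file names.

```shell
# llama.cpp: run a downloaded GGUF file directly
# (math-mini-1.7b-q4.gguf is a hypothetical filename)
./llama-cli -m math-mini-1.7b-q4.gguf -p "Solve for x: 2x + 3 = 11"

# Ollama: import a local GGUF via a Modelfile, then run it
echo 'FROM ./math-mini-1.7b-q4.gguf' > Modelfile
ollama create math-mini -f Modelfile
ollama run math-mini "Solve for x: 2x + 3 = 11"
```

The 4-bit and 5-bit quantizations are typically the ones to reach for on laptops and edge devices; the 8-bit build trades more memory for precision closer to the 16-bit weights.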

Available Math Mini Versions

Model            Formats                    Status
Math Mini 0.6B   GGUF (4-bit/5-bit/8-bit)   Available
Math Mini 1.7B   GGUF (4-bit/5-bit/8-bit)   Available
Math Mini 4B     GGUF                       In development

Explore Our Models and Get Involved

We invite you to try the available versions of Math Mini on Hugging Face, and we look forward to releasing Code Mini in mid-June. Share your experiences and help us build more accessible and useful AI for everyone.