Back to Articles

Introducing Math Mini & Code Mini: Efficient Models for Everyone

Enosis Labs Team

We are pleased to introduce Math Mini and share an update on Code Mini, our new families of compact, fast, and efficient models. Math Mini (0.6B and 1.7B versions) is available now, while Code Mini and the 4B versions of Math Mini are currently in development, with Code Mini scheduled for release in mid-June. These models are derived from advanced architectures like Qwen3 and further refined with Unsloth optimization. Our goal is to make AI capabilities in mathematics and programming more accessible, offering reliable and versatile tools for students, developers, educators, and businesses.

In a field often requiring substantial infrastructure, these mini models are designed to offer new possibilities. We aim to enable users to address mathematical problems on common devices and support real-time code generation, focusing on speed and efficiency. This initiative seeks to empower more individuals and organizations to experiment, learn, and create with AI by reducing technical and economic barriers.

Math Mini & Code Mini: Accessible and efficient specialized AI models by Enosis Labs.

A Focus on Efficiency and Specialization

Math Mini and Code Mini are developed based on established architectures and have been optimized with Unsloth to enhance efficiency and reduce resource consumption. Our fine-tuning process utilizes carefully selected datasets, covering a range of mathematical problems and programming challenges to build specialized capabilities.

"The result is a family of models designed to respond effectively, understand context, and address specialized tasks with good precision."

Through community engagement and ongoing development, these models are tested in various scenarios, such as educational notebooks and lightweight development environments. Math Mini and Code Mini represent our commitment to the convergence of accessibility, specialization, and operational efficiency.

The Value of Mini Models

While model sizes in AI are often increasing, we see significant value in models that adapt to diverse user needs. Mini models offer several key advantages:

  • Deploying AI on edge devices, mobile phones, and low-cost servers, broadening access.
  • Reducing energy consumption and infrastructure costs, contributing to more sustainable AI practices.
  • Facilitating integration into educational applications, personal assistants, and productivity tools.
  • Encouraging experimentation and learning for communities and projects with varied resource levels.

Key Features

  • Compact Model Architecture

    The series currently includes 0.6B and 1.7B versions (available now for Math Mini), with 4B versions planned (Math Mini 4B is in development). This compact architecture is designed for efficient deployment, even on modest hardware.

  • Optimization with Unsloth

    The models are fine-tuned using Unsloth, a library known for accelerating training and reducing memory usage during fine-tuning. That efficiency focus carries over to deployment, where the models aim for fast responses and low latency in applications. A minimal sketch of this kind of fine-tuning workflow appears after this list.

  • Specialized Training

    Math Mini is focused on mathematics (algebra, calculus, logic, competition-level problems). Code Mini, currently in development for a mid-June release, will target programming (Python, JavaScript, algorithms, debugging). Both are trained on domain-relevant tasks.

  • Versatile Formats

    Available Math Mini models are offered in 16-bit and GGUF (4-bit, 5-bit, 8-bit) on Hugging Face for broad compatibility. This allows users to choose formats best suited to their environment.
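
To make the Unsloth optimization mentioned above more concrete, the sketch below outlines a typical LoRA fine-tuning pass with Unsloth and TRL. It is an illustrative assumption rather than our exact training recipe: the base model name, hyperparameters, and dataset path are placeholders, and API details can differ across library versions.

```python
# Minimal sketch of an Unsloth LoRA fine-tuning pass (illustrative only).
# The base model, hyperparameters, and dataset path are assumptions, not
# the exact Math Mini training recipe; API details vary across versions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a Qwen3 base model with Unsloth's memory-efficient loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-1.7B",  # assumed base; pick the size you need
    max_seq_length=2048,
    load_in_4bit=True,             # 4-bit loading keeps VRAM usage low
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any chat-formatted math dataset works here; this path is a placeholder.
dataset = load_dataset("json", data_files="math_reasoning.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",     # column holding the formatted conversations
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="math-mini-lora",
    ),
)
trainer.train()
```

Unsloth also supports exporting merged weights to 16-bit or quantized GGUF files, which lines up with the formats described below.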

Performance Evaluation and Future Outlook

We are conducting performance tests for Math Mini against other relevant mini models like Phi-3-mini and TinyLlama, and will do the same for Code Mini upon its release. We plan to publish a comparative table with results in mathematical and code-related tasks, measuring aspects like accuracy, speed, and resource consumption.

Benchmark Results Coming Soon.

Our objective is for Math Mini and the upcoming Code Mini to offer competitive performance on specialized tasks, while maintaining efficiency and ease of use. We invite the community to test the available models, provide feedback, and share real-world use cases to aid in their continued development.

Potential Use Cases

  • Mathematical problem-solving support, from school exercises to more complex challenges (Math Mini - 0.6B & 1.7B available).
  • Programming assistance, including code generation, algorithm explanation, and debugging support (Code Mini - in development, mid-June target).
  • Integration into educational apps, e-learning platforms, and intelligent personal assistants.
  • Rapid prototyping of custom AI tools for startups and research projects.

Availability and Getting Started

Math Mini (0.6B and 1.7B versions) is now available on Hugging Face in 16-bit and GGUF (4-bit, 5-bit, 8-bit) formats. Code Mini and the 4B versions of Math Mini are currently in development, with Code Mini scheduled for release in mid-June. You can download the available Math Mini models now, test them in your projects, and contribute feedback. Documentation includes usage examples and integration guides.

16-bit Versions (Math Mini 0.6B & 1.7B)

Suitable for servers and environments with moderate resources requiring high precision. (Math Mini 0.6B & 1.7B available now. Code Mini & Math Mini 4B in development)
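
As a quick start for the 16-bit builds, the sketch below loads the 1.7B checkpoint with Hugging Face Transformers. It assumes the repository exposes a standard Qwen3-style causal language model with a chat template; check the model card for the recommended generation settings.

```python
# Quick-start sketch: load Math Mini 1.7B (16-bit) with Transformers.
# Assumes the repo exposes a standard Qwen3-style causal LM with a chat
# template; check the model card for the recommended settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "enosislabs/math-mini-1.7b-preview-16bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Solve 2x + 6 = 14 and explain each step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```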

GGUF Formats (Math Mini 0.6B & 1.7B)

Designed for personal devices, edge computing, and applications requiring high efficiency. Compatible with Llama.cpp and Ollama. (Math Mini 0.6B & 1.7B available now. Code Mini & Math Mini 4B in development)
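
For the GGUF builds, a llama.cpp-based runtime can load the quantized files directly. The sketch below uses the llama-cpp-python bindings and fetches a quantized file straight from the Hugging Face repository; the GGUF filename pattern is an assumption, so check the repo's file list for the exact names.

```python
# Sketch: run a quantized Math Mini GGUF build with llama-cpp-python.
# The GGUF filename pattern is an assumption; check the repository's
# file listing on Hugging Face for the exact names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="enosislabs/math-mini-1.7b-preview-gguf",
    filename="*Q4_K_M.gguf",  # glob for an assumed 4-bit quantization name
    n_ctx=2048,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the derivative of x^3 + 2x?"}]
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF files can also be imported into Ollama via a Modelfile if you prefer that workflow.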

Want to contribute? Our repository on Hugging Face (Enosis Labs) is open to suggestions, issues, and pull requests for the available models. Your participation can help improve these mini models!

Our Training Approach for Math Mini

Math Mini's capabilities are rooted in its training with the Enosis Labs Mathematics Reasoning Dataset. This is a curated collection of math problems with detailed, step-by-step solutions, covering arithmetic, algebra, geometry, calculus, and mathematical physics.

Each problem is structured in a conversational format, intended to train models that explain their reasoning clearly. The explanations follow a logical progression to encourage step-by-step analysis.
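
To make that format concrete, a single record might look like the hypothetical sketch below. The field names and wording are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical example of one conversational, step-by-step record.
# Field names and wording are illustrative assumptions, not the
# dataset's actual schema.
example_record = {
    "messages": [
        {
            "role": "user",
            "content": "Solve for x: 3x - 7 = 11.",
        },
        {
            "role": "assistant",
            "content": (
                "Step 1: Add 7 to both sides, giving 3x = 18.\n"
                "Step 2: Divide both sides by 3, giving x = 6.\n"
                "Answer: x = 6."
            ),
        },
    ]
}
```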

This dataset was developed by our team at Enosis Labs, incorporating data synthesized with the assistance of advanced models such as Google Gemini and DeepSeek, along with material from international competitions and educational resources. Its goal is to strengthen mathematical reasoning and step-by-step explanation in AI models, making them useful for education, benchmarking, and learning.

Available Math Mini Versions

  • Math Mini 0.6B, GGUF (4-bit/5-bit/8-bit): enosislabs/math-mini-0.6b-preview-gguf. Public, available now. Most compact version available. View on HF.
  • Math Mini 0.6B, 16-bit: enosislabs/math-mini-0.6b-preview-16bits. Public, available now. Most compact version available. View on HF.
  • Math Mini 1.7B, GGUF (4-bit/5-bit/8-bit): enosislabs/math-mini-1.7b-preview-gguf. Public, available now. Stable and recommended. View on HF.
  • Math Mini 1.7B, 16-bit: enosislabs/math-mini-1.7b-preview-16bits. Public, available now. Stable and recommended. View on HF.
  • Math Mini 4B, GGUF (4-bit/5-bit/8-bit): enosislabs/math-mini-4b-gguf. Experimental, in development. Larger version. View on HF.
  • Math Mini 4B, 16-bit: enosislabs/math-mini-4b-16bits. Experimental, in development. Larger version. View on HF.

Public: Stable models recommended for general use.

Experimental: Models that, once available, may still be under active iteration or be intended for specific testing purposes.

GGUF: Formats optimized for compatibility with tools like Llama.cpp and Ollama.

In Development: These models are currently under development and will be released once ready.

Explore Our Models and Get Involved

We invite you to try the available versions of Math Mini on Hugging Face and look forward to releasing Code Mini (mid-June) and the 4B versions of Math Mini once development is complete. Share your experiences and help us build more accessible and useful AI for everyone. Your feedback is valuable for the continued improvement and development of these models.