IBM has introduced the Granite 4.0 family of small language models. The models aim to deliver speed and efficiency at lower operational cost than larger models, without sacrificing accuracy. A standout feature of Granite 4.0 is its hybrid Mamba/transformer architecture, which dramatically lowers memory requirements and allows the models to run on more affordable GPUs.
As IBM explains:
LLMs’ GPU memory requirements are often reported in terms of how much RAM is needed just to load up model weights. But many enterprise use cases—especially those involving large-scale deployment, agentic AI in complex environments, or RAG systems—entail lengthy context, batch inferencing of several concurrent model instances at once, or both.
IBM asserts that Granite can lead to over a 70% reduction in RAM usage, making it suitable for handling long inputs as well as multiple concurrent batch processes. Even with increased context length or batch size, inference speed is expected to remain impressive. Furthermore, Granite is said to maintain competitive accuracy against larger models, especially excelling in benchmarks that involve instruction following and function calling.
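The memory pressure IBM describes can be made concrete with a back-of-the-envelope KV-cache estimate. The sketch below is illustrative only: the layer count, KV-head count, and head dimension are placeholder values for a generic transformer, not Granite's actual configuration.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_el=2):
    """Memory for a transformer's KV cache: two tensors (K and V) per layer,
    each of shape (batch, seq_len, n_kv_heads, head_dim), at bytes_per_el
    bytes per element (2 for fp16/bf16)."""
    return 2 * n_layers * batch * seq_len * n_kv_heads * head_dim * bytes_per_el

# Hypothetical 32-layer model with 8 KV heads of dimension 128, serving a
# batch of 8 requests at a 128K-token context:
cache_gib = kv_cache_bytes(32, 8, 128, seq_len=128_000, batch=8) / 2**30
print(f"{cache_gib:.0f} GiB of KV cache")  # dwarfs the weights of a small model
```

Mamba layers carry a fixed-size recurrent state instead of a per-token cache that grows with context length and batch size, which is where savings of this kind come from.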
The enhancement in performance stems from Granite’s unique architecture which merges a limited number of traditional transformer-style attention layers with a majority of Mamba layers, specifically Mamba-2. This structure sets Granite apart by achieving linear scaling regarding context length for Mamba components, contrasting with the quadratic scaling seen in conventional transformers. The inclusion of transformer attention layers ensures that local contextual dependencies are preserved, crucial for applications requiring in-context learning or few-shot prompting.
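The scaling argument can be sketched with toy cost functions. These are rough asymptotic illustrations, not Granite's actual cost model; the dimensions are arbitrary.

```python
def attn_cost(seq_len, d_model):
    # Self-attention forms a seq_len x seq_len score matrix:
    # cost grows quadratically with context length.
    return seq_len ** 2 * d_model

def ssm_cost(seq_len, d_state, d_model):
    # A Mamba-style selective scan updates a fixed-size state per token:
    # cost grows linearly with context length.
    return seq_len * d_state * d_model

# Doubling the context doubles the SSM cost but quadruples the attention cost.
ratio_attn = attn_cost(8192, 4096) / attn_cost(4096, 4096)       # 4.0
ratio_ssm = ssm_cost(8192, 16, 4096) / ssm_cost(4096, 16, 4096)  # 2.0
```

Keeping only a few attention layers means the quadratic term applies to a small fraction of the network, while the bulk of the layers scale linearly.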
Granite is designed as a mixture-of-experts system, allowing only a subset of the model’s weights to be engaged during any forward pass. This approach additionally contributes to lowering the overall inference costs, making Granite an attractive option for various applications.
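The mixture-of-experts idea can be sketched in a few lines. This is a generic top-k router, not IBM's implementation; the expert count and k value are arbitrary.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_top_k(router_logits, k=2):
    """Pick the k highest-scoring experts for this token; only their
    weights participate in the forward pass."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]  # (expert index, renormalized gate)

# With, say, 8 experts and k=2, only a fraction of the parameters are active.
gates = route_top_k([0.1, 2.0, -1.0, 1.5], k=2)
```

Because only k experts run per token, the active parameter count (9 billion of 32 billion for the Small variant, per the figures below) governs per-token compute, while the total parameter count governs storage.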
The Granite 4.0 family comprises several model variants, among them Micro, Tiny, and Small, each addressing different industry needs. The Micro model, for instance, has 3 billion parameters and targets high-volume, low-complexity tasks where speed and cost efficiency matter most, such as RAG (Retrieval-Augmented Generation), summarization, text extraction, and classification.
At the other end of the spectrum, the Small variant has 32 billion total parameters (9 billion active) and targets enterprise workloads that call for robust performance without the hefty costs associated with frontier models, making it suitable for applications ranging from multi-tool agents to customer support automation. Lastly, the Granite Nano model, at just 0.3 billion parameters, is designed for edge devices with limited connectivity and computational resources.
Recent empirical research into Mamba-based language models emphasizes the potential advantages of Mamba-2 hybrid architectures over traditional transformers. The findings reveal:
Our primary goal is to provide a rigorous apples-to-apples comparison between Mamba, Mamba-2, Mamba-2-Hybrid (containing Mamba-2, attention, and MLP layers), and Transformers for 8B-parameter models trained on up to 3.5T tokens, with the same hyperparameters… Our results show that while pure SSM-based models match or exceed Transformers on many tasks, both Mamba and Mamba-2 models lag behind Transformer models on tasks that require strong copying or in-context learning abilities (e.g., five-shot MMLU, Phonebook Lookup) or long-context reasoning. In contrast, we find that the 8B-parameter Mamba-2-Hybrid exceeds the 8B-parameter Transformer on all 12 standard tasks we evaluated (+2.65 points on average) and is predicted to be up to 8× faster when generating tokens at inference time.
In a bid to promote accessibility and community development, IBM has open-sourced the Granite 4.0 models under the Apache 2.0 license. This is a departure from Meta's Llama licensing, which has raised questions within the open-source community about its true openness. Notably, the Llama 4 Community License Agreement stipulates that rights do not extend to individuals or companies based in the EU.
For those looking to experiment with Granite, the models are readily available on platforms like Hugging Face and watsonx.ai. An interactive online playground provides users with the opportunity to try out the models firsthand. IBM also offers comprehensive cookbooks aimed at fine-tuning Granite models, along with a practical example demonstrating Granite’s application in contract analysis through Google Colab.
Furthermore, IBM has received accredited certification under the ISO/IEC 42001:2023 standard for the AI management system (AIMS) covering Granite. The standard addresses the ethical, transparency, and continuous-learning challenges that AI systems present, and requires a structured approach to managing their risks and opportunities.

