Leading Tech Firms Advance LLM Development with New Models and Research
Importance: 92/100
4 Sources
Why It Matters
These announcements showcase the rapid pace of innovation in large language models, with major players contributing cutting-edge open-source models, robust enterprise solutions, and foundational research to improve AI reasoning capabilities.
Key Intelligence
- Xiaomi open-sourced MiMo-V2.5-Pro, a 1.02-trillion-parameter Mixture-of-Experts (MoE) model, under an MIT license; it demonstrates frontier coding performance and 40-60% token efficiency per agent run, despite its high resource demands.
- IBM introduced its new Granite 4.1 family of large language models, signaling continued development for enterprise applications.
- Apple Machine Learning Research unveiled LaDiR, a novel technique that uses latent diffusion to enhance the text reasoning capabilities of large language models.
- Together, these advancements highlight ongoing innovation across open-source contributions, enterprise solutions, and fundamental research into AI reasoning.
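The MiMo-V2.5-Pro bullet above names a Mixture-of-Experts architecture, which is what lets a model with around a trillion total parameters activate only a small fraction of its weights per token. A minimal sketch of top-k expert routing, the core of that idea (toy NumPy code with hypothetical names, not the actual MiMo implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, experts, gate_w, k=2):
    """Toy MoE layer: route each token to its top-k experts only.

    Because only k of the experts run per token, total parameter count
    can grow far beyond the compute used on any single forward pass.
    """
    logits = x @ gate_w                          # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, topk[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over the selected experts
        for w, e in zip(weights, topk[t]):
            out[t] += w * experts[e](x[t])       # weighted sum of expert outputs
    return out

d, n_experts, tokens = 8, 4, 3
experts = [lambda v, W=rng.normal(size=(d, d)) / d: v @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=(tokens, d))
y = moe_forward(x, experts, gate_w, k=2)
print(y.shape)  # each token used only 2 of the 4 experts
```

With k=2 of 4 experts, each token touches half the expert parameters; production MoE models push this ratio much further, which is one plausible reading of the per-run token-efficiency claim above.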
Source Coverage
AIModels.fyi
4/29/2026 · Xiaomi just open-sourced a 1T-parameter model and almost nobody noticed
Huggingface Blog
4/29/2026 · Granite 4.1 LLMs: How They’re Built
Google News - Research
4/29/2026 · LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning - Apple Machine Learning Research
Google News - AI & Models
4/29/2026