Small Models, Big Wins

Plus: Miro’s AI playbook & Samsung’s Fold 7 leak in today’s quick hits.

Here’s what’s on our plate:

  • 🛠️ How domain-tuned SLMs are out-punching giant LLMs.

  • ⚡ Miro’s “team intelligence” reveal, and Samsung’s Fold 7 specs leak.

  • ❓ Would you trade a 100-B-parameter model for a lean SLM?

  • 😂 Buff Doge SLM vs. Crying Cheems LLM—smaller model, bigger flex.

Let’s dive in. No floaties needed…

Guidde—Create how-to video guides fast and easy with AI.

Tired of explaining the same thing over and over again to your colleagues? It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, a voiceover, and a call to action.

*This is sponsored content

The Laboratory

Why are small language models winning over enterprises?

So far, the popular notion about artificial intelligence models has been that bigger is better. That belief has shaped the modern AI industry: large language models (LLMs) form its backbone, and companies are pouring billions of dollars into developing, training, and running them. Yet the challenges of building LLMs, from energy costs and dataset management to training and deployment, have many wondering whether they are always the right approach.

For enterprises seeking cost-effective, easily adoptable AI, the answer may lie in small language models (SLMs). SLMs are gaining significant traction, especially in enterprise and regional markets, because they offer a practical, cost-efficient alternative to LLMs. So what exactly are SLMs, how do they differ from LLMs, and could they be the alternative enterprises have been waiting for? Let’s take a closer look.

Understanding Small Language Models (SLMs)

Despite what the name suggests, SLMs are not simply scaled-down versions of the LLMs that power systems like GPT-4 and Gemini. Rather, they are models fine-tuned for specific industries, tasks, and operations to improve workflows.

LLMs are trained and optimized for a broad range of general-purpose tasks, and their massive parameter counts make them difficult to fine-tune. In contrast, SLMs such as DistilBERT, Gemma, Phi, GPT-4o mini, and Mistral are built with precision and efficiency in mind: they require less computational power, cost significantly less to run, and can deliver more business-relevant insights than their larger counterparts.

According to Jahan Ali, founder and CEO of MobileLive, “They are optimized to excel in specific domains, whether it’s finance, healthcare, or software development. This allows them to deliver more accurate, reliable results tailored to the unique needs of an organization.” And it is this specificity in their training data that has made SLMs so alluring to enterprises.

Advantages of Small Language Models

SLMs are often trained with techniques like knowledge distillation, in which a smaller “student” model learns by mimicking a larger “teacher,” and are then fine-tuned on domain-specific datasets. Their parameter counts typically range from a few million to several billion, whereas LLMs run to hundreds of billions or even trillions of parameters. Because they are lightweight, fast, and highly specialized, SLMs are well suited to agentic AI, in which systems operate autonomously and make real-time decisions based on incoming data; a sketch of the distillation idea follows below.
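
For a concrete picture of the distillation step, here is a minimal PyTorch sketch. The teacher and student models, the batch format, and the loss weighting are placeholders invented for illustration, not details from any of the companies mentioned here.

```python
# Minimal knowledge-distillation sketch (PyTorch assumed).
# Teacher/student models, the data loader, and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that pushes the
    student's softened outputs toward the teacher's."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

def train_step(student, teacher, batch, optimizer):
    """One distillation step: the large teacher stays frozen, the small student learns."""
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The idea is simply that the small student matches the teacher’s softened probability distribution while still learning from the hard labels; the distilled student can then be fine-tuned on domain data as described above.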

“SLMs align with the broader trend of Agentic AI by allowing autonomous decision-making at the edge. In a smart factory, for example, an AI agent could use an SLM to proactively detect equipment failures, adjust machine settings, or schedule maintenance — all without human intervention,” Shahid Ahmed, global EVP at NTT New Ventures and Innovation, told Forbes.

Another appealing aspect of SLMs for enterprises is their lower computational demands. They run on significantly less powerful hardware than a typical LLM, and some can run on edge devices such as laptops, robots, and mobile phones, which makes them ideal for localized systems. Their smaller size also reduces the chance of hallucinations or incorrect responses and cuts delays when processing requests. The sketch below gives a sense of how little code local inference with a compact model takes.
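
For a sense of how lightweight local inference can be, here is a minimal sketch using the Hugging Face transformers pipeline; the specific checkpoint (a small Phi-3 variant) and the prompt are illustrative assumptions rather than recommendations from the article.

```python
# Minimal sketch: running a compact model locally with the Hugging Face
# transformers library. The checkpoint name and prompt are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # a few billion parameters; fits on a laptop
    device_map="auto",                         # CPU or a single consumer GPU
)

prompt = "Summarize the maintenance log for turbine 4 in two sentences."
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])
```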

However, their specialized nature also limits their versatility outside the domains they were built for.

Limitations and challenges of SLMs

Small language models provide numerous advantages over their larger counterparts; however, they come with limitations of their own.

Because SLMs are designed for specific domains or tasks, they lack the broad capabilities of LLMs across varied topics, and their smaller capacity restricts their ability to capture complex contextual dependencies and nuanced language patterns.

Another challenge is that an SLM’s effectiveness depends on the quality of the data used to train it. Customizing one to meet more specific enterprise requirements therefore requires not just clean, domain-specific datasets but also specialized expertise in data science and machine learning (see the sketch below for a rough sense of that workflow). SLMs can also be difficult to scale across large deployments.
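
To give a rough idea of what that customization workflow looks like, here is a hedged sketch that fine-tunes DistilBERT on a hypothetical domain dataset using the Hugging Face Trainer; the CSV file names, column names, and label count are invented for the example.

```python
# Sketch: fine-tuning a small model on domain-specific data with Hugging Face.
# File names, the "text"/"label" columns, and num_labels are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

# Assumes CSVs with a "text" column and an integer "label" column.
dataset = load_dataset("csv", data_files={"train": "claims_train.csv",
                                          "test": "claims_test.csv"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-claims", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
```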

However, these limitations have not stopped enterprises from placing their bets on SLMs.

SLMs could be better for businesses

While big tech companies continue to pour billions into training their large language models, researchers have been distilling smaller models that many believe make better sense from a return-on-investment perspective.

Small language models have also been making headlines in India’s startup ecosystem. Ankush Sabharwal, founder and CEO of the conversational AI platform CoRover, told Financial Express: “They [SLMs] can run directly on devices without needing cloud servers, protect sensitive data, and lower both cost and energy usage.”

CoRover launched BharatGPT Mini, a compact SLM, in December 2024. Within five months, a majority of the platform’s 18,000 users, including enterprises, researchers, and developers, had adopted the model. The company now expects a 60–70% increase in project volume over the next year, with a target of crossing one million implementations. Sabharwal notes that SLMs can provide reliable performance even in offline mode and offer a practical, cost-effective alternative to cloud-dependent LLMs.

Beyond India, companies including Apple, Microsoft, Meta, and Google have all released SLMs to encourage AI adoption among businesses concerned about the costs and computing power needed to run large language models. Meta’s Llama series of small language models has seen widespread adoption across various industries.

Companies such as Goldman Sachs, AT&T, and Nomura Holdings reportedly use these models for tasks such as customer service, document review, and code generation.

So, while big tech is placing its bets on LLMs, the economics of AI development appear to be shifting in favor of SLMs as more and more enterprises are looking to achieve better business outcomes with smaller, cheaper models trained for their specific needs.

Balancing business value and technological advances

Recently, Meta hired four more OpenAI artificial intelligence researchers to push forward with its goal of “Superintelligence”. The move is aimed at fast-tracking work on artificial general intelligence, machines that can outthink humans, and at creating new cash flows from the Meta AI app, image-to-video ad tools, and smart glasses.

So, while big tech continues to chase LLMs, those models may not always be the most viable solution for enterprises seeking fast deployment, lower costs, and domain-specific performance.

For many businesses, especially in emerging markets or highly regulated industries, SLMs offer a balanced path forward, delivering much of the value of LLMs at a fraction of the cost, energy use, and complexity.

This is not merely a technological divergence, but a shift in mindset. Companies no longer need to chase the largest or most powerful models; instead, they can invest in smarter, leaner AI systems tailored to their specific operations.

From offline functionality to edge deployment and greater data privacy, the appeal of SLMs lies in their accessibility and relevance to real-world tasks. At the same time, the limitations of SLMs (narrower scope, dependence on quality data, and scaling challenges) must be acknowledged.

But rather than replacing LLMs, SLMs are emerging as a complementary solution, carving out a unique niche in the AI stack. As enterprises increasingly value ROI and customizability over brute force, SLMs are well positioned to define the next chapter of applied AI. The future of AI, then, may not just be bigger; it may also be smaller, smarter, and more targeted.

Build your store. Run your world.

Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.

Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!

Plus, you’ll have 24/7 support and unlimited storage as your business takes off.

*This is sponsored content

Quick Hits, No Fluff

Quick Trivia

❓ Which of the following is NOT an advantage enterprises usually cite for adopting Small Language Models (SLMs) over Large Language Models (LLMs)?


Meme of the Day

Rate this edition

What did you think of today's email?
