Energy-efficient, cost-effective, and more secure models are set to rival their “one-size-fits-all” counterparts.
By Isabel Al-Dhahir, Principal Analyst at GlobalData
As the initial hype surrounding generative AI (GenAI) continues to mellow, the market impact of small language models (SLMs) is set to soar. Benefitting from faster training times, a lower carbon footprint, and improved security, SLMs could prove more attractive to enterprises than the large language models (LLMs) that have thus far dominated headlines. This is according to GlobalData, a leading provider of AI-powered market intelligence.
Isabel Al-Dhahir, Principal Analyst at GlobalData, said: “SLMs come at a time when GenAI is becoming more mature. There is increasing pressure on competitors in this market to demonstrate tangible use cases for the technology. GenAI providers such as Microsoft, Meta, and Google have all recently released SLMs.”
Below, Al-Dhahir explains exactly why SLMs are set to dominate in 2025.
1. SLMs are easier to adopt and more energy-efficient to train and deploy
Al-Dhahir: “SLMs use smaller and more focused datasets, which means that training can be done in weeks, depending on the use case, in contrast to the several months required for LLMs. SLMs typically have fewer than 10 billion parameters, compared with up to a trillion for larger models. The use of focused datasets makes SLMs particularly well-suited to domain-specific functions and small-scale deployments such as mobile applications, edge computing, and environments with limited compute resources. As training techniques improve, SLMs with fewer parameters are becoming more accurate and can offer much faster processing times. SLMs are also more resilient to cyber-attacks than larger models because their smaller datasets present a smaller attack surface. This also simplifies the security process: with less data to protect, it is easier to identify and address vulnerabilities.”
2. SLMs are cheaper and greener than their larger counterparts
Al-Dhahir: “Much has been said about the environmental impact of some of the larger, one-size-fits-all models that have garnered the most headlines in recent years. SLMs, on the other hand, are less expensive and less energy-intensive to run: they use far less computing power than an LLM and do not require massive investment in expensive infrastructure. With many organizations keeping a watchful eye on their carbon footprint, SLMs' reduced size and lower energy consumption make them a lower-emission option for enterprises.”
3. SLMs meet regulatory requirements
Al-Dhahir: “Where LLMs have previously prompted concerns around copyright and the reliability of data, SLMs present fewer legal risks regarding data handling and copyright because it is easier to obtain licenses for training material. They also typically fall below the compute thresholds that trigger the most stringent regulatory obligations, reducing compliance costs. SLMs are safer because they can be operated locally, reducing the risk of data breaches and addressing the critical issue of data privacy. Because SLMs can be deployed on-site, institutions retain greater control over data usage, dataset ownership, and data security protocols.”
4. The AI market is still booming
Al-Dhahir: “SLMs are not meant to replace LLMs but rather complement them. There is still a lot of appetite for the capabilities of generative AI, and many organizations are still getting to grips with where it can serve them best. As competition in the AI market intensifies, companies are under increasing pressure to show a strong business case with demonstrable ROI. SLMs, with their suitability for industry-specific applications, offer easier scalability across diverse environments.”
The future of mass market SLMs
Al-Dhahir: “GenAI providers are turning their attention to SLMs. Players such as Microsoft, Meta, and Google have all recently released their own models. Microsoft, for example, announced the Phi-3 family of small language models, which can be used to create content for marketing or sales teams, such as product descriptions or social media posts. The models can also be used to develop customer support chatbots, presenting a strong enterprise use case. Elsewhere, Mistral has published one of its models under the Apache 2.0 license, an open-source license that gives users broad freedom to use, modify, and distribute the software. This makes it simple for software developers to tailor the model to specific enterprise use cases.”