Stability AI, the company behind Stable Diffusion, has released StableLM, a chatbot designed to be “light” yet effective. Following the success of its open-source image-generation models, Stability AI intends to expand its suite of open-source AI tools to democratize artificial intelligence and open up public participation in the ongoing AI revolution. StableLM is a language model trained on a new experimental dataset, and it is capable of both writing code and holding conversations with users. Released to the public in the third week of April 2023, StableLM is making its mark on the growing list of ChatGPT alternatives and is currently available on the open-source Hugging Face platform. The firm’s Stable Diffusion models received widespread acclaim as potent competitors to other image-generation systems like Midjourney and DALL-E. Stability AI’s foray into the language model space signals a growing shift toward developing AI in the open and providing widespread access to advanced machine learning models.

StableLM arrives at a time when larger corporations are locked in stiff competition to develop the most popular LLMs for both public and corporate clients. With Microsoft integrating ChatGPT’s underlying technology into Bing and Google planning to fold Bard’s capabilities into its search engine, the development of language models has implications for the internet at large. While StableLM is not on par with far larger models such as GPT-4, the ability to tweak and build upon open-source technologies gives it immense potential.

Looking beyond Stable Diffusion: StableLM and Its Parameters

A hologram of a human brain and an icon titled “AI” projected between two robotic hands

StableLM is built to be a lean and efficient language model that can be scaled to fit multiple applications.

StableLM’s alpha model became available on the Hugging Face and GitHub platforms in two versions with 3 billion and 7 billion parameters, respectively. The model is built to support both commercial and research applications. Future versions will scale up to 15 billion and 65 billion parameters. While StableLM has fewer parameters than ChatGPT and its other competitors, its creators aim to improve its performance until it can compete with larger language models. By placing open language models and chatbots at the center of the artificial intelligence movement, Stability AI aims to draw contributions from all quarters into its existing model. The company made its name with its image-generation model, Stable Diffusion, whose outputs sent shockwaves through the artistic world and raised concerns about AI-generated content and the threats it poses to art. Stability AI has considerable experience in the field, having created algorithms for applications spanning audio, video, and simulated content.

StableLM is trained on a new experimental dataset built on The Pile, an open-source dataset assembled by EleutherAI. The training set contains over 1.5 trillion tokens and draws on data sourced from websites such as Wikipedia, PubMed, and YouTube. While no detailed technical report on the model’s complete functionality is available yet, the code is available to developers on both GitHub and Hugging Face, provided they credit Stability AI and indicate any changes made to it. The open code allows the model to be adapted to a range of purposes and fine-tuned with relative ease thanks to its comparatively small parameter count. Stability AI is drawing on its experience supporting earlier open-source models such as GPT-J, GPT-NeoX, and the Pythia suite, which were all trained on The Pile. Newer language models trained on the same dataset include Cerebras-GPT and Dolly 2.0. These developments demonstrate considerable demand, and a bright future, for open-source LLMs and generative AI.
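For developers experimenting with the openly released checkpoints, the tuned alpha chat models expect prompts wrapped in special system, user, and assistant tokens. The sketch below illustrates that prompt format as described in Stability AI’s StableLM repository; treat the exact token strings and system prompt wording as assumptions and verify them against the repository before relying on them.

```python
# Illustrative sketch only: the special tokens below follow the chat-style
# prompt format documented for the tuned StableLM alpha checkpoints.
# Verify the exact tokens against the StableLM GitHub repository.

SYSTEM_PROMPT = (
    "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and harmless open-source AI language model.\n"
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the special tokens the tuned model expects."""
    return f"{SYSTEM_PROMPT}<|USER|>{user_message}<|ASSISTANT|>"

print(build_prompt("Write a short poem about open-source AI."))
```

A string produced this way can then be passed to the tokenizer and model loaded from Hugging Face in the usual `transformers` workflow.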

How Does Stability AI’s Language Model Match Up?

A robot using a laptop keyboard

StableLM will have to be fine-tuned to further improve upon existing capabilities.

Though Stability AI built StableLM to be an efficient yet lightweight model, it still has some way to go before it becomes a potent ChatGPT alternative. Early interactions have shown that the chatbot version of StableLM hosted on Hugging Face can be slow to respond and sometimes provides answers unrelated to user queries. Because the model has far fewer parameters and was trained on fewer tokens than its larger counterparts, StableLM tends to extrapolate beyond its training data, leaving the chatbot prone to AI hallucination and bias; these shortcomings are a serious concern for its developers. However, such issues may be progressively fixed as analytics expertise, user contributions from the open-source community, and larger datasets are folded into the model. StableLM’s security features and guardrails will also require further work, since the model is currently only in its alpha release. As might be expected, the 7-billion-parameter model performs better on average than its smaller counterpart. Stability AI’s language models can already provide both text and code-based responses, opening the way for broader applications in future iterations.

As companies across the world race to build language models for various technical and commercial ends, Stability AI intends to provide a ChatGPT alternative that remains accessible to all users. Though StableLM has much room for improvement before it can truly compete with GPT-4, Bard, or alternative chatbots like Claude, the open-source AI movement will prove a potent force for technological equity. Transparency remains central to StableLM’s development, and its creators intend to make it as successful as Stable Diffusion by encouraging inspection and fixes from prospective developers.

StableLM, Stable Diffusion AI, and the Future of Open-Source LLMs

A vector depicting communication between a robot and a person

Open source artificial intelligence will be key to the proliferation of language model technologies.

Open-source AI technologies are important for the approaching technological shift because they provide much-needed access to up-and-coming experimental tools. Open-source software revolutionized the information age and paved the way for the ubiquitous role of computer programs and the internet in modern life. As artificial intelligence is scaled to meet ever more requirements, open-source alternatives are bound to challenge their proprietary counterparts in creating an AI-aware society. Amid calls to pause AI development, open-source ChatGPT alternatives such as StableLM, like the Stable Diffusion models before them, can ease fears and concerns surrounding transparency. By bringing everyday developers into the process, responsible AI becomes a collective effort, helping ensure that crucial fields like education and healthcare are safeguarded from the shortcomings of artificial intelligence. Students and career academics will need to watch up-and-coming open-source AI alternatives closely so they can actively participate in the ongoing AI revolution.