Stability AI, a proponent of unrestricted AI image generation, gets a $1 billion valuation

Stability AI, the company behind the popular text-to-image artificial intelligence program Stable Diffusion, has raised new funding that values the company at around $1 billion, according to a report by Bloomberg citing a “person familiar with the matter.” It’s a significant validation of the company’s approach to AI development, which, unlike that of incumbents like OpenAI and Google, focuses on open-source models that anyone can use without supervision.

In a press release, Stability AI said it raised $101 million in a round led by Coatue, Lightspeed Venture Partners, and O’Shaughnessy Ventures, and that it will use the money to “accelerate the development of open AI models for images, language, audio, video, 3D and more, for consumer and business use cases globally.”

Anyone can build on Stability AI’s code or use it without moderation

Stable Diffusion is one of the leading examples of text-to-image AI, a category that includes models such as OpenAI’s DALL-E, Google’s Imagen, and Midjourney. However, Stability AI has differentiated its products by making its software open source. This means anyone can build on the company’s code or even use it to power their own commercial offerings.

Stability AI offers its own commercial version of the model, called DreamStudio, and says it expects to generate revenue by developing this underlying infrastructure and by customizing versions of the software for enterprise customers. The company is headquartered in London and has around 100 employees worldwide; it says it plans to expand to about 300 employees over the next year. The company also produces open-source versions of other large AI models, including a text-generation system similar to OpenAI’s GPT-3.

Coatue investor Sri Viswanath, who joins the Stability AI board as part of the deal, said it was this open-source approach that set Stability AI apart from its rivals. “Stability AI’s commitment to open source is fundamental: by providing a wider audience with the tools to create and innovate, open source will activate the momentum behind AI capabilities,” Viswanath told Bloomberg.

However, the open-source nature of Stability AI’s software also makes it easy for users to create potentially harmful images, from non-consensual nudes to propaganda and disinformation. Other developers like OpenAI have taken a much more cautious approach to this technology, incorporating filters and tracking how people use their products. Stability AI’s ideology, by comparison, is much more libertarian.

“Ultimately, it’s peoples’ own responsibility as to whether they use this technology in ethical, moral, and legal ways,” company founder Emad Mostaque told The Verge in September. “The bad things people create with it […] I think it will be a very, very small percentage of the total usage.”

In addition to malicious applications, there are open questions about the legal issues inherent in text-to-image models. All of these systems are trained on data scraped from the web, including copyrighted content, from artists’ blogs and websites to images from stock photography sites. Some individuals whose work was used without permission to train these systems have said they are interested in pursuing legal action or compensation. These problems are likely to become even more acute as companies like Stability AI demonstrate their ability to turn the work of others into profit.
