Cerebras Systems introduces new AI development tool to challenge Nvidia dominance


(MENAFN) Startup Cerebras Systems has unveiled a new tool for artificial intelligence developers that gives them access to its outsized chips, which it pitches as a more cost-effective alternative to Nvidia's widely used GPUs. Nvidia's graphics processing units (GPUs) have become the industry standard for training and deploying the large language models that underpin AI applications such as OpenAI's ChatGPT. However, accessing Nvidia GPUs, often via cloud computing services, can be both difficult and costly, especially for developers working on inference tasks: the process of running trained AI models for real-world use.

Cerebras aims to address this challenge by offering what it claims is superior performance at a significantly lower cost. "We deliver performance that is not possible with GPUs. At the highest resolution, at the lowest price," said Andrew Feldman, CEO of Cerebras.

The inference market in artificial intelligence is projected to grow rapidly, potentially reaching a valuation in the tens of billions of dollars as both consumers and businesses increasingly adopt AI tools. Recognizing this opportunity, Cerebras, headquartered in Sunnyvale, California, plans to launch a range of inference products. These will be available through a developer key and the company's own cloud computing service, offering flexible options to suit different customer needs. For clients who prefer to manage their own data centers, Cerebras will also sell its AI systems directly.

The company's chips, known as "Wafer Scale Engines," are designed to overcome one of the significant bottlenecks in AI data analysis: traditional GPUs struggle to handle the massive datasets needed for large language models, requiring hundreds or thousands of interconnected chips. By contrast, Cerebras' chip design allows larger datasets to be processed more efficiently on a single chip.

Cerebras' Wafer Scale Engines enable faster processing speeds, a critical advantage in the competitive AI space. According to Feldman, the architecture of Cerebras' chips circumvents the limitations that traditional GPUs face, enabling more rapid and efficient data analysis. This positions Cerebras as a formidable competitor to Nvidia, especially in a market hungry for more affordable and accessible AI development tools. As demand for powerful AI solutions continues to rise, Cerebras' strategy of offering high-performance, cost-effective alternatives could reshape the landscape of AI hardware and influence how developers approach AI model deployment and scaling.


Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.