Zoho Corporation to Leverage NVIDIA NeMo to Build LLMs


Dubai, October 29, 2024: Zoho Corporation, a global technology company headquartered in Chennai, announced today that it will leverage the NVIDIA AI accelerated computing platform, which includes NVIDIA NeMo, part of NVIDIA AI Enterprise software, to build and deploy large language models (LLMs) in its SaaS applications. Once built and deployed, the LLMs will be available to Zoho Corporation's 700,000+ customers across ManageEngine and Zoho globally. Over the past year, the company has invested more than USD 10 million in NVIDIA's AI technology and GPUs, and it plans to invest an additional USD 10 million in the coming year. The announcement was made during the NVIDIA AI Summit in Mumbai.

Ramprakash Ramamoorthy, Director of AI at Zoho Corporation, commented, “Many LLMs on the market today are designed for consumer use, offering limited value for businesses. At Zoho, our mission is to develop LLMs tailored specifically for a wide range of business use cases. Owning our entire tech stack, with products spanning various business functions, allows us to integrate the essential element that makes AI truly effective: context.”

Zoho prioritizes user privacy from the outset, building models that comply with privacy regulations from the ground up rather than retrofitting compliance later. Its goal is to help businesses realize ROI quickly and effectively by leveraging the full stack of NVIDIA AI software and accelerated computing to increase throughput and reduce latency.

Zoho has been building its own AI technology for over a decade, adding it contextually across its portfolio of more than 100 products spanning its ManageEngine and Zoho divisions. Its approach to AI is multimodal, geared towards deriving contextual intelligence that helps users make business decisions. Alongside LLMs, the company is building narrow, small, and medium language models, giving it a choice of model sizes to deliver better results across a variety of use cases. Relying on multiple models also means that businesses without large amounts of data can still benefit from AI. Privacy remains a core tenet of Zoho's AI strategy, and its LLMs will not be trained on customer data.

“The ability to choose from a range of AI model sizes empowers businesses to tailor their AI solutions precisely to their needs, balancing performance with cost-effectiveness,” said Vishal Dhupar, Managing Director, Asia South at NVIDIA. “With NVIDIA’s AI software and accelerated computing platform, Zoho is building a broad range of models to help serve the diverse needs of its business customers.”

Through this collaboration, Zoho will accelerate its LLMs on the NVIDIA accelerated computing platform with NVIDIA Hopper GPUs, using the NVIDIA NeMo end-to-end platform to develop custom generative AI, including LLMs and multimodal, vision, and speech AI. Additionally, Zoho is testing NVIDIA TensorRT-LLM to optimize its LLMs for deployment and has already seen a 60% increase in throughput and a 35% reduction in latency compared with a previously used open-source framework. The company is also accelerating other workloads, such as speech-to-text, on NVIDIA accelerated computing infrastructure.
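For readers curious what serving a model through TensorRT-LLM can look like, the minimal sketch below uses the library's high-level Python LLM API for batched inference. It is illustrative only and not Zoho's code; the model identifier, prompts, and sampling settings are placeholder assumptions.

```python
# Illustrative sketch only -- not Zoho's implementation.
# Assumes TensorRT-LLM's high-level Python LLM API; the model id,
# prompts, and sampling settings below are placeholders.
from tensorrt_llm import LLM, SamplingParams

# Load a model (Hugging Face id or local checkpoint); TensorRT-LLM
# compiles an optimized engine for the target GPU under the hood.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

prompts = [
    "Summarize the following support ticket in one sentence: ...",
    "Draft a polite follow-up email for an overdue invoice.",
]
params = SamplingParams(temperature=0.2, max_tokens=256)

# Batched generation: the runtime schedules requests together, which is
# where throughput and latency improvements over an unoptimized
# serving stack typically come from.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

Gains of the kind reported here generally stem from engine-level optimizations, such as in-flight batching and fused kernels, that TensorRT-LLM applies when building the inference engine for a given GPU.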



