(MENAFN- Hill & Knowlton) Dubai, United Arab Emirates, 12 December: At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), announced four new innovations for Amazon SageMaker AI to help customers get started faster with popular publicly available models, maximize training efficiency, lower costs, and use their preferred tools to accelerate generative artificial intelligence (AI) model development. Amazon SageMaker AI is an end-to-end service used by hundreds of thousands of customers to help build, train, and deploy AI models for any use case with fully managed infrastructure, tools, and workflows.
• Three powerful new additions to Amazon SageMaker HyperPod make it easier for customers to quickly get started with training some of today’s most popular publicly available models, save weeks of model training time with flexible training plans, and maximize compute resource utilization to reduce costs by up to 40%.
• SageMaker customers can now easily and securely discover, deploy, and use fully managed generative AI and machine learning (ML) development applications from AWS partners, such as Comet, Deepchecks, Fiddler AI, and Lakera, directly in SageMaker, giving them the flexibility to choose the tools that work best for them.
• Articul8, Commonwealth Bank of Australia, Fidelity, Hippocratic AI, Luma AI, NatWest, NinjaTech AI, OpenBabylon, Perplexity, Ping Identity, Salesforce, and Thomson Reuters are among the customers using new SageMaker capabilities to accelerate generative AI model development.
“AWS launched Amazon SageMaker seven years ago to simplify the process of building, training, and deploying AI models, so organizations of all sizes could access and scale their use of AI and ML,” said Dr. Baskar Sridharan, vice president of AI/ML Services and Infrastructure at AWS. “With the rise of generative AI, SageMaker continues to innovate at a rapid pace and has already launched more than 140 capabilities since 2023 to help customers like Intuit, Perplexity, and Rocket Mortgage build foundation models faster. With today’s announcements, we’re offering customers the most performant and cost-efficient model development infrastructure possible to help them accelerate the pace at which they deploy generative AI workloads into production.”
SageMaker HyperPod: The infrastructure of choice to train generative AI models
With the advent of generative AI, the process of building, training, and deploying ML models has become significantly more difficult, requiring deep AI expertise, access to massive amounts of data, and the creation and management of large clusters of compute. This is why AWS created SageMaker HyperPod, which helps customers efficiently scale generative AI model development across thousands of AI accelerators, reducing time to train foundation models by up to 40%. Leading startups such as Writer, Luma AI, and Perplexity, and large enterprises such as Thomson Reuters and Salesforce, are accelerating model development thanks to SageMaker HyperPod.
Now, even more organizations want to fine-tune popular publicly available models or train their own specialized models to transform their businesses and applications with generative AI. That is why SageMaker HyperPod continues to evolve, making it easier, faster, and more cost-efficient for customers to build, train, and deploy these models at scale with new capabilities, including:
• New recipes help customers get started faster: Many customers want to take advantage of popular publicly available models, like Llama and Mistral, that can be customized to a specific use case using their organization’s data. To help customers get started in minutes, SageMaker HyperPod now provides access to more than 30 curated model training recipes for some of today’s most popular publicly available models, including Llama 3.2 90B, Llama 3.1 405B, and Mixtral 8x22B. These recipes greatly simplify the process of getting started for customers, automatically loading training datasets, applying distributed training techniques, and configuring the system for efficient checkpointing and recovery from infrastructure failures.
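The checkpoint-and-recover behavior the recipes automate can be illustrated with a minimal, self-contained sketch. This is not the recipe implementation; the loop, checkpoint format, and function names (`train`, `save_checkpoint`) are illustrative assumptions, showing only the general pattern: persist state periodically, and on restart resume from the last checkpoint rather than from scratch.

```python
import json
import os
import tempfile

# Illustrative checkpoint location; real training jobs would use durable storage.
CKPT = os.path.join(tempfile.gettempdir(), "demo_ckpt.json")

def save_checkpoint(step, state):
    # Write to a temp file and rename, so a crash mid-write
    # cannot leave a corrupted checkpoint behind.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    # Resume from the last saved checkpoint if one exists.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, 0.0  # fresh run

def train(total_steps=10, ckpt_every=2, fail_at=None):
    step, state = load_checkpoint()  # recover if a previous run died
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated infrastructure failure")
        state += 1.0  # stand-in for one optimizer step
        step += 1
        if step % ckpt_every == 0:
            save_checkpoint(step, state)
    return step, state
```

If a run fails partway through, a relaunched run picks up at the most recent checkpoint, losing at most `ckpt_every` steps of work instead of the whole job.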
• Flexible training plans make it easy to meet training timelines and budgets: While infrastructure innovations help drive down costs and allow customers to train models more efficiently, customers must still plan and manage the compute capacity required to complete their training tasks on time and within budget. That is why AWS is launching flexible training plans for SageMaker HyperPod. In a few clicks, customers can specify their desired completion date and the maximum amount of compute resources they need. SageMaker HyperPod then automatically reserves capacity, sets up clusters, and creates model training jobs, saving teams weeks of model training time.
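The capacity-planning question behind a training plan, how many accelerated instances are needed to hit a completion date, reduces to simple arithmetic. A back-of-envelope sketch, where the token budget and per-instance throughput figures are made-up assumptions rather than AWS numbers:

```python
import math

def instances_needed(total_tokens, tokens_per_sec_per_instance, deadline_hours):
    """Minimum number of instances to finish a fixed token budget by a deadline.

    Assumes throughput scales linearly with instance count, which real
    distributed training only approximates.
    """
    required_throughput = total_tokens / (deadline_hours * 3600)
    return math.ceil(required_throughput / tokens_per_sec_per_instance)

# Hypothetical example: 1 trillion tokens, 50,000 tokens/sec per instance,
# two-week (336-hour) deadline.
n = instances_needed(1e12, 50_000, 336)
```

A service-managed plan effectively runs this calculation in reverse as well: given a deadline and a resource cap, it reserves the capacity and schedules the jobs so the deadline is met.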
• Task governance maximizes accelerator utilization: Increasingly, organizations are provisioning large amounts of accelerated compute capacity for model training. These resources are expensive and limited, so customers need a way to govern usage and ensure compute is prioritized for the most critical model development tasks, avoiding waste and underutilization. Without proper controls over task prioritization and resource allocation, some projects end up stalling due to lack of resources, while others leave resources underutilized.
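The core idea of priority-based task governance can be sketched as a toy admission policy: high-priority work may preempt lower-priority work when accelerators run short, and never the reverse. The policy details below (strict priorities, lowest-priority-first preemption) are illustrative assumptions, not SageMaker HyperPod's actual scheduler:

```python
class Cluster:
    """Toy governor for a fixed pool of accelerators."""

    def __init__(self, accelerators):
        self.free = accelerators
        self.running = []  # tuples of (priority, name, accelerators_used)

    def submit(self, name, need, priority):
        # Preempt the lowest-priority running tasks until the request fits,
        # but never preempt work of equal or higher priority.
        while self.free < need:
            victims = [t for t in self.running if t[0] < priority]
            if not victims:
                return False  # cannot admit without violating priorities
            victim = min(victims)  # lowest-priority task goes first
            self.running.remove(victim)
            self.free += victim[2]
        self.running.append((priority, name, need))
        self.free -= need
        return True
```

For example, on an 8-accelerator pool, a low-priority experiment using 6 accelerators would be preempted to admit a high-priority production job needing 4, while a second low-priority request that does not fit is simply rejected, keeping critical work first in line without idling the pool.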
Accelerate model development and deployment using popular AI apps from AWS Partners within SageMaker
Many customers use best-in-class generative AI and ML model development tools alongside SageMaker AI to conduct specialized tasks, like tracking and managing experiments, evaluating model quality, monitoring performance, and securing an AI application. However, integrating popular AI applications into a team’s workflow is a time-consuming, multi-step process. This includes searching for the right solution, performing security and compliance evaluations, monitoring data access across multiple tools, provisioning and managing the necessary infrastructure, building data integrations, and verifying adherence to governance requirements. Now, AWS is making it easier for customers to combine the power of specialized AI apps with the managed capabilities and security of Amazon SageMaker.
SageMaker is the first service to offer a curated set of fully managed and secure partner applications for a range of generative AI and ML development tasks. This gives customers even greater flexibility and control when building, training, and deploying models, while reducing the time to onboard AI apps from months to weeks. To get started, customers simply browse the Amazon SageMaker Partner AI apps catalog, learning about the features, user experience, and pricing of the apps they want to use. They can then easily select and deploy the applications, managing access for the entire team using AWS Identity and Access Management (IAM).