Tuesday, 02 January 2024 12:17 GMT

'Embrace Everything AI' Is The Blueprint For Enterprises In 2025


(MENAFN- Khaleej Times)

Generative AI shifts from experimentation to implementation

Adoption rates of generative AI models among individuals and enterprises will continue to skyrocket, with more applications to be realised in 2025. According to a recent McKinsey Global Survey, 71% of respondents reported that their organisations were regularly using generative AI, up from 65% a year prior. That's no surprise given the promise of improved efficiencies, reduced operational costs, and significant gains in productivity that AI offers enterprises.

In the year ahead, we anticipate that number will climb even faster, as enterprises look to integrate AI into their core operations, enabling them to automate resource-heavy, time-consuming tasks such as data analysis, customer service, and content creation. However, organisations will need to navigate the many challenges associated with adopting large-scale AI models, including cybersecurity, compliance, and increased, energy-intensive computational demand.

The AI reality is hybrid

This shift towards AI integration at scale will bring to the forefront the urgent need for AI-optimised IT infrastructure and data management. To make the most out of their AI, which is an inherently data-intensive and hybrid workload, more organisations will need to invest in native AI systems that optimise everything across the AI lifecycle, regardless of whether the workload runs on-premises, in a colocation facility, in the public cloud, or at the edge.

A hybrid cloud strategy will no longer be just an option but the prevailing operating model of choice, because it is ideal for unlocking the value of organisational data and accelerating AI deployment. A hybrid-by-design operating model, rather than a hybrid-by-accident one in which hybrid planning is an afterthought, will be key to success.

Furthermore, robust and efficient hybrid cloud infrastructure that has been designed for AI will enable organisations to have better data visibility, enhanced control and protection, and streamlined data management across environments. This also helps them mitigate unplanned costs caused by unexpected challenges around operational complexities, security risks and inefficient use of resources.

Increasing investment into dedicated AI infrastructure

Over the past few months, we've seen various countries, including the UAE, announce substantial first investments in IT infrastructure geared towards AI, and specifically its compute needs. This trend will only grow as more use cases and applications are developed. According to research, global power demand from data centres is projected to increase by 50% by 2027 and by as much as 165% by 2030. Traditional infrastructure will struggle to keep pace, forcing a shift towards high-density compute solutions. As companies navigate the future of data-driven innovation, they will require reliable and robust data centres to handle the intense compute demands of AI.

However, these organisations will learn very quickly that they will have to look beyond data centres and invest in robust IT infrastructure across their business to fully leverage AI. As AI adoption reshapes how organisations manage and access data, many will have to reassess their existing infrastructure to ensure their IT environment can handle AI workloads seamlessly. This might entail unifying multi-generational IT environments and may necessitate a move to a hybrid-by-design approach, including AI-specific hardware, software, and data management solutions.

AI projects won't deliver value if an organisation cannot access the relevant data to train and run its models. Moreover, the enormous volumes of data required to operate AI models must be stored and transmitted quickly and efficiently, pushing organisations to modernise their networking and storage infrastructure.

As a result, we will see greater investment in AI infrastructure, particularly across the Middle East, as the region continues to embrace digital transformation.

AI sustainability that goes beyond equipment efficiency

Improving AI sustainability is a practical imperative that is increasingly motivating industry, policymakers, and end users to act swiftly. The reasons are practical rather than sentimental: improving AI efficiency is in everyone's interest, as consuming less energy and water reduces both operating costs and environmental impact. In line with this, we will see more organisations opting for sustainable and efficient IT solutions, especially as they look to scale AI. This includes advancements in chip performance and cooling architectures, such as 100% direct liquid cooling, which can reduce power consumption by 90%.

However, more and more organisations understand that this is not a one-and-done step in an IT sustainability exercise. It must occur at every stage of the AI lifecycle, from data selection to model design, training, tuning and inferencing, through to the equipment's end of life. Although much attention has understandably been given to advancements in AI infrastructure and equipment efficiency, these organisations look at the whole AI lifecycle to improve software, data, energy, and resource efficiency.

These organisations are implementing the measurement and monitoring tools necessary to track detailed performance metrics. Surveys reveal that only 44% of enterprises actually monitor AI-related energy use, leaving environmental savings on the table. For example, by coupling granular monitoring with holistic thinking, researchers identified opportunities to reduce emissions by up to 80% simply by shifting model training to windows when renewable energy is more plentiful.

What does this mean for enterprises?

Going forward, embracing everything AI isn't just an option; it's essential. From the rapid integration of advanced technologies into core operations to the surge in dedicated data centres, the landscape is transforming. Meeting escalating compute demands requires a shift towards high-density, efficient solutions and intelligent management. Standardised operational practices will be crucial, streamlining technology lifecycles and maximising investments. As AI is integrated into every facet of the enterprise, robust, adaptable infrastructure will become the cornerstone of innovation and success.

Mohammad Al Jallad is Chief Technologist and Director – Sovereign AI & AI Factory, HPC & AI Global Sales at Hewlett Packard Enterprise.
