Tuesday, 02 January 2024 12:17 GMT

Global AI Governance Reaches a Crossroads


(MENAFN- The Arabian Post)

Governments, tech firms and international bodies are intensifying efforts to structure artificial intelligence oversight, with sharp divergence emerging between innovation-led and precaution-based models. The European Union has begun implementing core provisions of its AI Act, the United States has unveiled a sweeping federal strategy prioritising deregulation, and Asia, with China leading, has laid out a coordinated global governance agenda.

The EU's AI Act, formally effective since August 2024, is now activating its enforcement phases. From February 2025, systems deemed “unacceptable risk” have been barred, and as of August 2025, the rules for general-purpose AI models have begun to apply. Under this framework, companies must carry out conformity assessments, maintain documentation, and ensure governance oversight. The regulation's extraterritorial reach means that AI tools used inside the EU must comply regardless of where they originate. Complaints from creators' organisations (38 global groups issued a joint statement condemning the Act's shortcomings on copyright and transparency) underscore tensions in balancing innovation and rights protection.

Across the Atlantic, the U.S. has adopted a markedly different path. In July 2025, the White House rolled out “America's AI Action Plan,” a policy package aimed at accelerating AI development by removing regulatory barriers and linking federal funding to state-level adoption of lenient norms. This marks a shift from prior cautionary mandates to a pro-innovation posture. State-level regulation remains active: California passed Senate Bill 53, compelling large AI developers to publish safety protocols, report incidents within 15 days, and face fines of up to $1 million, while exempting small startups to foster innovation. The U.S. Senate rejected a federal moratorium on state AI laws, keeping states in the regulatory driver's seat.


Asia is establishing its own rhythm. China's government released a 13-point action plan at the 2025 World AI Conference, aiming to coordinate domestic and global AI standards across development, security, ethics, and diplomacy. Meanwhile, the Framework Convention on Artificial Intelligence, drafted under the Council of Europe and endorsed by more than 50 nations including EU member states, promotes binding commitments to uphold human rights, the rule of law, and democratic standards in AI deployment.

Analytical research underscores the contrast. A comparative governance study published in April 2025 finds that the U.S. pursues a market-centric, low-constraint model; the EU adopts a risk-based regulatory regime; and Asia favours state-guided strategy blending control with deployment. The report warns that such divergence complicates cross-border cooperation and risks policy fragmentation.

Industry response varies. Some AI firms argue the EU's rules are overly burdensome and inflexible, echoing criticism from the Capgemini CEO, who described them as stifling deployment. Others view transparency mandates and safety reporting requirements as essential for public trust. Demand for audit and regulatory compliance services in AI governance is surging as firms scramble to meet the new standards.

Beyond commerce and regulation, the security dimension looms large. International organisations and civil society groups are pressing for explicit limits on lethal autonomous weapons. The UN and NGOs have called for a global ban on AI-enabled killing machines by 2026, citing risks to accountability and violations of humanitarian law.

In parallel, a global “red lines” initiative spearheaded by Nobel laureates and AI researchers seeks to define forbidden AI actions at the international level, such as autonomous cyberattacks or biological weapon creation, to guide policy consensus.


Adoption patterns also reflect uneven capacity. The Anthropic Economic Index, published in mid-September 2025, shows stark disparities in frontier AI deployment: advanced enterprises in high-income regions dominate usage, while lower-income countries lag behind due to limited infrastructure and regulatory support.

The regulatory map is evolving rapidly. The EU released a voluntary Code of Practice to help firms align with the AI Act's demands in areas such as transparency, ethics, and copyright. In parallel, new state laws in the U.S., such as New York's Stop Deepfakes Act, require provenance metadata for synthetic content. Meanwhile, international summits like the 2025 AI Action Summit in Paris, co-hosted by France and India, and efforts to coordinate treaties like the Framework Convention, are reshaping diplomatic engagement on AI.

