
From Delegation To Oversight: Building Ethical Maturity In AI Adoption
(MENAFN- Mid-East Info) Voices from EY Academy, CIPS and ACCA on the path to responsible AI
Contributors: Fazeela Gopalani, Partner, EY Academy; Sam Achampong, Regional Director, CIPS MENA; and Kush Ahuja, Head of Eurasia and Middle East, ACCA

Artificial intelligence is reshaping how decisions are made across finance, supply chains and professional services. As algorithms take on more routine work, professionals are shifting from delegating to overseeing. That shift is not just technological; it is ethical. On Global Ethics Day 2025, three leaders in their respective domains reflect on what true ethical maturity in AI adoption looks like, and what stands between aspiration and accountability.

The turning point: why ethics can't wait

Recent EY research reveals a striking tension: while 72% of organisations say they have "integrated and scaled AI" across many or most of their initiatives, only about a third have the protocols in place to fully support responsible AI governance. In other words, adoption outpaces oversight. EY's own AI Sentiment Index tells a similar story: 70% of people say they use AI in their daily lives, yet fewer than half apply it at work, a sign that adoption isn't just a technical challenge but an ethical and cultural one too. As more decisions shift to intelligent systems, the real test is whether organisations can keep human judgement at the heart of their AI strategies.

This tension crosses industries. In the finance profession, ACCA's AI Monitor series has shown that while automation can transform efficiency, it can never replace professional scepticism. The challenge lies in balancing speed with scrutiny, ensuring that AI augments (rather than undermines) trust. The same pattern is visible in procurement and supply chains: CIPS' AI in Procurement & Supply expert report warns that supplier scoring models, predictive analytics and opaque algorithms bring both power and peril. Across these professions, the conclusion is shared: AI can empower, but only if humans stay firmly in the loop.

The pivot from delegation to oversight is now urgent. Ethical maturity means moving beyond the naive assumption that "AI just works" to embedding governance, human judgement and clear accountability, so that decisions made (or influenced) by machines remain subject to human values and scrutiny.

Ethical maturity is a journey, not a switch

Ethical maturity in AI isn't achieved through a single policy or training session. It's a journey. While every organisation will move at its own pace, the path often follows five broad stages:

- Awareness: recognising that AI introduces new ethical risks such as bias, opacity and data harms.
- Understanding: mapping how models, data sources and system design connect to real-world outcomes.
- Oversight: building governance, validation, audits and human-in-the-loop systems.
- Accountability: assigning responsibility, traceability and recourse when systems err.
- Advocacy and evolution: shaping policy, regulation and industry standards, and embedding ethical AI into culture.

EY Academy, CIPS and ACCA aren't just three organisations with different mandates; they're three vantage points on the same problem. EY Academy gives businesses the confidence and capability to bring ethics into the AI conversation from the start. CIPS shows how those principles play out where decisions hit the real world: in procurement, contracts and supply chains. ACCA ensures financial governance and accountability evolve alongside technology, protecting the trust that underpins markets. Together, they offer a practical roadmap for navigating the ethical complexity of AI.
Fazeela Gopalani (EY Academy): educating oversight, not just adoption

In many organisations, AI is introduced by technologists or data teams, and business leaders only become involved later. That sequence often obscures the ethical dimension. At EY Academy, the mission is to shift that sequence: training professionals to engage before deployment, not after.

"Ethical maturity begins when professionals stop assuming AI is objective," Fazeela explains. "Behind every system are human decisions, like what data to feed it, what to value most and what to ignore. If no one challenges those decisions, bias and unfairness can quietly become part of the system."

Practical steps include integrating ethical reflection into professional learning that encourages participants to analyse real-world scenarios, question underlying assumptions and explore the impact of AI decisions. Programmes also emphasise cross-functional dialogue, bringing together business and technical teams to surface potential risks early.

Reflection is also a core skill. Learners are encouraged to pause and ask: "If I were a stakeholder in this dataset, would I be represented fairly? What assumptions went into the model? What would I demand if this were audited?"

High ethical maturity demands that oversight is not an afterthought. It must be woven into design, and education is where that begins.

Sam Achampong (CIPS): procurement's crucible for AI ethics

Ethical maturity is equally critical in procurement, where the integration of AI is accelerating rapidly. CIPS' research shows that supplier sourcing, risk scoring, contract analytics and supply chain forecasting are increasingly AI-driven. In a recent global survey of procurement professionals, 73% said they were already using AI in some capacity, most commonly for contract monitoring, workflow automation and supplier risk assessments.

Yet with those advances come risks. Automated scoring can unfairly exclude smaller suppliers. Predictive models can amplify existing biases. And opaque algorithms can make it impossible for suppliers to challenge decisions. Here, oversight is non-negotiable.

"Procurement has always been about integrity, fairness and relationships," Sam reflects. "When AI begins to measure and grade suppliers, humans must remain guardians who question whether data sources skew against certain groups, whether scores are explainable and whether recourse exists."

Practical steps include requiring explainability clauses in supplier algorithms, creating audit trails of key decision points, enabling suppliers to challenge automated outcomes, and bias-testing models across geographies and supplier types (a minimal sketch of such a check follows below). Ethical procurement leadership programmes are also helping raise awareness among buyers and contract managers. Because procurement sits at the intersection of data, relationships and operational decision-making, it's a natural crucible for embedding oversight into everyday processes, not bolting it on afterwards.
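To make the bias-testing step concrete, here is a minimal sketch of a disparity check on automated supplier scores, grouped by region. Everything in it is illustrative: the scores, the pass threshold and the 80% ratio (a rule of thumb borrowed from the "four-fifths" disparate-impact heuristic) are assumptions, not anything CIPS prescribes, and a real review would add statistical testing and domain judgement.

```python
# Hypothetical sketch: compare pass rates of an automated supplier-scoring
# model across regions and flag any group whose pass rate falls below 80%
# of the best-performing group's rate. Thresholds and data are illustrative.
from collections import defaultdict

PASS_SCORE = 70          # illustrative cut-off used by the scoring model
DISPARITY_RATIO = 0.8    # flag groups below 80% of the best pass rate

def pass_rates_by_group(records):
    """records: iterable of (group, score) pairs, e.g. ('MENA', 74)."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score >= PASS_SCORE:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def flag_disparities(records):
    """Return the groups (and their rates) that fall below the ratio."""
    rates = pass_rates_by_group(records)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items()
            if rate < best * DISPARITY_RATIO}

# Made-up example: suppliers in one region pass far less often than others.
sample = [("Europe", 82), ("Europe", 75), ("Europe", 74),
          ("MENA", 71), ("MENA", 64), ("MENA", 58),
          ("APAC", 90), ("APAC", 77), ("APAC", 73)]
print(flag_disparities(sample))  # -> {'MENA': 0.333...}
```

In the spirit of the article, a flag from a check like this would feed an audit trail and trigger human review, not an automatic verdict; the guardrails discussed below are what turn the signal into recourse.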
Kush Ahuja (ACCA): governance, trust and professional accountability

Those same ethical tensions echo across finance and accounting, where the stakes are equally high. Algorithms increasingly assist with forecasting, budgeting, fraud detection and audit sampling. But while technology may evolve, the responsibility still sits squarely with professionals.

"AI doesn't change the principles of our profession (integrity, objectivity and accountability), but it does test how we apply them," Kush says.

The AI Monitor series from ACCA highlights five critical enablers of responsible AI: trust, governance, talent, risk and control, and data strategy. A recent member survey showed that 70% of finance professionals believe AI can free up time for higher-value, judgement-driven work. But governance remains patchy, with only a minority of teams fully embedding ethical checks and controls into their AI use.

Practical steps include incorporating AI ethics modules into professional qualifications, encouraging firms to audit algorithms as part of their assurance work, and providing responsible AI assessment tools. Earlier this year, ACCA and EY published joint research on AI assurance, a landmark collaboration that underscored the profession's shared responsibility for ensuring transparency, governance and trust in AI systems. The report highlighted many of the same issues explored in this article: embedding oversight, ensuring accountability and safeguarding the integrity of decision-making. It set the tone for a broader, cross-profession conversation on how finance, procurement and learning communities can act as ethical gatekeepers for AI.

Ethical maturity in finance is ultimately about stewardship. AI can streamline many processes, but professionals remain accountable for outcomes. The journey from delegation to oversight is fundamental to sustaining trust.

Shared tensions and common guardrails

Across every sector, the same tensions tend to surface when organisations scale their use of AI.

One of the most persistent is the balance between efficiency and explainability. As models become more powerful, they often become less transparent. The question isn't just how well a system performs, but how much visibility organisations are willing to trade for speed or accuracy. Without explainability, trust becomes fragile.

Another tension sits between autonomy and human control. AI systems can act faster than humans, but oversight remains essential. The ethical question is not whether humans should intervene, but when.

Closely linked is the issue of speed versus due process. Real-time decision-making can make it harder to pause, investigate and apply proper checks. Yet ethical maturity demands that recourse mechanisms exist, no matter how fast the system moves.

There is also a growing gap between innovation and regulation. Technology advances quickly, while policy and regulation often lag behind. In practice, this means responsible organisations can't simply wait for the rules to catch up; they should act ahead of them, setting internal guardrails before external mandates arrive.

And finally, there's scale versus context. Global models may be trained to work across borders, but their assumptions can fail when applied to local realities. Ethical maturity means testing, tuning and questioning how technology behaves in different environments.

Guardrails are what turn these tensions from risks into responsibilities. Human override points, periodic audits, escalation channels and formal accountability for harm are fast becoming the operational backbone of any organisation deploying AI at scale (a simplified sketch of one such guardrail follows below).
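As an illustration of what a human override point can look like in practice, the sketch below auto-applies only confident, low-impact decisions, escalates everything else to a reviewer, and writes every outcome to an audit trail. The thresholds, record fields and review queue are hypothetical assumptions for illustration, not a design any of the contributing organisations prescribes.

```python
# Hypothetical sketch of a human-override gate: automated decisions are
# accepted only when confidence is high and the stakes are low; everything
# else is escalated, and every outcome is appended to an audit trail.
from dataclasses import dataclass
from datetime import datetime, timezone

CONFIDENCE_FLOOR = 0.90   # illustrative threshold for auto-approval
IMPACT_CEILING = 50_000   # illustrative monetary ceiling for auto-approval

@dataclass
class Decision:
    subject: str          # e.g. a supplier ID or transaction reference
    action: str           # what the model recommends
    confidence: float     # model's self-reported confidence, 0..1
    impact: float         # estimated monetary impact of the action

audit_trail: list[dict] = []
review_queue: list[Decision] = []

def gate(decision: Decision) -> str:
    """Auto-apply only safe, confident decisions; escalate the rest."""
    if (decision.confidence >= CONFIDENCE_FLOOR
            and decision.impact <= IMPACT_CEILING):
        outcome = "auto-approved"
    else:
        outcome = "escalated to human reviewer"
        review_queue.append(decision)
    # Every decision, automated or escalated, leaves a traceable record.
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "subject": decision.subject,
        "action": decision.action,
        "confidence": decision.confidence,
        "impact": decision.impact,
        "outcome": outcome,
    })
    return outcome

# Example: a confident, low-impact call passes; a high-stakes one does not.
print(gate(Decision("SUP-001", "renew contract", 0.97, 12_000)))
print(gate(Decision("SUP-002", "suspend supplier", 0.95, 250_000)))
```

The design point is the one the contributors keep returning to: the system can move fast, but recourse, escalation and accountability are built into the infrastructure rather than left to good intentions.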
Call to ethical maturity and collective action

The path from delegation to oversight is the path to trust. Ethical maturity is what separates resilient, responsible organisations from brittle ones.

Each of our experts brings a different vantage point (learning, operations and governance), yet together they help businesses navigate the practical and moral complexities of deploying AI responsibly.

True ethical maturity begins long before a system is switched on. It starts with asking the difficult questions: Who is accountable if the model fails? How transparent is the system? Who has the power to override its decisions? It continues with embedding explainability into design, building recourse pathways, and ensuring that oversight isn't dependent on good intentions but built into the infrastructure itself.

This isn't a task for technologists alone. It demands cross-disciplinary teams, shared responsibility and proactive engagement with emerging regulation. And it calls for investment not only in technology, but in people: strengthening the skills that allow professionals to question, challenge and govern AI with confidence.

Let this Global Ethics Day be a milestone not of rhetoric, but of resolve. AI is rewriting what we thought machines could do. But ethics must remain ours. The future will belong not to those who hand power to algorithms unquestioningly, but to those who build accountability and oversight into the machine from day one.
