'Pollyanna Policy': Is NZ's Framework For AI Use In Government Overly Optimistic?
The report described a system where commercial incentives drive behaviour and oversight is treated as a nuisance. In Aotearoa New Zealand, there is a similarly urgent question: can the governance frameworks we are building to manage that technology be trusted to work?
This is particularly relevant to the public service and government agencies, now being encouraged to embrace AI. At a recent International Research Society for Public Management conference, the global research community grappled with how AI can align with the public interest.
A clear divergence is emerging. Some jurisdictions are building surveillance-heavy data systems, while others are constructing robust, binding systems to protect citizen consent.
Aotearoa New Zealand occupies a precarious middle ground. The Public Service AI Framework names the right principles: transparency, fairness and human oversight. But it is explicitly non-binding.
We have dubbed it a “Pollyanna policy” – based on the Pollyanna principle, which describes a general bias towards positivity and optimism about outcomes.
AI and institutional complexity

In the area of AI governance, this becomes a matter of stating good intentions and issuing non-binding guidance, trusting that existing frameworks will absorb genuinely novel challenges.
We argue this underestimates the institutional constraints, conflicting incentives and strategic vulnerability of that middle ground, without legislative armour to protect citizen data.
It also underestimates the “institutional friction” that defines modern public institutions, where many people and departments have power over policy. This tends to weaken responsiveness to problems.
Governance in a typical public sector agency is not a clean, ordered structure. It is an accumulation of layer upon layer of policy, operational procedure, ministerial expectation, legislative obligation and professional conventions.
New regulatory instruments rarely replace old ones. They are added alongside them, often interacting in unpredictable ways.
AI is a “flat” technology that processes information as a statistical landscape. It lacks the institutional memory to know that a prompt today might quietly undermine the kinds of political and constitutional compromises, made over time, that are central to effective government.
The accountability gap

Before AI can be usefully deployed, agencies must do the diagnostic work of understanding what that governance environment actually is.
The instinct instead is to add further AI governance guidance to a system already straining under accumulated advice.
As Australia's Royal Commission into the Robodebt Scheme demonstrated, algorithmic systems deployed without this kind of clarity can produce catastrophic harm.
The non-binding nature of the Public Service AI Framework abdicates central responsibility, offloading accountability to individual agencies with vastly different levels of capability.
Algorithmic decision-making disrupts traditional accountability within organisations because information, justification and consequences can no longer be traced through a single responsible chain.
The framework assumes organisational readiness, but the evidence does not support this. The 2025 Public Service Census found that while a third of public servants had used AI for work, only 14% used it regularly.
This gap is driven by the friction that arises when a general-purpose tool like AI meets the accumulated complexity of how decisions actually get made.
Don't get us wrong: the framework names the right principles. But principles without a legislative mandate become aspirational without accountability. In an already strained system, another non-binding document changes very little.
Māori data sovereignty

In Aotearoa New Zealand, this governance vacuum has an added dimension. Māori data sovereignty is a constitutional imperative under the Treaty of Waitangi, not a technical add-on.
The bureaucracy has an obligation to protect Indigenous sovereignty, yet the current approach leaves the gate unguarded – a dynamic legal scholars Woodrow Hartzog and Jessica Silbey describe in their 2025 article “How AI Destroys Institutions”.
This risk is amplified by the current state of AI technology. “Hallucinations” are not bugs but inherent statistical features of large language models. And evidence confirms they remain a risk in high-stakes settings such as medicine.
The Five Eyes security agencies evidently agree: their April 2026 joint guidance on agentic AI calls explicitly for incremental deployment, continuous threat assessment and sustained human oversight.
When these models hallucinate legal facts they risk overwriting Indigenous knowledge with plausible fictions.
This is compounded by the “sycophantic” tendency of large language models to mirror the user's bias. In a policy system grappling with the legacy of colonisation, this can simply reinforce an echo chamber.
Building a counterweight

Used well, AI does present opportunities. It can reveal inconsistencies in policy frameworks and interrogate inherited assumptions.
But realising that opportunity requires that organisations understand themselves well enough to know where the technology will fail.
If these AI tools are built by companies where incentives outpace ethics, the burden on public institutions is to act as a genuine counterweight.
This means moving beyond “responsible” adoption toward creating formal, protected roles where officials interrogate AI output for bias and fabrication, rather than accepting speed as a proxy for quality.
The strategy, standards and guidance documents point in the right direction. But in the gap between aspiration and accountability, we must ask: will we continue to rely on optimism, or will we build the strong, ethical oversight capable of catching what the technology cannot?