
How to build an ethical AI governance framework in your health organisation

Updated: Oct 13



The promise of AI and why governance matters


The potential of AI in healthcare is real: faster triage, earlier diagnoses, better use of stretched resources. But so are the risks.


When used without care, AI can embed bias, reduce transparency, or even compromise patient trust. That's why good governance is not a barrier to innovation; it's the foundation of it.


The challenge? Most global frameworks don't reflect the realities of the New Zealand health system. This is especially true when it comes to Te Tiriti obligations, Māori data sovereignty, and equity outcomes.


That's why local examples like Waitematā’s AI Governance Framework are so important. They show how ethical, fit-for-purpose oversight can be built right here, at a scale that works.


What is an AI governance framework?


In simple terms, AI governance is: structured oversight + ethical checks + ongoing accountability.


It's not the same as IT governance or project sign-off. It goes deeper, considering things like:

  • Long-term effects of automated decisions

  • Whether an algorithm is fit for a local population

  • How culturally safe, explainable, and transparent the system is


In health, the stakes are high. AI tools may help allocate appointments, suggest diagnoses, or prioritise interventions. If something goes wrong, especially in underserved communities, the consequences can be serious.


Governance helps you ask the right questions before problems arise.

A local example: Te Whatu Ora Waitematā's approach


Te Whatu Ora Waitematā has taken a leadership role in this space by creating a dedicated AI Governance Group. This group assesses proposed AI projects from concept to implementation, asking not just "Does the tool work?", but "Is it right for our context?"


Their framework is built around eight key domains, covering clinical, cultural, ethical, and legal perspectives. Crucially, it ensures Māori voices, consumer input, and clinical judgement are central, not sidelined.


This model is now helping other regions and providers think through their own governance approach.


The 8 domains of ethical AI governance


Here's a breakdown of the eight domains from the Waitematā framework (with a simple checklist-style sketch after the list):


  1. Appropriateness

    1. Is AI even the right tool for this?

    2. Does it clearly outperform existing processes?

  2. Consumer perspectives

    1. Will the public accept and trust it?

    2. Could it create confusion, fear, or unintended harm?

  3. Māori perspectives

    1. Does it honour Te Tiriti and uphold tikanga?

    2. Has Māori governance been involved in design and oversight?

    3. Does it follow Te Mana Raraunga (Māori data sovereignty)?

  4. Equity and fairness

    1. Will the tool reduce (or reinforce) inequities?

    2. Has it been tested locally to avoid bias?

  5. Ethical principles

    1. Are transparency, consent, and human oversight embedded?

    2. Can people understand how decisions are made?

  6. Clinical perspectives

    1. Will clinicians find it trustworthy and useful?

    2. Does it fit within real-world workflows?

  7. Data quality and use

    1. Is the data clean, secure, and ethically sourced?

    2. Have consent and secondary use been properly considered?

  8. Technical and legal fit

    1. Are safeguards, audit trails, and contracts in place?

    2. Who holds liability if something goes wrong?
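
If you want to turn these domains into a working checklist rather than leaving them as a list in a blog post, they translate naturally into a simple, reusable template. The Python sketch below is illustrative only: the domain names and questions come straight from the framework above, while the structure, the GOVERNANCE_DOMAINS name, and the printed form are assumptions to adapt to whatever tooling your team already uses (a form, a spreadsheet, or a ticket template).

# Illustrative sketch only: the questions are copied from the framework above;
# the structure and names are assumptions to adapt to your own process.
GOVERNANCE_DOMAINS = {
    "Appropriateness": [
        "Is AI even the right tool for this?",
        "Does it clearly outperform existing processes?",
    ],
    "Consumer perspectives": [
        "Will the public accept and trust it?",
        "Could it create confusion, fear, or unintended harm?",
    ],
    "Māori perspectives": [
        "Does it honour Te Tiriti and uphold tikanga?",
        "Has Māori governance been involved in design and oversight?",
        "Does it follow Te Mana Raraunga (Māori data sovereignty)?",
    ],
    # ...the remaining five domains (Equity and fairness, Ethical principles,
    # Clinical perspectives, Data quality and use, Technical and legal fit)
    # follow the same pattern, using the questions listed above.
}

# Example: print a blank review form for a proposed project.
for domain, questions in GOVERNANCE_DOMAINS.items():
    print(f"\n{domain}")
    for question in questions:
        print(f"  [ ] {question}")

Putting a version of this into your project intake form or proposal template means every AI project is assessed against the same questions, from concept through to implementation.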


How to put this into practice


You don't need to build a massive governance team to get started. Here's how any health provider, large or small, can begin:


  • Form a small governance group: Include representation across clinical, Māori, digital, legal, and consumer perspectives.

  • Use a checklist or framework: Tools like Waitematā's model can guide reviews at key stages of each AI project (see the sketch after this list).

  • Engage early with Māori and affected communities: Partnership is not an afterthought. It's part of designing safe, acceptable systems.

  • Set expectations with vendors: Make it clear that you need transparency: how the tool works, what data it uses, what risks are known, and how it's being monitored.

  • Plan for ongoing oversight: AI doesn't stand still. Your governance shouldn't either. Build in checkpoints and reviews beyond go-live.
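
To support the checklist and ongoing-oversight steps above, here is a minimal sketch of how a small governance group might record a review against the eight domains and flag gaps before approving the next stage. Everything in it, including the class and field names, the statuses, and the example project, is hypothetical; the only fixed input is the list of domain names from the framework.

from dataclasses import dataclass, field
from typing import List

# Domain names from the Waitematā framework; everything else below is illustrative.
DOMAINS = [
    "Appropriateness", "Consumer perspectives", "Māori perspectives",
    "Equity and fairness", "Ethical principles", "Clinical perspectives",
    "Data quality and use", "Technical and legal fit",
]

@dataclass
class DomainReview:
    domain: str
    status: str = "not started"   # e.g. "not started", "in progress", "signed off"
    notes: str = ""

@dataclass
class ProjectReview:
    project: str
    stage: str                    # e.g. "concept", "pilot", "go-live", "post-go-live"
    reviews: List[DomainReview] = field(
        default_factory=lambda: [DomainReview(d) for d in DOMAINS]
    )

    def outstanding_domains(self) -> List[str]:
        """Domains not yet signed off, worth resolving before approving the next stage."""
        return [r.domain for r in self.reviews if r.status != "signed off"]

# Hypothetical example: a triage-support pilot with most domains still to review.
review = ProjectReview(project="ED triage support tool", stage="pilot")
review.reviews[0].status = "signed off"   # Appropriateness
review.reviews[5].status = "signed off"   # Clinical perspectives
print(review.outstanding_domains())

Re-running a check like this at each stage, and again after go-live, is one lightweight way to build review checkpoints into routine practice rather than treating governance as a one-off sign-off.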


Building trust takes intention


Governance isn't about ticking boxes. It's about building trustworthy systems that reflect Aotearoa's values and priorities.


Done well, ethical governance helps ensure AI:

  • Works for the people it serves

  • Respects Te Tiriti and cultural safety

  • Delivers benefits without creating new risks or inequities


The future of AI in healthcare is exciting, but it needs to be just, safe, and inclusive if it's going to last.


Want to see more local examples?


Download our free guide: "AI in Healthcare - A Practical Guide to AI Adoption for Health Leaders in New Zealand"


Inside, you'll find:

  • Practical use cases from across the region

  • Guidance for choosing and governing AI tools

  • A foundation for getting started safely and ethically

Download your copy now and explore how you can lead AI in healthcare with confidence.




