AI isn't a strategy. Here's what should come first.
- Ray Delany

- Aug 11
- 3 min read
Updated: Aug 26

The temptation of tech-first thinking
There’s no denying it: AI is exciting. Every week brings a new vendor demo, a breakthrough story, a shiny solution. And for busy healthcare leaders, it’s tempting to leap toward the tool that promises quick wins.
But here’s the problem: when technology comes before strategy, progress stalls.
While developing a strategy for a client last year, I was asked, “Where is the AI implementation project?” My answer: “It’s in every project shown here.”
AI is a general-purpose technology, which means the first thing you have to do is decide what specific problem you are trying to solve. “Make things better” and other general statements aren’t enough.
Why AI-first thinking falls flat
When the tech leads, these things often follow:
Solutions in search of a problem: chatbots with no patients using them, or tools no one trusts.
Disconnection from operational context: scheduling constraints, staffing gaps, and compliance needs get overlooked.
Frustration from the frontline: clinicians are handed tech they didn’t ask for, or don’t understand.
Overlooked governance and data readiness: leading to poor-quality results or safety risks.
As our eBook points out, healthcare leaders who succeed with AI don’t start with the tool; they start with the foundations.
Five foundations that come before the tool
Before exploring platforms or pilots, anchor your AI journey in these strategic foundations:
1. Define the problem you’re solving. Is your goal to reduce wait times? Free up clinician time? Improve follow-up care? Get specific: a clear problem sets the path.
2. Assess your data readiness. Do you have accurate, digitised patient records? Are your systems integrated? Do you understand where the gaps are?
3. Engage your people early. AI works best when clinicians, administrators, and even patients are part of the process. Early involvement builds trust and surfaces crucial insights.
4. Establish your governance guardrails. Who reviews AI decisions? How are risks flagged? What safeguards address bias and ensure explainability?
5. Define what success looks like. Is it a reduction in call volumes? More accurate triage? Shorter wait times? Choose outcome-based metrics that matter to your teams.
These questions come straight from real-world implementations in New Zealand and Australia, and they’re what separates hype from value.
AI is a toolset, not a destination
There’s no such thing as an “AI transformation.” Instead, AI should quietly integrate into your existing strategy, layered into the workflows and decisions where it makes the most difference.
The best AI wins come from small, high-friction areas:
Automating clinical notes
Matching staff to demand
Triaging patient symptoms overnight
As our eBook says: “Don’t start with the tech specs. Ask: what will this change for our staff or our patients?”
What health leaders can do next
You don’t need to be an AI expert to take the next step. Just start with clarity and curiosity:
Run a simple AI readiness check
Identify one process that causes daily friction
Host a leadership conversation using a shared framework
Download the eBook for real-world examples and planning support
Ready to lead with clarity?
Get your free copy of “A Practical Guide to AI Adoption for Health Leaders in New Zealand”
This guide includes:
Use cases already working in New Zealand and Australia
Practical advice from early adopters
Key governance conversations every leader should have