Before your association invests in AI, ask this first
Over the past year, I’ve spoken to a growing number of association leaders who are under pressure to “do something with AI”. Boards are asking for AI strategies. Teams are experimenting with chatbots. Members are beginning to expect faster answers and more personalised experiences. That pressure is understandable. But it also carries a risk. Too often, organisations start with a solution (a chatbot, an agent, a platform) before they are clear on the problem they are trying to solve.
We’ve been here before. Many years ago, organisations decided they “needed an app”, without being able to articulate what the app was for or how success would be measured. AI risks becoming the next version of that pattern. The issue isn’t AI itself. The issue is where you start.
When technology becomes the goal
In my conversations with associations and not-for-profits, I regularly see AI initiatives framed around tools rather than outcomes. The thinking goes something like: we need an AI assistant, or we should be using agents, or the board wants an AI strategy.
The result is often a pilot that looks promising on the surface but struggles to progress. It doesn’t quite fit into everyday work. It raises expectations that are difficult to meet. And when confidence drops, the organisation quietly steps back. This isn’t because AI doesn’t work. It’s because it has been applied without sufficient attention to context, workflows and risk.
Start with real friction, not ambition
Associations are unusual organisations. Many are small teams serving tens of thousands of members. They hold vast amounts of data, documents and institutional knowledge. And they are often trying to do a great deal with limited resources. That creates very specific kinds of friction:
- Staff answering the same member questions repeatedly
- Decisions slowed by the need to pull information from multiple systems
- Insight buried in journals, reports or archived documents
- Processes that are labour-intensive simply because there has never been the capacity to automate them
These are not glamorous problems. I often describe them as valuable, boring problems. And they are exactly where AI can deliver meaningful value. If a problem is boring, it is often lower risk. If it is valuable, solving it builds confidence quickly. That combination matters, particularly in associations where trust from staff, members and boards is essential.
Think in workflows, not tools
One of the most practical ways I’ve found to think about AI is through workflows, or what I sometimes call “recipes”.
Every association already has recipes: membership applications, renewals, abstract submissions, grant applications, enquiries, and reporting cycles. These processes have inputs, steps and outcomes, even if some of those steps are manual today.
AI fits into these recipes in different ways. Sometimes it helps prepare ingredients - for example, transcribing recordings or making sense of unstructured documents. Sometimes it supports analysis. Sometimes it helps surface answers that already exist but are simply hard to find. What matters is using the right technique at the right stage, rather than trying to apply one model or one tool to everything. AI is not one thing, and it should not be treated as one.
Data readiness: you’re probably closer than you think
One of the most common reasons associations hesitate is the belief that they “aren’t ready for AI”. Their data may be fragmented. Some of it may live in documents. Some of it may even still be on paper. In practice, this is normal — and it does not automatically prevent progress.
Readiness is less about having perfect data and more about understanding what you have, how it is used, and how it needs to be prepared for specific workflows. Often, the work lies in pre-processing and orchestration rather than wholesale transformation. Many associations are far closer to being ready than they realise, provided they approach AI in a structured and pragmatic way.
Treat AI like a colleague, not a black box
As organisations move beyond simple experiments, another shift becomes important. AI should be treated much like a colleague. If you hired someone new, you wouldn’t expect them to perform well without context. You would give them guidance, standard operating procedures and oversight. You would audit decisions that mattered. You would limit access appropriately.
The same principles apply to AI. Reliability does not come from bigger models or more features. It comes from clarity: clear instructions, clear boundaries, and an understanding of how decisions are reached. This approach also helps manage risk. Associations already understand governance, auditing and separation of responsibilities. Applying those familiar controls to AI is far more effective than treating it as a mysterious black box.
From pilot to production: where most efforts stall
Many organisations can demonstrate that an AI idea works in principle. The harder question is what happens next.
Scaling AI into everyday operations is fundamentally different from running a proof-of-concept. It introduces new considerations around testing, security, cost, reliability and governance. Traditional software testing assumptions do not always apply, and organisations need to think differently about how confidence is built over time.
This is often the point at which early enthusiasm fades, not because the idea was wrong, but because the transition from experiment to production was not fully thought through.
Continuing the conversation
I’ve spent many years working with associations and not-for-profits on digital platforms and, more recently, on applying AI in practical and responsible ways. In February, I’ll be exploring these ideas in more depth in a webinar for association leaders:
Fulfil your association’s potential: delivering transformation with AI
4 February 2026, 10.30 am
The session is designed for senior leaders and managers and does not require AI expertise. I’ll share real examples from associations already using AI to improve member self-service, unlock value from the data they already have, and support better decision-making without starting with tools or hype.