The FM sector is full of companies planning their AI strategy by building business cases and conducting feasibility studies. Nigel Potter, Director of Strategy at Churchill Group, on taking a more practical route
At Churchill Group, we’re deploying AI tools across three areas: voice-driven site intelligence, internal data analysis, and everyday productivity support. But the real lessons we’ve learned aren’t about the technology. They’re about closing the gap between planning and doing.
Before adopting AI, our colleagues were spending 30 minutes writing up a 20-minute site walk. Our frontline staff who speak English as a second language struggled to report issues because the forms were in English. Our account managers were spending hours digging through data instead of talking to clients. The gap between strategy and frustration felt enormous.
The worst question you can ask when exploring AI is “what could AI do for us?” You’ll get a hundred theoretical answers and no clarity. Better questions are “what’s the most frustrating part of your team’s week?” and “where do people spend time on work that feels pointless?”
This is why we didn’t start by looking at AI tools; we started by mapping these frustrations. Only then did we look for technology that could solve them.
That led us to develop SiteWalk with our partner Symantiq – a voice-driven reporting tool that lets managers speak their observations while walking a site. The AI transcribes, translates (currently 57 languages) and structures their notes into reports in real time. Computer vision analyses photos to identify issues automatically.
During a six-month pilot at M&G, 36 site walks produced more than 1,800 structured findings. At a national gym chain, we logged 640 issues across three sites in seven months, with 70 per cent raised by voice. Crucially, we captured 12 near-misses with full evidence – incidents that likely would have gone unreported under the old system.
The technology is clever, but the win was matching the tool to an actual, felt frustration.
BUILD TRUST BEFORE YOU DEPLOY TOOLS
Initially, we thought that if we built good AI tools and showed people the results, adoption would follow naturally. It didn’t. After attending demonstrations, people went back to doing things the old way. Or worse, they used the tools secretly, creating a culture of guilt about AI use.
We’ve learned you need to tackle the cultural piece first. That meant running a ‘promptathon’ with our technology partner Nasstar – essentially an internal training day to demystify AI and let people experiment in a safe environment.
We’re also developing clear guardrails. Not restrictive policies, but practical guidance on when AI is appropriate and when it isn’t. The goal is to remove the anxiety about whether you “should” be using it and create a culture of transparency.
MEASURE THE RIGHT THINGS
We initially focused on efficiency metrics such as reports generated, time saved, and issues logged. These matter, but they don’t capture the real value.
What we’re now tracking:
- Quality of client conversations – An internal AI tool, M2, acts as a data analyst for account managers and operational directors. It crawls through operational data to answer questions in moments rather than hours. The goal isn’t just time saved – it’s enabling richer conversations because people have better insights.
- Who’s using it – Not just adoption rates, but whether use is distributed across teams or concentrated with early adopters. If only the tech-savvy people are using it, you’ve built a tool for 20 per cent of your workforce.
- What’s not being reported – The voice-first design of SiteWalk meant previously unheard staff could report issues in their own language. We’re tracking whether we’re getting reports from people who didn’t report before, not just more reports from the same people.
ADVICE ON STARTING THIS JOURNEY
First, don’t wait for perfect clarity. We’ve got M2 at proof of value stage right now – it’s running on real data, but we’re still tightening the guardrails and refining what it can do. You learn by doing, not by planning.
Second, expect resistance that isn’t about the technology. When people push back on AI tools, they’re often really expressing anxiety about their role, their skills, or being replaced. Address that directly.
Third, pick one small, annoying problem and solve it well. Don’t try to transform everything at once. We started with site reporting because it was universally frustrating and the win would be obvious.
Finally, be honest about what you’re doing. We’re not implementing AI to reduce headcount. We’re doing it so finance teams don’t spend hours trawling through data, so FMs can spend less time writing and more time observing, so multilingual staff can communicate in their own words.
If you can’t articulate a benefit that makes someone’s working life better, you’re building the wrong thing.
THE REAL BARRIER ISN’T TECHNOLOGY
AI technology is advancing rapidly, but that’s not what will determine whether it works in FM. What matters is whether we can build environments where people feel trusted to use it, where the tools solve real frustrations, and where the wins are tangible and human.
As for guilt and secrecy about using these tools? That’s solvable, but it requires being as thoughtful about culture as you are about code.