He’s referring to a recent MIT Media Lab report, widely covered in international media, claiming that only 5% of custom generative AI projects deliver measurable business value. The other 95% stall in endless experiments. “The media loves doom and gloom. What matters are the success stories, and they’re out there.”
Kate at KBC: A Belgian Success
One of the best-known stories here in Belgium is Kate, KBC’s digital assistant. Since 2020, the bank has been investing heavily in the tool, which now lives inside the KBC Mobile app.
“I can’t share numbers, but I’m convinced it’s a huge success,” says Verlinden. “I even heard someone from NMBS say that when their app went down, they told customers to buy their ticket through Kate instead. That says it all about adoption.”
Kate has already handled millions of interactions and keeps expanding in scope. More importantly, it has become a cornerstone of KBC’s digital strategy. “Kate was the first banking tool in Belgium to radically rethink how clients and banks interact. It wasn’t just some digital icing on the cake: it was an entirely new cake.”
The Long Game Pays Off
Verlinden argues that projects like Kate succeed because they are built over time, not rushed. “If you expect transformation in six months, you’ll probably stall. Play the long game, and that’s where the breakthroughs live.”
"If you expect transformation in six months, you’ll probably stall."
At Cegeka, that philosophy drives projects such as Fluvius, where AI makes smart meters truly smart by predicting issues before they arise, all within a heavily regulated environment; Q-Park, where smart parking improves city livability; and D-Stress, which uses AI to spot early signs of burnout.
“These examples prove that incremental steps compound into real transformation,” Verlinden says. “It’s not about quick wins, but about building momentum year after year.”
So Why Do AI Projects Fail?
So why do AI projects miss the mark? Verlinden points to what he calls context determination. “It’s like scoping, but sharper. What’s the job to be done? What isn’t? What’s the dream scenario, the knock-out features, the nice-to-haves? Success depends more on what you leave out than what you put in. That’s why I call it peeling the onion. And I can assure you: these are tough conversations!”
“Success depends more on what you leave out than what you put in. That’s what I call peeling the onion.”
Too often, companies want to skip this step, throwing everything at the model in the belief that it will sort itself out. “That’s simply not true. Without focus, you get noise, not value.”
From Automation to Augmentation
The debate around AI often focuses on automation and job loss. Verlinden prefers another word: augmentation.
“Some jobs will cease to exist, sure. But that has always been the case throughout history. What matters is augmentation, putting people in a superstate, outsourcing routine tasks to AI while focusing on higher-value work. For example, AI now allows business analysts to generate SQL scripts and perform testing activities. That frees up expert testers to tackle the complex cases. That’s real value.”
“Some jobs will cease to exist, sure. But that has always been the case. What matters is augmentation, putting people in a superstate.”
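To make that augmentation example concrete: the sketch below is a hypothetical illustration, not KBC’s or Cegeka’s actual tooling, of how an analyst’s plain-language request might be turned into a draft SQL query that an expert then reviews. It assumes the OpenAI Python SDK and an API key in the environment; the helper name and model choice are placeholders.

```python
# Illustrative sketch only: a hypothetical helper that drafts SQL from an
# analyst's plain-language request, for a human expert to review.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()


def draft_sql(request: str, schema: str) -> str:
    """Ask the model for a single SQL query matching the analyst's request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You write one SQL query for the given schema. "
                           "Return only the SQL, no explanation.",
            },
            {
                "role": "user",
                "content": f"Schema:\n{schema}\n\nRequest: {request}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    schema = (
        "CREATE TABLE payments "
        "(id INT, customer_id INT, amount DECIMAL, paid_at DATE);"
    )
    # The draft is a starting point; a tester or analyst still validates it.
    print(draft_sql("Total amount paid per customer in 2024", schema))
```

The point of the sketch is the division of labour Verlinden describes: the model handles the routine drafting, while the human expert keeps responsibility for the complex and critical cases.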
Start with People, Not Use Cases
Unlike many, Verlinden doesn’t start projects by asking for use cases. “I start from people, usually knowledge workers. They have jobs, made up of tasks, made up of activities. That’s where you look: what can we automate, what can we augment? Only then do you build the roadmap.”
That roadmap, he says, is always incremental. “Maybe the goal is an exposure score of 50 percent, but your first steps only get you to 5 or 6. From there, you expand: AI agents, agentic architectures, and so on. That’s how headline-making tools are built: step by step. Look at Kate. This takes vision and persistence. Real results show up only after years. But they’re worth it.”
Beyond Doom and Gloom
The numbers may be sobering, but Verlinden sees them as a challenge, not discouragement. “AI projects are tough. They demand effort, discipline, courage, and time. But so did every transformative technology before them. If you start with people, define your context carefully, and stay the course, results will come. Doom scenarios make headlines. Success stories take longer to tell, but they’re the ones that matter.”
“If you start with people, not use cases, define your context carefully, and stay the course, results will come.”
But he also insists on balance. “In some parts of the world, AI is being rolled out with little regard for regulation, privacy, or ethics. The mindset there is: push productivity to the limit, because we can. This leads to an unbalanced approach to innovation.”
At Cegeka, responsibility comes first. “We ask: is this responsible? Is it compliant, not just legally, but with our values? And beyond efficiency, does it serve society in a meaningful way? Yes, we aim for maximum impact, but that impact must be responsible and sustainable.”
That means strong data governance, and careful oversight of models sourced from third parties as well as those built in-house. Verlinden even foresees new roles, such as ethical sourcing managers, to ensure every model meets Cegeka’s standards.