Delivering software has never been just about writing code. Even with AI, customers rarely provide requirements that are complete or detailed enough to realize the intended business outcome. So we continue to sit down with customers, clarify the real business problem, define the right goals, and make ethical and pragmatic decisions together.
AI can accelerate parts of the process, but it cannot replace the critical thinking needed to align technology with business value. Without that human filter, teams risk over‑engineering or building the wrong thing, faster.
And there’s a second dimension: people throughout an organization must adopt and adapt to new solutions. The faster AI allows us to deliver, the more frequently customers will need to introduce changes to their user base. That requires thoughtful change management, guided by people. Because AI can create a solution, but it takes people to make change stick.
AI is transforming the work of everyone involved in software delivery, especially developers.
Developers will spend far less time writing hand‑crafted code, and far more time guiding AI.
As a result, their work naturally shifts toward conceptual and analytical thinking. They create the technical context in which AI can thrive. And as they become more skilled at leveraging AI, they can spend more time on system design, problem‑solving, and ensuring that what we build truly drives business value.
To support this evolution, our internal Software Factory governance model is designed to guide, train, and empower our development teams, while ensuring job satisfaction remains high as their roles evolve.
Software architects, too, face a significant shift. They must design for AI infusion across business applications, understanding what capabilities AI unlocks (from intelligent assistance to automation to adaptive user experiences) and how those capabilities improve productivity and outcomes. At the same time, they must anticipate risks and limitations tied to AI’s non‑deterministic behaviour. Their job is to create architectures that strike the right balance between innovation and control, including guardrails, observability, and fallback mechanisms. AI should be trustworthy, valuable, and clearly aligned with real user needs, and architects play a defining role in making that happen.
Analysts and project managers also benefit from AI‑augmented tools and AI agents. Tasks like writing user stories, managing backlogs, or preparing reports can be accelerated significantly, but only if they learn how to collaborate effectively with these AI assistants. With the operational load reduced, they can shift toward designing applications that drive real business outcomes and resolving impediments outside the team’s direct control.
For project managers, the challenge will be ensuring that all roles keep pace. As AI boosts developer productivity, the rest of the team, including business stakeholders, must be ready to operate at this accelerated rhythm. This makes insights into people, processes, dependencies, and technology more important than ever when composing balanced teams.
As AI becomes deeply embedded in software delivery, governance only grows more important.
For instance, when an AI tool generates code containing a security vulnerability or a logical flaw, responsibility becomes a shared and sometimes unclear question. Regardless of where accountability ultimately lies, human oversight is essential. Human intelligence is the final safeguard.
When AI is infused into business applications, organizations must account for non‑deterministic behaviour, a challenge that grows as model versions evolve. A workflow may behave differently under an updated model, even when the business context remains the same. Governance frameworks need built‑in checks and balances to ensure reliability, safety, and transparency.
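One way to picture such built‑in checks is a small regression harness that replays the same golden business scenarios against two model versions and flags behavioural drift. This is a minimal sketch only: the model functions and invoice cases below are stand‑ins for versioned model endpoints, not actual tooling.

```python
# Minimal sketch: detect behavioural drift between model versions.
# The "models" are stand-in functions; in practice these would call
# versioned AI endpoints serving the same business workflow.

def model_v1(invoice_text: str) -> str:
    # Stand-in for the older model version.
    return "approve" if "total: 100" in invoice_text else "review"

def model_v2(invoice_text: str) -> str:
    # Stand-in for an updated model with slightly different behaviour.
    return "approve" if "total" in invoice_text else "review"

# Golden scenarios: fixed business cases with the expected outcome.
GOLDEN_CASES = [
    ("Invoice #1, total: 100 EUR", "approve"),
    ("Invoice #2, total unknown", "review"),
]

def regression_report(old_model, new_model, cases):
    """Compare two model versions on the same golden scenarios and
    return every case where behaviour drifted or deviates from the
    expected outcome."""
    drift = []
    for text, expected in cases:
        old_out, new_out = old_model(text), new_model(text)
        if old_out != new_out or new_out != expected:
            drift.append((text, old_out, new_out, expected))
    return drift

for case in regression_report(model_v1, model_v2, GOLDEN_CASES):
    print("drift detected:", case)
```

Here the second invoice exposes the drift: the updated model approves a case the business expects to be reviewed, which is exactly the kind of silent change a governance framework should surface before release.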
Regulatory frameworks such as the EU AI Act already require organizations to demonstrate that their AI systems, and the software they generate, comply with strict risk and transparency obligations.
At Cegeka Application Services, we apply what we call augmented governance. The word “augmented” is deliberate: AI strengthens our governance, it doesn’t replace it.
Balancing innovation and control
A strong governance framework should not slow innovation down. It should accelerate it. AI evolves too rapidly for us to impose rigid rules around which tools, models, or techniques teams are allowed to use. That would only create bottlenecks.
Instead, we formalized a boundary framework that defines the context in which experimentation can safely happen. Teams can assess themselves against this framework to ensure any solution stays within legal, ESG, and organizational boundaries.
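As an illustration only, a team self‑assessment against such a boundary framework can be as lightweight as a checklist evaluated in code. The boundary names and criteria below are hypothetical, not the actual framework.

```python
# Hypothetical sketch: a team self-assessment against boundary criteria.
# Boundary names and descriptions are illustrative placeholders.

BOUNDARIES = {
    "legal": "No personal data leaves approved, compliant services",
    "esg": "Model usage is measured and reported",
    "organizational": "The tool is listed in the experiment register",
}

def self_assessment(answers: dict) -> list:
    """Return the boundaries a proposed experiment does not yet satisfy.

    `answers` maps each boundary name to True (satisfied) or False.
    An experiment may proceed only when the returned list is empty.
    """
    return [name for name in BOUNDARIES if not answers.get(name, False)]

gaps = self_assessment({"legal": True, "esg": True, "organizational": False})
print("boundaries still to address:", gaps)
```

The point of the sketch is the shape of the check, not the content: teams evaluate themselves against explicit boundaries instead of waiting for central approval, which keeps experimentation fast while staying inside the agreed limits.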
With this approach, we enable teams to move fast in the AI domain, safely, responsibly, and without unnecessary friction.
Only human intelligence can ensure AI is used in a valuable, safe, and legally compliant way.