Generative AI, in the form of large language models (LLMs), is often hailed as an accelerator of software delivery. It’s especially useful for swiftly generating boilerplate code and simple, isolated functions. It’s also surprisingly good at code reviews: checking whether new code adheres to a project’s coding style and recommending changes to resolve identified issues. To a certain degree, LLMs can even debug code, interpret error messages, and suggest viable solutions. Moreover, AI assists developers in navigating legacy codebases by providing valuable insights and documentation, giving them a head start. By leveraging LLMs for these tasks, software developers can significantly improve their efficiency.
However, when it comes to some other tasks, AI’s capabilities are frequently overestimated. AI can generate small, isolated features well, but using it for enterprise-grade software without strong governance and architectural oversight can quickly introduce structural risks. Without the human experience necessary for building extensive, complex applications, you risk creating an unstructured and difficult-to-maintain codebase. LLMs also require substantial context to deliver high-quality code, meaning that while you might spend less time on pure coding tasks, you will need more time to articulate functional and non-functional requirements. A human developer requires this context too, but an LLM can veer off track much faster without it.
Change Management
Without giving an LLM the proper project context and clear guidelines, you risk creating additional waste, resulting in inconsistent implementations, technical debt, unnecessary functionality, and redundant code paths. With accurate instructions, LLMs can yield good results. However, it’s a fine line between delivering enough context and falling short, with dramatic differences in results. This is a craft that requires experience and a strategic change-management program to train people.
“Rushing into AI without proper training and follow‑up is one of the biggest mistakes organizations make. Change management is essential to cut through the hype.”
Guardrails
We also need to acknowledge that this domain is continuously evolving, with much still unexplored. Best practices, tools, and techniques are changing rapidly. This is why guardrails, in the form of hard technical constraints as well as structured processes, are essential.
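To make the notion of a hard technical constraint concrete, here is a minimal, hypothetical sketch of a CI guardrail that fails the build when test coverage drops below a threshold. The function name and threshold value are illustrative assumptions, not Cegeka's actual tooling:

```python
# Hypothetical CI guardrail: fail the pipeline when test coverage
# falls below an agreed threshold. The coverage percentage is assumed
# to be parsed from a coverage report earlier in the pipeline.

COVERAGE_THRESHOLD = 80.0  # illustrative value; teams would tune this per project


def enforce_coverage(coverage_percent: float,
                     threshold: float = COVERAGE_THRESHOLD) -> None:
    """Exit with a non-zero status (failing the CI job) if coverage is too low."""
    if coverage_percent < threshold:
        raise SystemExit(
            f"Coverage guardrail failed: {coverage_percent:.1f}% < {threshold:.1f}%"
        )
    print(f"Coverage guardrail passed: {coverage_percent:.1f}%")
```

The point of such a guardrail is that it is non-negotiable: whether the code was written by a human or generated by an LLM, the same automated gate applies.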
This insight increasingly shapes how we evolve our long-standing Software Factory. We are moving toward a model where teams can spin up a development pipeline for a project with minimal friction, enriched with AI-supported guardrails. Ultimately, our customers benefit from this industrialized, standardized, and AI-supported way of working for software delivery.
We’re developing various AI agents as standard components of this “factory,” while our teams can create specific agents tailored to their projects. For example, teams could create an agent that keeps architecture diagrams synchronized with the code, one that ensures developers use the correct branching strategy in their code repositories, another that verifies adherence to semantic versioning practices, one that proposes additional tests to improve code coverage, and one that automatically patches security vulnerabilities.
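As a simplified illustration of what the versioning check behind such an agent might start from, the sketch below validates that a release tag follows semantic versioning (MAJOR.MINOR.PATCH with an optional pre-release suffix). This is a hypothetical sketch, not the actual agent implementation, and it omits parts of the full SemVer grammar such as build metadata:

```python
import re

# Simplified semantic-versioning check: MAJOR.MINOR.PATCH, no leading zeros,
# with an optional pre-release suffix such as "-rc.1".
SEMVER_PATTERN = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"  # MAJOR.MINOR.PATCH
    r"(?:-[0-9A-Za-z.-]+)?$"                      # optional pre-release suffix
)


def is_valid_semver(tag: str) -> bool:
    """Return True if the tag is a valid (simplified) semantic version."""
    return SEMVER_PATTERN.match(tag) is not None
```

An agent wrapping a check like this could flag non-conforming tags in a pull request comment, or reject the release step outright, depending on how strictly the team wants the guardrail enforced.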
Some guardrails are even implemented as AI agents, leading to a scenario where AI scrutinizes AI. Essentially, our Cegeka teams gain these AI agents as extra team members specialized in particular tasks. Some agents provide suggestions, some create pull requests reviewed by humans, and some may intervene autonomously to address security vulnerabilities.
“AI agents are on track to evolve into specialized digital assistants. Some already suggest improvements or create pull requests, while others have the potential to act autonomously within well-defined boundaries.”
Many Changes in the Coming Years
As AI systems boost productivity, everyone’s expectations will shift accordingly. What was good enough in the past may no longer meet future standards. Customers will expect more complete, more polished software, as well as faster implementation timelines. The arrival of AI could compel organizations to modernize legacy applications because the tipping point of a positive ROI is reached sooner, resulting in heightened demand for custom development.
We expect the volume of automatically generated code to grow as AI tooling matures. While AI won’t replace developers, it will fundamentally reshape their role. Writing code becomes less central, while orchestrating AI agents takes precedence. Developers will spend more time prompting AI tools effectively, debugging AI-generated output, and reviewing the behavior of AI agents. As AI introduces new types of errors and inconsistencies, human expertise remains essential to safeguard a solid architecture, security, and maintainability. Developers act as the final technical authority, ensuring that speed gains do not come at the expense of software quality.
“As AI boosts productivity, expectations naturally rise. What was ‘good enough’ yesterday will no longer meet tomorrow’s standards.”