I was asked yesterday for my thoughts on the future of software development. I cannot predict the future, and I scraped through my meteorology degree, so please take all of this with a giant pinch of salt.
But I do think you can extrapolate from lived experience. Not in a "here is the one true trajectory" way, more in a "the slope is obvious even if the exact curve is not" kind of way.
I have three clear experience points in my head from the last year.
January 2025: AI code completion was barely usable. It was good at repetitive completion of lines of code, and occasionally it would save you a few minutes, but it rarely changed the shape of your work. You were still writing almost everything. You were still thinking at the level of syntax and implementation most of the day. The AI felt like a smart autocomplete, not a collaborator.
August 2025: It got better. Now the AI could work by itself for a few minutes. It could produce whole functions, and sometimes it could even connect a few ideas together. But you still had to do a lot of the cognitive heavy lifting. The main skill was chunking: breaking a task into the right sized pieces so the AI could succeed, and then doing the to-and-fro to correct course. It was helpful, but it still felt like you were "driving".
December 2025: Substantial improvement. The AI could work on its own for tens of minutes. You could give it whole outcomes to create. Frequently there was little to amend. The interaction moved up a level. Less "write this function" and more "deliver this outcome, here are the constraints, test it, document it, and show me your assumptions".
From those three points, you can imagine three possible scenarios:
- This is exponential and keeps accelerating.
- This is linear until it hits a ceiling.
- We have already hit the ceiling.
Personally, I think reality will be a mix of the first two for a while, and then a ceiling. But here is the key point: I do not think it matters anymore.
Because even if we got no further improvement from AI at all, software development is already on a different track. The tools have crossed a threshold where the economics and the process change. That means the job changes. It means the organisation changes. And it means what gets built changes.
Three things that will change this year, even with zero further model improvement
1. More software, not fewer developers
The popular belief is "AI means we need fewer developers". I think the opposite is more likely: lower cost means more software. When the cost of creation drops, demand expands. Entire categories of work that were not worth building before become viable. We will build more because we can justify more. The constraint shifts from "can we afford to build it?" to "should we build it, and what will we learn?" That still needs developers. It just needs them in a different posture.
2. Low code is slow now
For a long time, "low code" had a fairly clean pitch. Faster to build, easier to maintain, decent enough UX, and you do not need scarce engineering talent for everything. That balance is flipping. When real code can be generated quickly, iterated quickly, and explained quickly, then low code starts to feel… slow. Slow to express nuance. Slow to extend. Slow to debug in the weird edge cases. Slow to produce great user experience. And often hard to support because the complexity is hidden inside a platform you do not fully control.
I am not saying low code disappears. It will remain strong in certain contexts: workflows, simple CRUD, teams without engineering capacity, and situations where the platform constraints are actually a feature. Low code may well get many AI advances of its own this year, and I am looking forward to seeing them.
3. The role and process of development will change
This is the biggest one.
The SDLC most organisations run today was designed to mitigate risk from human developers. It assumes implementation is expensive in time and attention, so you spend a lot of process effort preventing waste. Long specification phases, gated approvals, handoffs, and cautious sequencing.
AI inverts that. Implementation gets cheap. Iteration gets cheap. Prototypes get cheap. The risk shifts from "we spent months building the wrong thing" to "we generated a lot of output quickly and now we do not know what is true, safe, correct, or maintainable". So the SDLC has to be redesigned for the risks of AI development.
And developers' roles change too. They have to become leaders faster, focusing on outcome, not output. Less "how do I implement this?" and more "what should exist, what constraints matter, what is the evidence it works, what are the failure modes, and how will we operate it?" This creates an opportunity to get closer to the customer, finally having the time to understand their needs deeply while still shipping.
What should low code companies do this year?
This is not a doom scenario. Low code is not disappearing. But the ground is moving. If real code becomes dramatically faster and cheaper to produce, the value proposition of "faster than engineers" weakens.
AI-coded apps still need somewhere to live. They need hosting, identity, data access, permissions, monitoring, upgrades, audit trails, and governance. Code generation solves creation. It does not solve operation.
1. Add real code and code assistants into the platform
Low code platforms historically abstracted away code. That abstraction was the value. But if writing high-quality code becomes fast and cheap with AI, abstraction alone is no longer enough.
The smart move is to integrate:
- Embedded code assistants
- First-class support for real code alongside visual builders
- AI agents that can extend, refactor and document user projects
- Clean export to maintainable repositories
- A stable runtime where AI-generated apps can safely live
In other words, do not fight real code. Host it.
2. Shut off access and protect the data moat
There is another path. It is less open, but historically effective. Enterprise software companies can build defensibility not just on tooling, but on data centrality. Once your system becomes the system of record, integration friction becomes your moat.
Low code platforms inside enterprises could lean into this:
- Deep vertical specialisation
- Strong compliance and governance layers
- Identity, audit and permission models enterprises trust
- Native integration with critical systems of record
- Restricted data access that forces AI-generated apps to "live" inside the platform
It is defensible. But it risks slowing innovation if protection becomes the goal rather than enablement.
3. Become the structured data layer for AI ecosystems
Focus on becoming the trusted, structured, permissioned data source that AI systems depend on.
That could mean:
- Clean, well-documented domain schemas
- Providing secure APIs that expose high-quality, structured data
- Creating robust permission and policy models for machine access, with neat ways of applying user permissions for agents
- Supporting machine-readable audit logs and provenance
- Acting as a governed data exchange between enterprise systems and AI tools
In this model, value does not sit in the UI builder. It sits in data integrity, schema design, access control and reliability.
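As a rough sketch of what that could look like in practice, here is a minimal, hypothetical example (the `GovernedStore` class, the `invoice` schema, and the agent names are all invented for illustration): a schema-checked record store where every machine read is permission-gated and written to a machine-readable audit trail, and where an agent inherits only the scopes of the user it acts for.

```python
import time

# Hypothetical domain schema: table name -> field name -> expected type.
SCHEMA = {"invoice": {"id": str, "customer": str, "amount": float}}

class GovernedStore:
    """Sketch of a governed, permissioned data layer for AI agents."""

    def __init__(self):
        self.records = {}    # table name -> list of validated records
        self.audit_log = []  # machine-readable provenance trail

    def write(self, table, record):
        # Validate against the declared domain schema before accepting.
        spec = SCHEMA[table]
        if set(record) != set(spec) or any(
            not isinstance(record[k], t) for k, t in spec.items()
        ):
            raise ValueError(f"record does not match {table} schema")
        self.records.setdefault(table, []).append(record)

    def read(self, table, agent, user_scopes):
        # An agent acts with the permissions of its user, nothing more.
        if table not in user_scopes:
            self._log(agent, table, allowed=False)
            raise PermissionError(f"{agent} may not read {table}")
        self._log(agent, table, allowed=True)
        return list(self.records.get(table, []))

    def _log(self, agent, table, allowed):
        # Every machine access is logged, allowed or not.
        self.audit_log.append(
            {"ts": time.time(), "agent": agent, "table": table, "allowed": allowed}
        )

store = GovernedStore()
store.write("invoice", {"id": "INV-1", "customer": "Acme", "amount": 120.0})
rows = store.read("invoice", agent="report-bot", user_scopes={"invoice"})
```

The point of the sketch is where the value sits: in the schema check, the scope gate, and the audit entry, not in any UI on top.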
These paths are not mutually exclusive. A platform can integrate AI coding deeply, protect its enterprise data layer, and still serve as a structured data source for AI ecosystems. But doing nothing is not a strategy.
If the cost of real code continues to collapse, the question is no longer "how do we remove complexity?" It becomes "where do execution, governance, and long-term ownership live?"
What should low code developers do to prepare?
This is the part where I want to be very careful. Low code developers are not "lesser developers". They are often closer to the business, faster at delivery, and more pragmatic than many traditional teams. That position actually becomes more valuable in an AI-heavy world, not less.
But some adjustments will help.
1. Learn to code (yes, really)
This is probably the only group of developers I would say this to so directly. Not because low code is bad, but because the boundary between low code and real code is dissolving. AI is erasing much of the mechanical difficulty that made traditional coding inaccessible in the first place.
You do not need to become a compiler expert or a systems engineer. But you do need to be comfortable reading code, reasoning about it, and shaping it at a high level.
AI-generated apps will increasingly mix visual workflows, generated code, and handwritten code. The people who can move confidently across that boundary will have leverage. The people who cannot may find themselves constrained by the platform's edges.
Learning to code in the AI age is about understanding what is possible, what is safe, and what is fragile.
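To make that concrete, here is a small, hypothetical example of the kind of fragility worth being able to spot. Both functions below look correct at a glance; the first carries Python's mutable-default-argument trap, where one list is shared across every call. The function names are invented for illustration.

```python
def tag_item_fragile(item, tags=[]):
    # Fragile: the default list is created once and shared across calls.
    tags.append(item)
    return tags

def tag_item_safe(item, tags=None):
    # Safe: a fresh list is created on every call that omits `tags`.
    tags = [] if tags is None else list(tags)
    tags.append(item)
    return tags

first = tag_item_fragile("a")
second = tag_item_fragile("b")
# Surprise: `second` is ["a", "b"], and `first` is the very same list.
```

Nothing here requires being a systems engineer. It requires being able to read generated code and ask "what happens on the second call?"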
2. Loosen platform loyalty and diversify
Low code developers are notoriously loyal to their platforms. That loyalty has made sense. You invest years learning a tool, building patterns, earning trust inside an organisation, and becoming the go-to person.
But now is probably the time to be a bit more flexible.
Some low code platforms will adapt extremely well to AI. Others will struggle. Some will open up. Others will double down on abstraction and control. The risk is not being loyal. The risk is being trapped.
Diversifying does not mean abandoning your platform. It means:
- Understanding how the same problem is solved elsewhere
- Learning where your platform integrates well and where it does not
- Being able to explain trade-offs, not just features
- Being ready if your organisation's strategy changes
3. Shift left toward the customer
AI-coded apps still need deep understanding of customer context. In fact, they need it more than ever. AI is very good at producing something. It is much worse at producing the right thing without guidance.
Low code developers are already closer to the customer than most engineers. They understand workflows, exceptions, constraints, and the political reality of how organisations actually operate.
Low code developers are in a uniquely strong position to do that translation between customer need and working software. This shift does not reduce the importance of low code developers. It changes where their leverage comes from. Less time spent fighting tools. More time spent shaping outcomes. More proximity to customers. More influence over what actually gets built.
The thread tying this together
Even if the curve flattens tomorrow, software development will look very different by the end of this year.
The ceiling does not matter because the threshold has already been crossed. The economics have changed. The process has to change. The roles have to change. And the platforms that serve developers, whether low code or traditional, have to change too.
The question is not whether this shift is coming. It is whether you are designing for it on purpose.

