
Software Cognitive Terminal Velocity

What happens when the speed of AI agent development reaches the maximum limit of human understanding

Modern agentic software development has moved beyond automating the tedious; it now offers the ability to generate entire systems with minimal human guidance. This output is a raw function of three variables: agent count, compute allocation, and the team's tolerance for technical debt.

While these agents operate within our credentials and on our behalf, they function at a velocity and complexity that humans cannot match. This creates a critical inflection point. In a corporate setting, incentives prioritize immediate velocity over long-term legibility, and the complexity of the codebase eventually surpasses the threshold beyond which the logic is no longer understandable.

Once we cross this threshold, the agents are no longer just tools. They're the only systems capable of understanding the logic they created. At this moment, developers will find themselves in a tight loop: we must deploy more agents to decipher the output of the previous agents' code. There's a role reversal where humans are no longer architects, but cogs in a machine that produces its own demand.

This is an idea Ivan Illich explored in Energy and Equity. He identifies a specific "watershed" inversion moment where a technology is optimized in a way that turns it into a "radical monopoly" which destroys the very thing it was supposed to help. In his example, too many high-speed cars actually destroy mobility by making it impossible to walk. In our world, too much agentic velocity destroys "traffic" in the codebase by making it impossible for humans to navigate the logic with their own minds.

(A slightly more fun exploration is in Manna by Marshall Brain. It's more on-the-nose regarding the human/AI relationship, but he wrote it long before LLMs were a thing.)

If this agentic inversion is going to take place (and I think it will), we'll likely encounter a moment when maintaining the system consumes all of the time of the people who were supposed to "save time" by using it.

Indicators of the inversion

  • A Review-to-Code Ratio Flip: Normally, code takes way less time to review than to write. In an inversion, you spend vastly more time acting as an auditor for an agent than you would have spent just writing the logic yourself.

  • Context Dependency: You now rely on agents to understand the system. You have no intuitive awareness of the "why" or "how" behind a feature.

  • Pipeline Bottlenecks: Your traditional SDLC is overloaded. The agents produce so much output that your CI/CD and testing suites require a dramatic increase in maintenance and capital.

  • Standardization Monopoly: To keep the agents "efficient," workflows and practices are forced across entire teams. This kills individual expression; the human must now conform to an agent's requirements.
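To make the first indicator concrete, here's a minimal sketch of how you might track a review-to-code ratio flip. The data shape, field names, and the threshold of 1.0 are all illustrative assumptions, not a real metrics pipeline:

```python
# Hypothetical sketch: detecting a review-to-code ratio flip.
# ChangeSet and the 1.0 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ChangeSet:
    author: str           # "human" or "agent"
    write_minutes: float  # time spent generating the change
    review_minutes: float # human time spent auditing it

def review_to_code_ratio(changes: list[ChangeSet]) -> float:
    """Total human review time divided by total authoring time."""
    write = sum(c.write_minutes for c in changes)
    review = sum(c.review_minutes for c in changes)
    return review / write if write else float("inf")

def inversion_flipped(changes: list[ChangeSet], threshold: float = 1.0) -> bool:
    """True once auditing a change costs more than writing it would have."""
    return review_to_code_ratio(changes) > threshold

# Agent-authored changes: generated in minutes, audited for hours.
history = [
    ChangeSet("agent", write_minutes=5, review_minutes=90),
    ChangeSet("agent", write_minutes=3, review_minutes=45),
]
print(inversion_flipped(history))  # True: review time dwarfs write time
```

A ratio persistently above 1.0 means the team is paying more to audit agent output than it would have paid to write the logic directly, which is the inversion in miniature.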

From the outside, companies hitting this terminal velocity probably look like they're cranking out features. Inside, they're probably spending every hour just keeping the agents and their agents' tooling alive. Honestly, I think this is somewhat inevitable in most settings. The argument for holding back on velocity so that people can understand things or save time is not exactly "economically rational", and the current market conditions do not favour labour. That being said, I do think there's a massive opportunity to build better tools, and even re-think the SDLC with agents in mind. The existing SDLC is largely about the cost of engineering time plus the cost of coordination. With those costs basically falling through the floor on a per-unit basis, there's a pretty interesting opportunity in that space.