The Transmission Problem
LLMs removed the ceiling on code production. The principled stack is what keeps the output usable.
The last quarter-century of software ran on the friction between intent and execution. The bottleneck was translating high-quality requirements into stable, audited systems. Teams that invested in rigorous process moved faster because they had the confidence to deploy.
That equilibrium relied on a human ceiling for code production. With LLMs, that ceiling is gone. We are now flooding our systems with an unprecedented volume of intent, and if the requirements-to-production pipeline isn't automated through principled engineering, the entire machine will seize.
Bolt a high-output engine (LLM) onto a fragile chassis and you don't get a faster car. You get a mechanical failure. The cost is compounding faster than most teams have modeled.
The transmission in a software stack is the "Reconciliation Loop" (the mechanism that ensures what we want to happen actually matches what is happening). Without this loop, LLM output is a series of disconnected, imperative commands.
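The shape of that loop fits in a few lines. A minimal sketch, with invented names (diff, reconcile) and plain dicts standing in for real infrastructure state:

```python
def diff(desired: dict, actual: dict) -> list:
    """Compute the changes needed to make `actual` match `desired`. Pure: no side effects."""
    changes = []
    for key, value in desired.items():
        if key not in actual:
            changes.append(("create", key, value))
        elif actual[key] != value:
            changes.append(("update", key, value))
    for key in actual:
        if key not in desired:
            changes.append(("delete", key))
    return changes

def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of the loop: observe, diff, apply."""
    for change in diff(desired, actual):
        op, key = change[0], change[1]
        if op == "delete":
            del actual[key]
        else:
            actual[key] = change[2]
    return actual

desired = {"web": "v2", "db": "v1"}
actual = {"web": "v1", "cache": "v1"}
print(diff(desired, actual))
# → [('update', 'web', 'v2'), ('create', 'db', 'v1'), ('delete', 'cache')]
assert reconcile(desired, actual) == desired  # the loop converges on intent
```

Terraform runs this same observe-diff-apply cycle at industrial scale; the shape is identical.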
The shift from ClickOps to Infrastructure as Code makes this concrete.
The Declarative Guardrail
The move from ClickOps to Infrastructure as Code was already overdue. Teams avoided Terraform or OpenTofu because the expertise gap made clicking around a cloud console feel easier. That shortcut produced fragile, snowflake environments that couldn't be audited or reproduced.
That excuse is gone. LLMs reason well over declarative schemas. Point one at an imperative environment (raw AWS CLI, console state) and you get non-deterministic chaos. Point one at a Terraform definition and you get two engineering anchors: the LLM works within a structured, symbolic representation of real infrastructure, and running a plan produces a deterministic list of changes a human reviews before anything touches production.
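To make the anchor concrete, here is a minimal, hypothetical Terraform definition; the resource name and bucket value are illustrative, not from any real environment:

```hcl
# One declared resource: the desired state, as reviewable text.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"
}
```

Running terraform plan against this file yields a deterministic change set (against a fresh state: 1 to add, 0 to change, 0 to destroy) that a reviewer inspects before terraform apply touches anything.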
The Principled Container
The same logic applies to general software development. If an LLM is generating the logic, the stack needs a principled container to hold it. A high-fidelity LLM stack leans toward formal, functional programming with strict side-effect enforcement. Pure functions are referentially transparent, which makes them tractable to test and reason about, and they have no direct disk or network access. Generated code runs in an isolated sandbox where all state changes must be returned as data structures for human review. Robust type systems prevent the LLM from hallucinating an interface that doesn't exist.
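A minimal sketch of that container in Python. The FileWrite type and plan_release function are invented for illustration, and a real stack would enforce purity through the type system rather than by convention:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FileWrite:
    """An intended state change, expressed as data instead of performed as I/O."""
    path: str
    content: str

def plan_release(version: str) -> list[FileWrite]:
    """Pure: no disk or network access; returns intended effects for review."""
    return [
        FileWrite("VERSION", version + "\n"),
        FileWrite("CHANGELOG.md", f"## {version}\n"),
    ]

# The sandbox boundary: generated code only produces effect values.
# A separate, human-gated interpreter performs the real writes.
effects = plan_release("1.4.0")
for e in effects:
    print(f"would write {e.path!r}: {e.content!r}")
```

The key property is that plan_release can be run, diffed, and reviewed with zero risk, because applying the effects is a separate, explicitly approved step.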
The Shift in Expertise
The value moves from generating syntax to designing the principled systems that make that syntax useful. We've always built complex, interconnected systems with whatever tools were at hand. LLMs are the newest, highest-output tool in the shed.
The transmission needs to hold. I'm currently investigating sandboxing tools and experimenting with strict functional environments in this workflow. I'll document the experiments as I go, including exactly what breaks.
Tags: #SoftwareEngineering #TheTransmissionProblem #LLMs #InfrastructureAsCode #Terraform #FunctionalProgramming #DevOps #SiteReliabilityEngineering #GenerativeAI #SoftwareArchitecture