From Models to Systems
The Primitives and Alignment Strategies Defining this Shift
We’ve now entered the systems phase of AI. In this phase, outcomes are driven less by model size and more by how effectively models are integrated into context, tools, and feedback loops.
A small set of primitives is defining this shift toward systems. These primitives are the foundational building blocks that determine how AI systems operate, learn, and create value in production:
Tool use turns models from conversationalists into operators. Systems now invoke calculators, databases, ERPs, underwriting engines, and code execution as first-class actions. The result is precision and accountability: a model doesn’t just suggest a next step; it fetches the record, updates the claim, drafts the order, and logs what happened. For buyers, this moves AI from “assistive” to “closed-loop,” where the unit of value is a completed workflow.
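A closed-loop tool runtime can be sketched in a few lines. This is a minimal illustration, not a real product's API: the tool names, the claims "database," and the audit log shape are all hypothetical stand-ins for the ERPs and underwriting engines mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class ToolRuntime:
    """Executes tools as first-class actions and logs what happened."""
    tools: dict                              # tool name -> callable
    audit_log: list = field(default_factory=list)

    def run(self, tool_name: str, **kwargs):
        # Closed-loop: the system acts, rather than just suggesting a step.
        result = self.tools[tool_name](**kwargs)
        self.audit_log.append({"tool": tool_name, "args": kwargs, "result": result})
        return result

# Hypothetical domain tools standing in for a claims database / ERP.
records = {"claim-42": {"status": "open"}}

def fetch_record(claim_id):
    return records[claim_id]

def update_claim(claim_id, status):
    records[claim_id]["status"] = status
    return records[claim_id]

runtime = ToolRuntime(tools={"fetch_record": fetch_record,
                             "update_claim": update_claim})
runtime.run("fetch_record", claim_id="claim-42")
runtime.run("update_claim", claim_id="claim-42", status="resolved")
```

The unit of value here is the completed workflow: the record was fetched, the claim was updated, and every action is in the audit log for accountability.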
Dynamic context is the new data layer. Protocols like the Model Context Protocol (MCP) allow AI systems to dynamically access the right context (e.g. documents, APIs, tools, or user state) at runtime. This shifts systems away from static, preloaded information toward context with deeper personalization and situational awareness. As a result, context becomes more portable and composable across models and tools, and the strategic moat moves from owning a dataset to owning the mapping between data, policy, and decision.
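The shift from preloaded prompts to runtime context can be sketched as a set of providers queried per task. This is an illustrative sketch, not MCP itself: the provider functions, the relevance logic, and the character budget are all assumptions chosen to show the shape of the idea.

```python
def assemble_context(task: str, providers, budget_chars: int = 500) -> str:
    """Pull whatever context the current task needs, within a size budget."""
    snippets = []
    for provider in providers:
        snippet = provider(task)       # e.g. a document store, an API, user state
        if snippet:
            snippets.append(snippet)
    # Naive truncation to respect the context budget.
    return "\n".join(snippets)[:budget_chars]

# Hypothetical providers mapping data and policy to the current decision.
def user_state(task):
    return "user_tier: premium" if "billing" in task else ""

def policy_docs(task):
    return "policy: refunds allowed within 30 days" if "refund" in task else ""

ctx = assemble_context("billing refund request", [user_state, policy_docs])
```

The moat in this framing is not either provider's data, but the mapping logic deciding which context reaches which decision.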
Intelligent routers decide which model (or sequence of models) handles each task based on complexity, latency, or cost. Systems dynamically allocate compute to the best fit for the moment. Over time, these routers learn patterns: when to call a reasoning model, when to shortcut to a smaller one, when to chain specialized tools. The result is higher efficiency and consistent quality under real-world constraints.
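A router of this kind can be sketched as a policy over complexity and cost. The model tier names, the word-count complexity proxy, and the thresholds below are all hypothetical; a production router would learn these from telemetry rather than hard-code them.

```python
def estimate_complexity(prompt: str) -> float:
    # Crude proxy: longer prompts score as more complex (capped at 1.0).
    return min(1.0, len(prompt.split()) / 100)

def route(prompt: str, max_cost: float) -> dict:
    """Pick a model tier for this request based on complexity and budget."""
    complexity = estimate_complexity(prompt)
    if complexity > 0.7 and max_cost >= 1.0:
        choice = "reasoning-large"     # expensive, highest quality
    elif complexity > 0.3:
        choice = "general-medium"
    else:
        choice = "fast-small"          # cheap shortcut for easy tasks
    return {"model": choice, "complexity": complexity}

decision = route("summarize this short note", max_cost=1.0)
```

Over time, the hard-coded thresholds would be replaced by learned patterns: when to call a reasoning model, when to shortcut, when to chain tools.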
Telemetry captures how the system behaves in production: which routes were chosen, how long they took, what outcomes they produced, and how users responded. It's this observability layer that turns black-box inference into a data-driven optimization loop. With the right metrics, systems can automatically tune routing, fine-tuning, and tool use, learning to improve over time.
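A minimal version of that observability layer records route, latency, and outcome per request, then aggregates per-route success rates that a router could consume. The event schema and the success metric are illustrative assumptions.

```python
import time
from collections import defaultdict

class Telemetry:
    """Records per-request events and aggregates per-route success rates."""

    def __init__(self):
        self.events = []

    def record(self, route: str, started: float, success: bool):
        self.events.append({
            "route": route,
            "latency_s": time.monotonic() - started,
            "success": success,
        })

    def success_rate(self) -> dict:
        totals = defaultdict(lambda: [0, 0])   # route -> [successes, count]
        for event in self.events:
            totals[event["route"]][0] += event["success"]
            totals[event["route"]][1] += 1
        return {route: s / n for route, (s, n) in totals.items()}

telemetry = Telemetry()
t0 = time.monotonic()
telemetry.record("fast-small", t0, success=True)
telemetry.record("fast-small", t0, success=False)
rates = telemetry.success_rate()
```

Feeding `rates` back into the routing policy is what closes the optimization loop described above.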
Certain alignment strategies are being built on top of these primitives. Synthetic data, verifiable rewards, and multi-turn objectives are redefining how systems learn from their own outputs. Instead of relying on static human labels, models now generate training signals through AI judges, validators, and simulators. This lowers the cost of personalization and enables systems to align directly with real-world performance metrics. In physical domains like robotics, chemistry, and manufacturing, that same principle extends to real-world grounding, where feedback comes from sensors and field data.
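The verifiable-reward idea can be sketched as a loop where a programmatic validator, not a human labeler, scores each output. Everything here is a toy stand-in: the arithmetic task, the validator, and the deliberately flawed candidate model are assumptions used to show how checkable outcomes become training signal.

```python
def validator(problem: list, answer: int) -> bool:
    """Verifiable reward: check the answer against a result we can compute."""
    return answer == sum(problem)

def collect_training_signal(problems, candidate_fn):
    """Score each candidate answer and keep the reward as training signal."""
    dataset = []
    for problem in problems:
        answer = candidate_fn(problem)          # the model's attempt
        reward = 1.0 if validator(problem, answer) else 0.0
        dataset.append({"problem": problem, "answer": answer, "reward": reward})
    return dataset

# A flawed candidate model: drops the last element on longer inputs.
def candidate(problem):
    return sum(problem[:-1]) if len(problem) > 3 else sum(problem)

signal = collect_training_signal([[1, 2], [1, 2, 3, 4]], candidate)
```

In physical domains, the validator is replaced by sensors and field data, but the loop has the same shape: generate, verify, and train on what checks out.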
Taken together, these primitives and alignment strategies are pushing the frontier toward systems composed of smaller, cheaper, specialized models coordinated through orchestration layers.
This shift changes both the value proposition and the cost structure of AI applications. For applications that offset or replace labor costs, we’ve seen experimentation with charging hourly or monthly rates that mirror human salaries. This approach can work in low-competition markets, but as capabilities commoditize and more systems enter the same workflows, it’s unlikely to remain sustainable.
Longer term, differentiation moves toward value-aligned pricing in workflows where AI is deeply embedded, domain-specific, and operationally accountable. In these situations, buyers will pay for measurable outcomes such as a resolved support ticket, a closed claim, or a completed legal document. In domains with high variance or financial impact, systems may even take on risk directly, charging a percentage of realized savings or revenue.
As these systems mature, differentiation will move up the stack to things like how context is composed, how decisions are routed, and how failures are detected. Tight coupling between domain workflows and outcomes is needed to build defensible moats. This is also one reason why I believe verticalization matters: the closer a system is to the ground truth of a domain, the easier it is to align behavior with value.
For builders, this means thinking more about control planes that govern routing, context, and accountability. For buyers, it means evaluating AI as an operator, one that can be audited, improved, and ultimately trusted with responsibility. And for the market as a whole, it means a redefinition of ROI.

