“To know the road ahead, ask those coming back.” Chinese proverb
In the previous article, we argued that many of the so-called “breakthrough” concepts in modern AI, such as agentic workflows and contextual protocols, are in fact echoes of established software engineering principles. MCP (Model Context Protocol) looks suspiciously like DAL (Data Access Layer). Agentic AI behaves a lot like distributed function orchestration.
But that recognition isn’t a dismissal of innovation; it’s a strategy. If we can identify which software engineering patterns are being rediscovered, we can also predict which ones are still missing. That insight is a goldmine for thoughtful investors and builders alike.
The similarity between MCP (Model Context Protocol) and the classic Data Access Layer (DAL) is far more than skin-deep. DALs were designed to abstract away the complexity of data sources, offering a clean, consistent interface to the rest of the application. MCP performs a remarkably similar role — only now, it enables LLMs and AI agents to access structured data through standardized APIs. It’s the same foundational principle, just adapted to a new paradigm.
Think back to the early ’90s, when Microsoft introduced ODBC (Open Database Connectivity), a breakthrough that let developers interact with various databases through a common interface. MCP is essentially the ODBC of the AI era.
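To make the analogy concrete, here is a minimal sketch, with illustrative class and method names rather than the real MCP specification or any particular DAL framework. Both layers present the same shape: a uniform interface over whichever data source happens to sit behind it.

```python
from abc import ABC, abstractmethod

# Classic Data Access Layer: one interface, many possible backends.
class CustomerDAL(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

class PostgresCustomerDAL(CustomerDAL):
    def get_customer(self, customer_id: str) -> dict:
        # A real implementation would issue SQL; this returns a stub row.
        return {"id": customer_id, "name": "Ada", "source": "postgres"}

# MCP-style resource handler (illustrative, not the actual protocol):
# the agent asks for a named resource and gets structured data back,
# without knowing anything about the underlying store.
class CustomerResource:
    def __init__(self, dal: CustomerDAL):
        self._dal = dal

    def read(self, uri: str) -> dict:
        # e.g. uri = "customers/42" -> delegate to whichever DAL is plugged in.
        _, customer_id = uri.split("/", 1)
        return self._dal.get_customer(customer_id)

if __name__ == "__main__":
    resource = CustomerResource(PostgresCustomerDAL())
    print(resource.read("customers/42"))
```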
Agentic AI — systems built from prompts, memory, and tool invocation — also echoes an earlier architectural shift: service-oriented architectures (SOA) and, later, microservices. Both distribute logic across loosely coupled components with defined roles and communication protocols. Yesterday we orchestrated services using JSON and REST; today, we do it with prompts and context windows.
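The parallel is easy to see in code. Below is a rough sketch (the service names and the LLM call are stand-in stubs, not real endpoints or APIs) of the same orchestration expressed twice: once over JSON and REST, once through a prompt and a context window.

```python
import json

# Stub for a classic REST call; a real system would make an HTTP request here.
def call_service(name: str, payload: dict) -> dict:
    return {"service": name, "ok": True, "echo": payload}

# Stub for an LLM call; a real system would hit a model API here.
def call_llm(prompt: str) -> str:
    return f"PLAN: check inventory, then charge the card. (prompt was {len(prompt)} chars)"

# Yesterday: logic distributed across services, coordinated with JSON over REST.
def fulfil_order_with_services(order: dict) -> dict:
    inventory = call_service("inventory-svc", {"sku": order["sku"]})
    payment = call_service("payment-svc", {"amount": order["total"]})
    return {"inventory": inventory, "payment": payment}

# Today: the same coordination, but the "protocol" is a prompt and a context window.
def fulfil_order_with_agent(order: dict) -> str:
    context = "You can call inventory and payment tools. Order: " + json.dumps(order)
    return call_llm(context)

if __name__ == "__main__":
    order = {"sku": "X-100", "total": 42.0}
    print(fulfil_order_with_services(order))
    print(fulfil_order_with_agent(order))
```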
These aren’t entirely new inventions — they’re rebrandings of proven patterns. And recognizing that gives us a strategic advantage: it lets us look ahead by looking behind. If we can identify what’s already been rediscovered, we can start to anticipate what foundational elements are still missing — and where the next big opportunities lie.
If DAL became MCP and service orchestration became Agentic AI, what are the other engineering constructs that haven’t been reinvented yet in the AI-native world?
Here are some of the missing puzzle pieces: engineering fundamentals that haven’t yet been fully realized in LLM-based systems, and the places where tomorrow’s infrastructure winners might be hiding.
In classical software systems, we can’t do much without authenticated users and persistent sessions. But most LLM-based agents today are effectively stateless. They operate without identities, roles, or histories unless explicitly re-fed that data every time.
To empower agents to collaborate, personalize, and evolve over time, we need native session management: agents with persistent identities, defined roles, and histories that survive beyond a single context window.
This will be essential in any production-grade AI environment, from enterprise tools to multi-agent ecosystems.
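As a rough illustration of what native session management could look like, here is a sketch in which an agent carries identity, role, and a bounded history across invocations. The names are hypothetical; no existing framework is implied.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentSession:
    """Persistent identity and history for an agent (illustrative only)."""
    agent_id: str
    role: str                          # e.g. "support-triage", "code-reviewer"
    history: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.history.append(f"{stamp} {event}")

    def context_window(self, last_n: int = 5) -> str:
        # Instead of re-feeding everything, hand the model a bounded slice of state.
        return "\n".join(self.history[-last_n:])

if __name__ == "__main__":
    session = AgentSession(agent_id="agent-7", role="support-triage")
    session.remember("User asked about a refund")
    session.remember("Escalated to billing agent")
    print(session.context_window())
```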
We use Git to track code changes, manage dependencies, and collaborate. But how do you manage prompt versions, agent capabilities, or the ability to roll back a change that breaks behavior?
Today, it’s all ad hoc. But soon we’ll need a “Git for AI”: a system that tracks prompt evolution, version-locks agent capabilities, and allows rollback and testing.
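Here is a hedged sketch of the idea, with invented names rather than an existing tool: content-address each prompt version the way Git content-addresses objects, so teams can pin, compare, and roll back.

```python
import hashlib

class PromptRegistry:
    """Tiny illustration of "Git for AI": versioned, rollback-able prompts."""

    def __init__(self):
        self._versions: dict[str, list[str]] = {}   # name -> ordered version hashes
        self._objects: dict[str, str] = {}           # hash -> prompt text

    def commit(self, name: str, prompt: str) -> str:
        digest = hashlib.sha256(prompt.encode()).hexdigest()[:12]
        self._objects[digest] = prompt
        self._versions.setdefault(name, []).append(digest)
        return digest

    def checkout(self, name: str, version: str = "") -> str:
        # Empty version means "latest".
        return self._objects[version or self._versions[name][-1]]

    def rollback(self, name: str) -> str:
        # Drop the latest version and return the previous one.
        self._versions[name].pop()
        return self.checkout(name)

if __name__ == "__main__":
    repo = PromptRegistry()
    repo.commit("triage", "Classify the ticket by urgency.")
    repo.commit("triage", "Classify the ticket by urgency and sentiment.")
    print(repo.rollback("triage"))   # back to the first version
```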
Traditional software has logging, tracing, and dashboards. But with LLMs, failures are often semantic or contextual: a hallucinated answer or a misapplied instruction won’t show up in a stack trace.
The need for LLM observability is growing: tooling that traces prompts, responses, and the behavior in between, the way we trace requests and exceptions today.
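As a minimal sketch of what such a trace might capture (the schema below is an assumption, not a standard), every model call would emit structured signals about the semantic layer, not just latency.

```python
import json
import time

def traced_llm_call(call_fn, prompt: str) -> str:
    """Wrap any LLM call and emit a structured trace (illustrative schema)."""
    start = time.time()
    response = call_fn(prompt)
    trace = {
        "latency_ms": round((time.time() - start) * 1000, 1),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        # Semantic signals: naive stand-ins for real evaluators.
        "empty_response": len(response.strip()) == 0,
        "echoed_prompt": prompt.strip().lower() in response.lower(),
    }
    print(json.dumps(trace))
    return response

if __name__ == "__main__":
    fake_llm = lambda p: "The refund policy allows returns within 30 days."
    traced_llm_call(fake_llm, "Summarize our refund policy.")
```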
When services communicate, they rely on contracts (like Swagger/OpenAPI) that define what’s expected. Right now, prompt-based agents are often brittle and untyped.
We need something akin to OpenAPI for agents: declared inputs, declared outputs, and validation that a prompt-based interface actually honors them.
This would unlock composability and enterprise-grade trust.
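Here is an illustrative sketch of what a “prompt contract” could look like, assuming a hypothetical triage agent: declare the output shape and reject anything that does not conform, just as an OpenAPI schema does for REST payloads.

```python
from dataclasses import dataclass
import json

@dataclass
class TriageResult:
    """Declared output contract for a hypothetical triage agent."""
    category: str
    urgency: int          # 1 (low) .. 5 (critical)

def parse_agent_output(raw: str) -> TriageResult:
    # Parse the agent's raw text as JSON and enforce the declared contract.
    data = json.loads(raw)
    result = TriageResult(category=str(data["category"]), urgency=int(data["urgency"]))
    if not 1 <= result.urgency <= 5:
        raise ValueError(f"urgency out of range: {result.urgency}")
    return result

if __name__ == "__main__":
    # Conforming output passes; anything else fails loudly instead of silently drifting.
    print(parse_agent_output('{"category": "billing", "urgency": 3}'))
```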
For investors looking to deploy capital in the AI infrastructure space, this framework offers a guiding map. Wherever an old software engineering principle hasn’t yet been adapted to the AI paradigm, that’s where opportunity lies.
Let’s name a few categories likely to mature into investable verticals: identity and session infrastructure for agents, version control for prompts and capabilities, LLM observability platforms, and contract layers for agent-to-agent communication.
These are not just engineering problems, but also valid ideas for new start-ups in the field.
If you’re a founder, the best ideas may not come from asking “What can AI do that’s new?” but instead: “What proven practice from engineering hasn’t yet made it into AI?”
These answers already exist in another form. The AI world just hasn’t fully caught up yet.
The hype cycle around AI will continue to push “magical” demos, one-off breakthroughs, and zero-to-one illusions. But beneath the surface, a new software stack is being born, and it’s one that still follows timeless engineering patterns.
Modern AI systems need the same things we’ve always needed: identity and state, version control, observability, and well-defined contracts.
The difference? Now we’re building them for agents, not users. For prompts, not code. But the discipline is still software engineering.
Those who understand and act on it will define the next generation of AI infrastructure.