Ron Porat
|
June 6, 2025

No Cards, No Crystal Ball: Predicting AI Future by Rewinding Software’s Past

“To know the road ahead, ask those coming back.” Chinese proverb

In the previous article, we argued that many of the so-called “breakthrough” concepts in modern AI, such as agentic workflows and contextual protocols, are in fact echoes of established software engineering principles. MCP (Model Context Protocol) looks suspiciously like a DAL (Data Access Layer). Agentic AI behaves a lot like distributed function orchestration.

But that recognition isn’t a dismissal of innovation; it’s a strategy. If we can identify which software engineering patterns are being rediscovered, we can also predict which ones are still missing. This insight is a goldmine for thoughtful investors and builders alike.

Rediscovering the Familiar

The similarity between MCP (Model Context Protocol) and the classic Data Access Layer (DAL) is far more than skin-deep. DALs were designed to abstract away the complexity of data sources, offering a clean, consistent interface to the rest of the application. MCP performs a remarkably similar role: it enables LLMs and AI agents to access structured data through standardized APIs. It’s the same foundational principle, adapted to a new paradigm.

Think back to the early ’90s, when Microsoft introduced ODBC (Open Database Connectivity), a breakthrough that let developers interact with various databases through a common interface. MCP is essentially the ODBC of the AI era.
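The shared principle can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and method names are invented, not from any real DAL or MCP library): callers depend on a stable interface, while backends remain interchangeable behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical DAL-style abstraction: the caller depends only on the
# interface, never on a particular backend -- the same principle MCP
# applies to data and tool access for AI agents.
class DataSource(ABC):
    @abstractmethod
    def query(self, question: str) -> str: ...

class SqlSource(DataSource):
    def query(self, question: str) -> str:
        return f"[sql] rows matching: {question}"

class FileSource(DataSource):
    def query(self, question: str) -> str:
        return f"[file] lines matching: {question}"

def answer(source: DataSource, question: str) -> str:
    # Yesterday the caller was an application; today it is an agent.
    # Either way, it never sees backend details.
    return source.query(question)
```

Swapping `SqlSource` for `FileSource` changes nothing for the caller, which is exactly the decoupling ODBC offered in the ’90s.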

Agentic AI, meaning systems built from prompts, memory, and tool invocation, also echoes an earlier architectural shift: service-oriented architecture (SOA) and, later, microservices. Both distribute logic across loosely coupled components with defined roles and communication protocols. Yesterday we orchestrated services using JSON and REST; today, we do it with prompts and context windows.

These aren’t entirely new inventions; they’re rebrandings of proven patterns. And recognizing that gives us a strategic advantage: it lets us look ahead by looking behind. If we can identify what’s already been rediscovered, we can start to anticipate which foundational elements are still missing, and where the next big opportunities lie.

The Big Question: What Else Is Missing?

If DAL became MCP and service orchestration became Agentic AI, what are the other engineering constructs that haven’t been reinvented yet in the AI-native world?

Here are some examples of missing puzzle pieces: engineering fundamentals that haven’t yet been fully realized in LLM-based systems, and where tomorrow’s infrastructure winners might be hiding.

1. Session and Identity Management for Agents

In classical software systems, we can’t do much without authenticated users and persistent sessions. But most LLM-based agents today are effectively stateless: they operate without identities, roles, or histories unless that data is explicitly re-fed every time.

To empower agents to collaborate, personalize, and evolve over time, we need native session management. Imagine:

  • Persistent agent memory per user
  • Multi-agent session handoffs
  • Role-based access controls

This will be essential in any production-grade AI environment, from enterprise tools to multi-agent ecosystems.
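The three bullets above can be combined into one small sketch. Everything here is hypothetical (the `AgentSession` type, the ACL shape, and the `handoff` method are invented for illustration): per-user memory persists across turns, a handoff carries that memory to another agent, and a role-based allow-list gates tool use.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of native session state for agents: per-user
# memory, a role, and a handoff that carries context between agents.
@dataclass
class AgentSession:
    user_id: str
    role: str
    memory: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def handoff(self, new_role: str) -> "AgentSession":
        # A new agent takes over the session but keeps the history,
        # so context survives the multi-agent handoff.
        return AgentSession(self.user_id, new_role, list(self.memory))

def can_use_tool(session: AgentSession,
                 tool: str,
                 acl: dict[str, set[str]]) -> bool:
    # Role-based access control: a role may only call allow-listed tools.
    return tool in acl.get(session.role, set())
```

A real implementation would persist sessions in a store and authenticate the user, but the shape of the missing layer is already familiar from classical web frameworks.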

2. Version Control for Prompts, Agents, and Tools

We use Git to track code changes, manage dependencies, and collaborate. But how do you manage:

  • Prompt iterations?
  • Toolchain configurations for an agent?
  • Multi-agent system orchestration plans?

Today, it’s all ad hoc. But soon we’ll need “Git for AI”: a system that tracks prompt evolution, version-locks agent capabilities, and allows rollback and testing.
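To make the idea concrete, here is a toy, hypothetical sketch of such a system (the `PromptStore` class is invented, not a real tool): prompt revisions are content-addressed by hash, the history acts as a movable head, and rollback restores the previous revision.

```python
import hashlib

# Hypothetical "Git for AI" sketch: a content-addressed store of
# prompt versions, so any revision can be pinned or rolled back.
class PromptStore:
    def __init__(self) -> None:
        self.versions: dict[str, str] = {}   # revision hash -> prompt text
        self.history: list[str] = []         # ordered revision hashes

    def commit(self, prompt: str) -> str:
        # Hashing the content gives a stable, Git-like revision id.
        rev = hashlib.sha256(prompt.encode()).hexdigest()[:8]
        self.versions[rev] = prompt
        self.history.append(rev)
        return rev

    def head(self) -> str:
        return self.versions[self.history[-1]]

    def rollback(self) -> str:
        # Discard the latest revision pointer; the content stays
        # addressable by hash, as in Git.
        self.history.pop()
        return self.head()
```

A production version would also track which model, tools, and parameters each prompt revision was tested against, so an agent’s whole capability set can be version-locked together.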

3. Observability and Debugging in AI Systems

Traditional software has logging, tracing, and dashboards. But with LLMs, failures are often semantic or contextual:

  • Why did the agent make that decision?
  • What memory did it use?
  • Which tools failed silently?

The need for LLM observability is growing. Imagine:

  • Prompt flow visualizations
  • Execution traces across agents
  • Time-travel debugging for memory-based agents
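An execution trace of the kind listed above can be sketched in a few lines. This is a hypothetical illustration (the `Trace` class and its methods are invented): every agent step is recorded with its inputs and outputs, decisions can be queried per agent, and silent tool failures are surfaced instead of swallowed.

```python
import time

# Hypothetical execution-trace sketch for agent observability:
# record each step, then query the trace after the fact.
class Trace:
    def __init__(self) -> None:
        self.steps: list[dict] = []

    def record(self, agent: str, tool: str,
               inputs: str, output: str) -> None:
        self.steps.append({
            "ts": time.time(), "agent": agent,
            "tool": tool, "inputs": inputs, "output": output,
        })

    def why(self, agent: str) -> list[dict]:
        # "Why did the agent make that decision?" -- replay its steps.
        return [s for s in self.steps if s["agent"] == agent]

    def silent_failures(self) -> list[dict]:
        # "Which tools failed silently?" -- empty outputs are suspects.
        return [s for s in self.steps if s["output"] == ""]
```

Time-travel debugging would extend this with snapshots of agent memory at each step, so a run can be rewound and replayed from any point.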

4. Interfaces and Contracts for LLM Agents

When services communicate, they rely on contracts (like Swagger/OpenAPI) that define what’s expected. Right now, prompt-based agents are often brittle and untyped.

We need something akin to:

  • Agent Interface Descriptions (AIDs) that define what inputs an agent expects and what outputs it guarantees
  • Failover patterns for when a tool call fails or an LLM response is nonsensical
  • Schemas that ensure compatibility between agents and toolchains

This would unlock composability and enterprise-grade trust.
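A minimal version of such a contract might look like this. The `AgentContract` class and the “AID” framing are hypothetical, invented for illustration: declared input fields are checked before the call, output fields are checked after it, and a contract-violating response fails over to a safe default.

```python
# Hypothetical Agent Interface Description (AID) sketch: declared
# inputs and outputs are enforced around an untyped agent call.
class AgentContract:
    def __init__(self, inputs: set[str], outputs: set[str]) -> None:
        self.inputs = inputs    # field names the agent requires
        self.outputs = outputs  # field names the agent guarantees

    def call(self, agent, payload: dict, fallback: dict) -> dict:
        # Precondition: the payload must cover all declared inputs.
        if not self.inputs <= payload.keys():
            raise ValueError(f"missing inputs: {self.inputs - payload.keys()}")
        result = agent(payload)
        # Failover: a response that breaks the contract is replaced
        # by a safe default instead of propagating downstream.
        if not self.outputs <= result.keys():
            return fallback
        return result
```

In practice the schemas would be richer than field names (types, ranges, semantic checks), but even this thin layer is what lets agents compose the way typed services do.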

From Framework to Investment Thesis

For investors looking to deploy capital in the AI infrastructure space, this framework offers a guiding map. Wherever an old software engineering principle hasn’t yet been adapted to the AI paradigm, that’s where opportunity lies.

Let’s name a few categories likely to mature into investable verticals:

  • AI-native observability tools
  • Prompt + agent lifecycle managers
  • AI interface standards and protocols
  • Persistent agent identity layers
  • Policy engines for AI decisions and output safety
  • Testing frameworks for reasoning, tool use, and hallucination handling

These are not just engineering problems; they are also valid ideas for new startups in the field.

The Innovator’s Playbook: Reimagine, Don’t Reinvent

If you’re a founder, the best ideas may not come from asking “What can AI do that’s new?” but instead: “What proven practice from engineering hasn’t yet made it into AI?”

  • How did we scale the web?
  • How did we debug distributed systems?
  • How did we manage developer workflows at scale?

These answers already exist, just in a different form. The AI world hasn’t fully caught up yet.

Conclusion: Invest in Memory, Not Just Magic

The hype cycle around AI will continue to push “magical” demos, one-off breakthroughs, and zero-to-one illusions. But beneath the surface, a new software stack is being born, and it’s one that still follows timeless engineering patterns.

Modern AI systems need the same things we’ve always needed:

  • Abstractions
  • Contracts
  • Versioning
  • Monitoring
  • Control

The difference? Now we’re building them for agents, not users; for prompts, not code. But the discipline is still software engineering.

Those who understand and act on it will define the next generation of AI infrastructure.