Your AI Strategy Has an Architecture Problem: Milan Parikh on Building Event-Driven Infrastructure for AI That Actually Works

Your AI Strategy Has an Architecture Problem

The models work. The cloud infrastructure is in place. The data teams are capable. Yet the results from AI are still wanting, and the timelines are slipping. This is not an unusual situation; it is the expected reality for most enterprises deploying AI and machine learning in production environments. The problem is rarely with the AI itself. It is with the architecture feeding it. Most organizations have invested in a machine learning platform and a cloud-based data warehouse without first solving a more fundamental problem: how does data move from one system to another, with what latency, and with what guarantees? Until those questions are answered, no amount of model tuning will close the gap.

The answer is an event-driven architecture (EDA), an architecture style that has finally become the defining differentiator between organizations seeing AI success and those experiencing continued disappointment.

Milan Parikh is a Lead Enterprise Data Architect with 15 years of experience in cloud-native platforms, enterprise integration architecture, and AI-ready data infrastructure. A Fellow of the British Computer Society (FBCS) and Secretary of the BCS South Wales Branch and Enterprise Architecture Group, Milan specializes in Microsoft Dynamics 365, Azure iPaaS, Power Platform, and Microsoft Fabric. He is an International Keynote Speaker and Session Chair at IEEE World Conferences on AI, a judge at the CES Innovation Awards, and author of multiple research papers published in IEEE Xplore.

The Real Root Cause of AI Underperformance

"Many organizations assume the AI model is at fault when results disappoint," says Milan. "But the model is only as good as the data being fed to it, and that data is usually stale, incomplete, or arriving too late to matter."

Milan emphasizes that until organizations understand how data moves through their systems, at what latency, with what guarantees, and with clear domain ownership, AI will always operate on stale or untrustworthy data. No amount of model tuning closes that architectural gap.

What Changes When You Build on Events

Traditional architectures operate on a request-response model: systems ask other systems for data. Event-driven architectures flip this model entirely. Systems publish events (facts that something has happened), and other systems subscribe to those streams. A payment processed, a patient record updated, an inventory level breached: each is a durable, replayable event on a stream.

Platforms like Apache Kafka, Azure Event Hubs, and AWS Kinesis handle these streams at enterprise scale. An AI model consuming a payment processing stream does not need to query the payment system directly; it simply reads the stream. It requires no knowledge of the source system's schema, availability window, or internal implementation.
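The publish-subscribe and replay properties described above can be sketched with a toy in-memory stream. This is not Kafka itself, only an illustration of the model: producers append immutable events, and any number of consumers read from any offset without ever querying the source system.

```python
from dataclasses import dataclass, field

@dataclass
class EventStream:
    """Toy append-only stream: ordered, durable, and independently replayable."""
    events: list = field(default_factory=list)

    def publish(self, event: dict) -> int:
        self.events.append(event)
        return len(self.events) - 1  # offset of the newly appended event

    def read_from(self, offset: int = 0) -> list:
        # Any consumer can replay from any offset; the source system is never queried.
        return self.events[offset:]

stream = EventStream()
stream.publish({"type": "PaymentProcessed", "amount": 120.0})
stream.publish({"type": "PaymentProcessed", "amount": 45.5})

# A fraud model and a reporting job consume the same stream independently,
# with no knowledge of the payment system's schema or availability window.
fraud_view = stream.read_from(0)
report_view = stream.read_from(1)
```

In a real deployment, `EventStream` is replaced by a Kafka topic, Event Hub, or Kinesis stream, but the contract is the same: consumers depend on the stream, not on each other.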

"The same stream that feeds real-time inference feeds model training," Milan explains. "Online and offline features share the same lineage. That is the capability gap organizations discover the hard way after their first production AI failure."

Three Places Where AI Hits the Architecture Wall

Whether in healthcare data platforms, financial services integration, or manufacturing modernization, Milan identifies three recurring failure points for organizations without event-driven infrastructure:

  • Latency that makes inference irrelevant. Fraud detection, dynamic pricing, and clinical decision support require subsecond-to-second data. A batch pipeline refreshing every few hours cannot be tuned or patched to support these use cases; it must be rebuilt. Research into federated multi-agent systems for real-time fraud detection confirms that latency at the data layer, not model complexity, is the binding constraint in production environments (Parikh et al., "FraudSentinel: Federated Multi-Agent Reinforcement Learning for Privacy-Preserving Cross-Marketplace Fraud Detection," IEEE, 2025; Parikh et al., "TrustGraph: Federated Graph Neural Networks for Cross-Platform Trust and Fraud Propagation Analysis," IEEE, 2025).
  • Integration debt that defeats every new consumer. Each new AI model added to a traditional architecture creates a new direct dependency, a new contract to maintain, and a new failure point to monitor. Multiply across ten models, and the complexity becomes unmanageable. With EDA, the event stream serves as a single contract any consumer can subscribe to with no new integration required.
  • Training data that misrepresents production reality. Models trained on warehouse data see a pre-aggregated, delayed, and sanitized view of the truth. Event streams provide the raw reality, including corrections, retractions, and out-of-order messages. Models trained on event data perform better in production because they reflect what the system actually experienced.
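The training-serving point above, and Milan's observation that online and offline features share the same lineage, can be sketched as follows. The feature function and field names are illustrative; the idea is that the same feature logic, replayed over the same event stream, produces both the live inference feature and the training rows.

```python
# Hypothetical payment events; in production these come off the stream.
events = [
    {"ts": 1, "card": "A", "amount": 120.0, "label": 0},
    {"ts": 2, "card": "A", "amount": 999.0, "label": 1},
    {"ts": 3, "card": "B", "amount": 15.0,  "label": 0},
]

def txn_count_so_far(card: str, upto_ts: int, stream: list) -> int:
    """Online-style feature: transactions seen for this card up to a point in time."""
    return sum(1 for e in stream if e["card"] == card and e["ts"] <= upto_ts)

# Online inference: feature computed from the live stream at event time.
online_feature = txn_count_so_far("A", upto_ts=2, stream=events)

# Offline training: the SAME function replayed over the SAME stream,
# so training features match exactly what inference would have seen.
training_rows = [
    (txn_count_so_far(e["card"], e["ts"], events), e["label"]) for e in events
]
```

Because both paths share one definition and one source, there is no warehouse copy that can drift out of sync with production.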

Four Principles That Separate Good EDA from Expensive Noise

Poorly implemented event-driven architecture creates its own category of integration debt: undocumented streams, inconsistent schemas, unclear ownership, and absent governance. Milan has distilled four principles to prevent this:

  • Design events around business facts, not system states. "Order Placed" or "Patient Discharged" is the right framing. "Row Updated" or "Status Changed" is not. Business facts are self-explanatory, durable as systems evolve, and broadly useful across consumers.
  • Assign clear domain ownership to every stream. "Payments" owns "Payment Events." "Customers" owns "Customer Lifecycle Events." No co-ownership, no committee-authored schemas. Shared ownership is the single biggest source of inconsistent data in EDA implementations.
  • Enforce schema contracts at publish time. Use a schema registry such as Confluent or AWS Glue to enforce backward and forward compatibility before events reach the stream. Without this, AI feature pipelines break silently when a field is renamed. Comparative research on multi-model versus single-model database architectures reinforces why authoritative schema ownership at the source is a non-negotiable foundation (Parikh et al., "Unified Data Management: A Comparative Study of Multi-Model vs Single-Model Database Architectures," IEEE, 2025).
  • Connect the AI feature store directly to the stream. Not a downstream database. Not a warehouse copy. The stream itself. This architectural choice is what closes the gap between online inference features and offline training features, and it is only achievable when the stream is treated from day one as the authoritative source of truth.
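The publish-time schema check in the third principle can be illustrated with a toy registry. Real registries such as Confluent Schema Registry or AWS Glue enforce much richer Avro/Protobuf compatibility rules; this sketch captures only the core idea that a new schema version which drops or renames an existing field is rejected before it ever reaches the stream.

```python
class SchemaRegistry:
    """Toy registry: a new schema version is accepted only if it is backward
    compatible, i.e. it still contains every field of the previous version."""

    def __init__(self):
        self.versions = []

    def register(self, schema: set) -> int:
        if self.versions and not self.versions[-1] <= schema:
            missing = self.versions[-1] - schema
            raise ValueError(f"incompatible schema: drops or renames {missing}")
        self.versions.append(schema)
        return len(self.versions)

registry = SchemaRegistry()
registry.register({"payment_id", "amount", "currency"})
registry.register({"payment_id", "amount", "currency", "channel"})  # additive: ok

try:
    registry.register({"payment_id", "amt", "currency"})  # renamed field: rejected
    renamed_accepted = True
except ValueError:
    renamed_accepted = False
```

With this check at publish time, the "silently broken feature pipeline" failure mode becomes a loud, immediate error at the producer instead.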

How to Start Without Starting Over

"Estate-wide migration to event-driven architecture in a single program is exactly how organizations stall," says Milan. "The path forward is narrower, but faster."

Milan advises organizations to identify two or three business domains where data latency is already a documented constraint for an AI application on the current roadmap. These become the starting points, not because they are easiest, but because they have a deliverable attached to them.

For legacy systems that cannot naturally publish events, change data capture tools such as Debezium can extract row-level changes from database transaction logs and stream them to the event platform with no application code modification required. This single capability eliminates the most common justification for why legacy systems block EDA adoption. Work on reinforcement learning for dynamic workflow optimization in CI/CD pipelines demonstrates that adaptive pipeline execution is achievable even within existing infrastructure constraints (Parikh et al., "Reinforcement Learning for Dynamic Workflow Optimization in CI/CD Pipelines," IEEE, 2025).
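A Debezium connector of the kind described above is typically declared as a JSON configuration submitted to the Kafka Connect REST API. The sketch below shows a representative Postgres connector config as a Python dict; exact property names vary across Debezium versions, and the hostnames, credentials, and table names here are hypothetical.

```python
# Sketch of a Debezium Postgres CDC connector config. In practice this dict
# is serialized to JSON and POSTed to Kafka Connect's /connectors endpoint.
# All host, database, and table names below are illustrative.
debezium_config = {
    "name": "payments-cdc",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "legacy-payments-db",  # hypothetical legacy host
        "database.port": "5432",
        "database.dbname": "payments",
        "database.user": "cdc_reader",
        "database.password": "${secrets:cdc}",      # resolved from a secret store
        "table.include.list": "public.payments",    # capture only what you need
        "topic.prefix": "legacy.payments",          # prefix for emitted topics
    },
}
```

No application code in the legacy system changes: Debezium tails the database transaction log and emits row-level change events onto the stream.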

The approach is incremental by design. Build the first AI feature pipeline from the stream. Compare inference latency and accuracy against the batch-fed baseline. The resulting business case funds the next domain, and the one after. The schema registry and event catalog grow as shared infrastructure over time, with governance implemented one domain at a time, before the platform can accumulate the undocumented backlog it was built to replace.

Strategic Guidance for Enterprise and Technology Leaders

As organizations invest in AI transformation, Milan offers a practical architectural roadmap that leaders can begin executing today.

Diagnose before you optimize

If your AI outcomes are disappointing, audit the data pipeline before adjusting the model. Identify the latency at each stage, map who owns each data flow, and determine whether training and inference data share a common lineage. In most cases, the gap is in the architecture, not the algorithm.
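The latency audit described above can start with something as simple as comparing the business event timestamp against the arrival timestamp at each pipeline stage. The record layout and field names below are hypothetical; the point is to attribute delay to a specific hop rather than to "the pipeline" as a whole.

```python
# Each record carries the time the business event occurred and the time it
# landed at each pipeline stage (field names are illustrative).
records = [
    {"event_ts": 100.0, "ingested_ts": 100.4, "feature_ts": 103.0},
    {"event_ts": 200.0, "ingested_ts": 200.3, "feature_ts": 207.0},
]

def stage_latencies(rec: dict) -> dict:
    return {
        "source_to_ingest": rec["ingested_ts"] - rec["event_ts"],
        "ingest_to_feature": rec["feature_ts"] - rec["ingested_ts"],
    }

# Average latency per stage tells you which hop to fix first.
avg = {
    stage: sum(stage_latencies(r)[stage] for r in records) / len(records)
    for stage in ("source_to_ingest", "ingest_to_feature")
}
```

Here the feature computation stage, not ingestion, dominates the end-to-end delay, which is exactly the kind of finding that redirects effort from model tuning to architecture.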

Choose high-value, high-latency pain points to start

Do not attempt a full platform migration. Select domains such as fraud detection, clinical decision support, or supply chain sensing, where latency has a documented business cost. These are your highest-ROI starting points and the clearest business cases for continued EDA investment.

Invest in governance from the first domain

Stand up a schema registry and event catalog before the second domain joins the platform. Governance retrofitted to a mature event environment is exponentially harder than governance built into the foundation. The overhead of doing it early is minimal; the cost of not doing it compounds rapidly.

Treat event streams as first-class product assets

Every event stream should have a named owner, a documented schema, an SLA, and a known set of consumers. Streams without ownership become the undocumented data debt of tomorrow. When streams are treated with the same rigor as APIs or database schemas, the entire AI platform becomes more trustworthy and more maintainable.
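The "stream as a product" checklist above (named owner, documented schema, SLA, known consumers) maps naturally onto a catalog record. The sketch below is one possible shape for such an entry; all names and the SLA value are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class StreamDescriptor:
    """Minimal 'stream as a product' record for an event catalog entry."""
    name: str
    owner: str                  # one named owning domain, never a committee
    schema_subject: str         # the stream's subject in the schema registry
    sla_p99_ms: int             # delivery latency the owner commits to
    consumers: list = field(default_factory=list)

payments_events = StreamDescriptor(
    name="payments.payment-events",
    owner="payments-domain",
    schema_subject="payments.payment-events-value",
    sla_p99_ms=500,
    consumers=["fraud-model", "finance-reporting"],
)
```

A stream that cannot fill in every field of such a record is, by definition, the undocumented data debt the article warns about.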

Align business and engineering leadership on the architectural imperative

EDA adoption requires investment decisions that span data engineering, platform architecture, and product roadmaps. Business leaders need to understand that improving AI outcomes is not solely a model problem; it is an infrastructure and governance decision. The organizations seeing AI ROI today made that investment 12 to 18 months ago.

Milan is clear that the shift to event-driven architecture is not a matter of if, but when. The question for every enterprise is whether that transition is proactively designed, governed, and delivering returns, or reactive, executed under pressure to rescue an AI program already in distress.

"The models, the people, and the budget are not the problem. The architecture is. Get the architecture right, and the rest of the AI strategy becomes a heck of a lot more doable."

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.