After the SaaSpocalypse: Kyrylo Seliukov on What the New Enterprise Software Stack Actually Requires

Kyrylo Seliukov

In the first five days of February 2026, the software sector shed nearly $300 billion in market value. The trigger was not a recession, but a structural shift. The launch of agentic AI platforms capable of orchestrating entire business workflows without traditional user interfaces sent a clear signal: the per-seat licensing model that powered SaaS for two decades is becoming obsolete. The iShares Tech-Software ETF dropped over 23% year-to-date. IPOs froze. Salesforce hit a 52-week low. Analysts called it the SaaSpocalypse. But for the engineers who actually build enterprise software, the conversation is very different from what's playing out on trading floors. The real challenge isn't whether AI will replace SaaS, but how companies will redesign their product architecture to make AI-driven workflows reliable, scalable, and useful at enterprise scale.

Kyrylo Seliukov doesn't speculate about this—he does it every day. As a Senior Product Engineer at DXC Technology, he leads frontend architecture and architectural decision-making for AI-agent integration in large-scale enterprise systems used by one of the world's largest retail corporations. His career reads like a timeline of exactly the transition the industry is now scrambling to make: from building SEC-compliant financial platforms on legacy .NET stacks at Donnelley Financial Solutions to halving application bundle sizes at fintech platform Paxos, to engineering design systems from scratch for British retail giant Tesco. He has seen the same shift from the startup side, too—as one of 21 judges at the NextGen Hackathon in France's Sophia Antipolis innovation cluster, he evaluated dozens of AI-driven projects and saw firsthand how wide the gap is between a compelling demo and a product that can survive real enterprise load. We asked him what the post-SaaS architecture actually looks like from the inside.

In early February 2026, the software sector lost nearly $300 billion in market value after the launch of agentic AI platforms. Markets are calling it the SaaSpocalypse. You build AI-agent interfaces for enterprise systems every day—does what's happening on Wall Street match the reality you see from the engineering side?

Partially. The market is right about the direction: AI agents are changing how enterprise software works. That part is real. What's exaggerated is the speed. The headlines make it sound like someone flipped a switch and SaaS is dead. From where I sit, the reality is messier. Most enterprise systems weren't built for agentic workflows. They were built around the assumption that a human clicks buttons in a UI. Replacing that isn't a product update; it's a fundamental rearchitecture. The companies I work with are doing this now, and it takes serious engineering. So when I see that kind of market reaction in a single week, my honest reaction is: the market is pricing in a future that's real, but it's pricing it in much faster than the engineering can actually deliver.

Your career spans a very specific arc from building SEC-compliant financial platforms on legacy .NET at Donnelley Financial Solutions to leading microfrontend transitions in modern React-based enterprise systems. That's essentially the journey the entire industry is now being forced to make in compressed time. How does having lived through it shape how you think about the architectural shift enterprises need right now?

It gave me a very specific lens. At Donnelley Financial Solutions, I spent five years inside .NET monoliths—SQL databases, strict compliance, tightly coupled components where changing one thing risked breaking three others. That's what most enterprise software still looks like today. The critical lesson was that monoliths don't fail because the code is bad. They fail because they can't evolve fast enough. When every feature lives in one deployable unit, adding anything new, especially something as unpredictable as an AI agent, becomes high-risk surgery. That's why the move to microfrontends matters so much right now. When you decompose a frontend into independent modules, each team can build, test, and deploy its piece without waiting for everyone else. You can plug in an AI-powered component without touching the rest of the application. That's not a theoretical benefit—it's the difference between shipping an AI feature in two weeks versus six months. The companies that made this architectural investment before the current panic are the ones that can actually move. Everyone else is trying to retrofit modularity under pressure, which is significantly harder.
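The decomposition Seliukov describes can be sketched in a few lines. This is an illustrative minimum, not DXC's actual implementation: the registry, module names, and function signatures below are all hypothetical, meant only to show how independent modules plug into a shell without a shared deploy unit.

```typescript
// Minimal microfrontend registry sketch. Each team registers a mount
// function for its own module; the shell resolves modules by name at
// runtime, so adding an AI-powered module never touches existing ones.

type MountFn = (containerId: string) => string;

const registry = new Map<string, MountFn>();

// Called from each module's own, independently deployed bundle.
function registerModule(name: string, mount: MountFn): void {
  registry.set(name, mount);
}

// An unknown or not-yet-loaded module degrades to a placeholder
// instead of breaking the whole page.
function mountModule(name: string, containerId: string): string {
  const mount = registry.get(name);
  return mount ? mount(containerId) : `placeholder for ${name}`;
}

// Existing module and a new AI module, registered independently.
registerModule("orders", (id) => `orders mounted in #${id}`);
registerModule("ai-assistant", (id) => `ai-assistant mounted in #${id}`);
```

The point of the sketch is the isolation boundary: the shell only knows module names, so shipping "ai-assistant" requires no change to the "orders" team's code or release schedule.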

That arc sounds clean on paper, but in practice, migrating from legacy to modern systems is where most companies get stuck. At Sientia, you automated deployment pipelines and cut release cycles dramatically. At Unanet, you integrated a modern React application into a legacy portal used by government contractors, a system that couldn't afford downtime. What does a transition like that actually look like from the inside, and why do so many enterprises fail at it?

It looks like a lot of unglamorous, disciplined work. At Sientia, the key wasn't some brilliant new framework; it was automating what people were doing manually. Deployments were slow because each one required multiple manual steps. We built a one-button deployment process. That's it. No magic. But the effect was a 75% reduction in deployment time, which meant the team could ship faster, test faster, and iterate faster. Everything downstream improved. At Unanet, the challenge was different. We had a working legacy portal that government contractors relied on daily. You can't tell them "sorry, we're upgrading, come back in three months." So we embedded a React application into the existing product, running modern code inside a legacy shell. The old system stayed stable while the new functionality grew alongside it. Why do enterprises fail at this? Usually because they underestimate the political side. The technical path is clear enough. But someone has to make the decision to invest in infrastructure that users won't directly see. That's a hard sell internally, and it's why so many companies skip it and then wonder why their AI integration falls apart six months later.
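The Unanet approach Seliukov describes is essentially the strangler-fig pattern. A minimal sketch, with entirely hypothetical route names and handlers: a thin router sends the handful of migrated paths to the modern application while everything else falls through to the legacy portal, which stays untouched.

```typescript
// Strangler-pattern sketch: migrated routes are handled by the modern
// app; all other traffic falls through to the legacy portal. The set
// of modern routes grows one path at a time, with no big-bang cutover.

type Handler = (path: string) => string;

// Hypothetical paths already migrated to the embedded React app.
const modernRoutes = new Set(["/reports/new", "/dashboard"]);

const legacyHandler: Handler = (path) => `legacy renders ${path}`;
const modernHandler: Handler = (path) => `modern app renders ${path}`;

// The only shared piece: a router deciding which stack owns a path.
function route(path: string): string {
  return modernRoutes.has(path) ? modernHandler(path) : legacyHandler(path);
}
```

The design choice worth noting is that rollback is trivial: removing a path from `modernRoutes` restores legacy behavior, which is exactly the safety property a no-downtime system needs.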

At DXC Technology, you lead frontend architecture for AI-agent integration in systems serving one of the world's largest retail groups. What does building an interface for an AI agent actually look like, and how is it different from building for a human user?

When you build for a human, you design a predictable flow. The user clicks here, sees this, and fills out that. You control the sequence. With an AI agent, you lose that predictability. The agent generates responses of varying length, structure, and content. It might return a table, a paragraph, or a follow-up question. It might take two seconds or twenty. Your frontend has to handle all of that gracefully. So the architecture shifts from "render this screen" to "render whatever comes back, reliably and fast." That means rethinking state management, error handling, and loading patterns from the ground up. There's also a UX challenge that's unique to AI. With a traditional interface, the user trusts the system because they control it. With an AI agent, the system acts semi-autonomously. The interface has to communicate what the agent is doing, why, and give the user a way to intervene. That trust layer is new, and most teams underestimate how much frontend engineering it requires.
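The "render whatever comes back" shift can be made concrete. In this sketch (the response shapes and helpers are illustrative assumptions, not a real agent API), the agent's possible outputs are modeled as a discriminated union so the renderer must handle every variant explicitly, and unpredictable latency is wrapped in a timeout with a fallback.

```typescript
// Agent responses vary in shape: model them as a discriminated union
// so the compiler forces the UI to handle every case, including errors.

type AgentResponse =
  | { kind: "text"; body: string }
  | { kind: "table"; rows: string[][] }
  | { kind: "question"; prompt: string }
  | { kind: "error"; reason: string };

function renderAgentResponse(res: AgentResponse): string {
  switch (res.kind) {
    case "text":
      return res.body;
    case "table":
      return res.rows.map((r) => r.join(" | ")).join("\n");
    case "question":
      // Surface the agent's follow-up so the user can intervene.
      return `Agent asks: ${res.prompt}`;
    case "error":
      return `Something went wrong: ${res.reason}`;
  }
}

// Latency is unpredictable too: race the agent call against a timer so
// the UI shows a fallback instead of hanging for twenty seconds.
async function withTimeout<T>(p: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) => setTimeout(() => resolve(fallback), ms));
  return Promise.race([p, timer]);
}
```

The exhaustive `switch` is the trust layer in miniature: there is no code path where the interface silently drops an agent output it didn't expect.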

In December 2025, you served as one of 21 judges at the NextGen Hackathon in Sophia Antipolis, organized under Université Côte d'Azur and sponsored by companies like Atlassian and Orange. You evaluated dozens of AI-driven projects there. What separated the teams that impressed you from those that didn't—and does that experience inform how you now evaluate which AI products have a real chance at enterprise adoption?

The clearest dividing line was the direction of thinking. The weaker teams built impressive technology and then looked for a problem it could solve. The stronger teams started with a specific, painful business problem and applied the simplest AI solution that could address it. One of the winners, for example, built an AI-powered 3D visualization for e-commerce—directly targeting the problem of high return rates in online retail. That's a real cost that real businesses measure in real money. The technology was sophisticated, but the pitch was about the problem, not the model. This absolutely carries over to enterprise. When I evaluate an AI feature in my daily work, the question is never "Is this technically impressive?" It's: "Does this survive contact with real users, real data, and real load?" The hackathon reinforced something I already suspected—the gap between a working demo and a deployable product is where most AI projects quietly die.

You've dealt with the scalability problem hands-on—you cut bundle size in half at Paxos and built a design system from scratch for Tesco. As companies rush to bolt AI agents onto their existing platforms, what are the most common architectural mistakes you see?

Three things come up repeatedly. First, no modularity. Companies try to add AI features to a monolithic frontend. Every new feature increases the bundle, slows the load, and creates dependencies that make the whole system fragile. At Paxos, the bundle was bloated not because anyone made a single bad decision, but because features had been added on top of each other without structural discipline. We cut it by 50% through code-splitting and lazy loading—separating what the user needs now from what can be loaded later. The same principle applies to AI integration: if your architecture can't load components independently, every new AI feature will slow down everything else. Second, no design system. When I built one from scratch for Tesco, it wasn't because design consistency is nice to have. It was because, without a shared component library, every team reinvents the same elements differently. Now multiply that by AI-generated interfaces, where consistency is even harder to maintain. A design system is the foundation. Third, no testing discipline. If you're going to rearchitect your frontend around AI agents, you need to know that your changes don't break what's already working. Without serious test coverage, every refactor is a gamble. You can't move fast on new architecture if you're afraid to touch the old one.

You've mentored engineers throughout your career—at Tesco, you trained junior developers in React and JavaScript, and in your current role, you conduct code reviews and guide the team on architectural decisions. The industry is talking a lot about new tools and frameworks, but far less about the people who have to build with them. Are there enough engineers who can actually execute this transition, and how do you develop them?

No. And I don't think the gap is where most people assume. There's no shortage of engineers who can write React or learn a new framework. The shortage is in engineers who understand systems—who can look at an enterprise application and see the architecture, the dependencies, the failure points, not just the feature they're building today. That's the skill the AI transition requires, and it takes years to develop. You can't fast-track it with a boot camp. What you can do is create the conditions for it. In my teams, I use code reviews not just to catch bugs but to explain the "why" behind architectural decisions. When a junior engineer submits code that works but introduces a hidden dependency, the review is a teaching moment. Over time, they start seeing the system, not just the ticket. The other thing I've learned is that you have to give people real responsibility early—not just isolated tasks, but ownership of components that interact with other parts of the system. You can talk about architecture in a lecture for hours, and it won't stick. Hand someone a module that breaks if they don't think about its neighbors, and they learn in a week.

If a CTO reading this is trying to figure out where to start, not in theory but on Monday morning, what's the first thing they should look at in their current stack?

Look at your deployment pipeline. Not your AI strategy, not your model selection—your deployment pipeline. If you can't ship a change to production quickly and safely, nothing else matters. You'll build an AI feature, and it'll sit in staging for three weeks because the release process is manual or fragile. I've seen this pattern at multiple companies. The ones that moved fastest on meaningful product changes were always the ones that had already invested in deployment automation. If your team can deploy with one click and roll back in minutes, you're ready to experiment with AI integration. If they can't, that's your Monday morning problem. Everything else, from modular architecture to design systems to AI-agent interfaces, depends on being able to iterate quickly. Fix the pipeline first.

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
