Most AI Is Built to Extend the Past. Vertus Built One That Reasons About the Present. Here’s Why That Matters.

Vertus

The technical story of modern AI is, at its core, a story about prediction. Very sophisticated, very large-scale, genuinely impressive prediction. But prediction nonetheless.

Almost all AI systems look at what has been. They identify the patterns. And they generate what comes next based on statistical probability. When the world cooperates, which is to say when it keeps looking like the training data, the output is useful enough to feel like intelligence.

The problems start the moment the world stops cooperating, and you're quickly forced to find out what the system you once trusted is actually built on.

At the foundation of almost every AI system you interact with is something called a Markov process. Given the current state, predict the most probable next state. One token leads to another. The output is generated step by step, each step conditioned on what came before. It works because language and structured data have statistical regularities. The pattern is usually right. Until suddenly, it isn't.
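The next-token loop described above can be sketched in a few lines. This is a deliberately minimal bigram model — the function names and corpus are illustrative inventions, not anything from Vertus or a production system — and real language models condition on far longer contexts, but the step-by-step, most-probable-continuation mechanism is the same:

```python
from collections import defaultdict

def train_bigrams(tokens):
    """Count, for each token, which tokens follow it in the training data."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, n=5):
    """Generate by repeatedly emitting the most probable next token."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation ever seen in training: the model is stuck
        out.append(max(followers, key=followers.get))
    return out

# The training data contains exactly one pattern.
corpus = "the market goes up the market goes up the market goes up".split()
model = train_bigrams(corpus)
print(generate(model, "the"))  # extends the only pattern it has ever seen
```

Notice that the model has no way to say "conditions have changed": given the state `"the"`, it will emit `"market goes up"` forever, because that is the most probable continuation in its training data.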

When the structure underneath a problem changes, a Markov system has no mechanism to step outside its own conditioning, its own training. It keeps doing what it has always done, generating the most probable continuation. Fluently. Coherently. And often dangerously, based on a model of reality that stopped applying some time ago. The industry calls the output hallucination, a gentler name than the situation deserves. It's not drift. It's not some sort of summertime daydreaming. It's the system doing exactly what it was built to do, even after the assumptions that justified that behavior have stopped holding. It's like looking at a tornado and calling it a light breeze because the system has never heard of a tornado. And just like a tornado, the results can be highly destructive.

Vertus was built from a different starting point.

Rather than training a fixed model and deploying within its boundaries, Vertus constructs what the company calls a neural topology for each problem it encounters: a cognitive structure shaped by the actual demands of the current situation rather than by what previous situations looked like. When complexity increases, different reasoning modes engage. When contradiction appears, it is examined, reasoned through, and resolved rather than averaged away. And memory isn't retrieved and bolted on after the thinking is done. It's integrated into the formation of the reasoning from the very beginning.

When the environment shifts, when the world changes, the system doesn't look to build on prior static training and historical assumptions. It recognizes the new shift and rebuilds. And when it hits genuine uncertainty, it acknowledges the gap rather than filling it with a meaningless jumble that sounds right.

The practical test came in 2025. Live financial markets. Real capital. Conditions that included the largest two-day market loss in history, when April's tariff shock dissolved correlations that had held for years and left most AI investment firms applying pre-trained logic that no longer fit the conditions.

Vertus posted a 51.15 percent annual return for 2025 on a recorded daily trading volume of just over a billion dollars. Independently audited before any public announcement. Sharpe ratio of 2.13. Eleven winning months. Maximum drawdown of 9.91 percent, recovered in nine days.
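For readers unfamiliar with these metrics, a Sharpe ratio and a maximum drawdown are both derived from a daily return series. The sketch below shows the standard formulas; the return stream in it is purely hypothetical, chosen for illustration, and is not Vertus's data:

```python
import math

def sharpe(daily_returns, risk_free_daily=0.0, periods=252):
    """Annualized Sharpe: mean excess return over its volatility, scaled
    by the square root of the number of trading periods per year."""
    excess = [r - risk_free_daily for r in daily_returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods)

def max_drawdown(daily_returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in daily_returns:
        equity *= 1 + r
        peak = max(peak, equity)
        worst = max(worst, (peak - equity) / peak)
    return worst

# Hypothetical week of daily returns, for demonstration only.
sample = [0.004, -0.002, 0.003, 0.005, -0.006, 0.002, 0.004]
print(sharpe(sample), max_drawdown(sample))
```

A Sharpe ratio of 2.13 means the strategy earned more than two units of excess return per unit of volatility over the year, a level few funds sustain; a 9.91 percent maximum drawdown bounds the worst peak-to-trough loss along the way.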

The company was built by Julius Franck, Alex Foster, and Michal Prywata, whose backgrounds span quantitative systems architecture, algorithmic trading infrastructure, and cross-domain engineering across aerospace, medical robotics, and agricultural biotechnology. The resulting system reflects perspectives that institutional finance alone would not have produced.

Their cognitive reasoning platform is now accessible via API beyond finance. Healthcare. Scientific research. Supply chain management. Infrastructure. Every domain where the structure of the problem changes while the system is working in the real world, and where the cost of working with a broken pattern is immediate and real.

The benchmark scores for static AI will no doubt keep going up. That much is preordained.

The question of whether those systems can actually reason when they need to has an entirely different answer.

Now, there's an audited balance sheet that proves which answer is right.

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
