Beyond the Monolith: Vladyslav Larin on Networked Intelligence

In this interview, Vladyslav Larin, Co-founder and CTO of Fortytwo, explains the origins of Fortytwo's network-based approach to AI architecture, rooted in its mission as a research lab building AGI from interconnected models using user-contributed compute.

Vladyslav Larin

Drawing from years of research in distributed computing and practical experience with AI, Larin shares why decentralized networks represent the future of artificial intelligence—moving from corporate-controlled behemoths to distributed swarms that leverage global computing resources.

Why Decentralized AI?

What initially drew you to decentralized approaches in AI?

Understanding backpropagation and the perceptron, the first artificial neuron model, was a defining moment. But with backprop, I saw a huge bottleneck: a single centralized entity governing all the learning. I couldn't get behind the idea of one super-monolithic model that would eventually become the 'God model.' It felt like an ineffective, even unnatural, path to AI.

That's what got me into decentralized algorithms. During my PhD [in Applied Mathematics], I worked on distributed agent systems, which can utilize one hundred percent of available resources through decentralized coordination. I always saw the potential, and the results were impressive. While centralization simplifies practical implementation, open-ended problems are better served by concurrency, with multiple components working on the problem at once.

Formative Experiences with AI

You co-founded Temporal, working on Conversational AI and Virtual Beings. What did you learn about AI-human interaction?

Through human-AI interfaces, we observed that humans are inherently irrational. For AI to communicate effectively, it can't just be accurate or logical—that's not enough. It needs to be proactive and emotionally intelligent. We found that the more we invested in emotional intelligence—say, predicting a person's needs or intent—the more effective the engagement. This insight is reflected in today's state-of-the-art LLM chat APIs, which use prompt refinement pipelines to insert user context, unwrap prompts into clearer intents, and add follow-up questions to deepen engagement.
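To make that concrete, here is a minimal Python sketch of the kind of prompt-refinement pipeline Larin describes. The stage names and the call_llm helper are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical prompt-refinement pipeline: the stage functions and the
# call_llm() helper are illustrative assumptions, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (e.g., an HTTP request)."""
    raise NotImplementedError

def insert_user_context(prompt: str, profile: dict) -> str:
    # Prepend known user context so the model can personalize its answer.
    context = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return f"[User context: {context}]\n{prompt}"

def unwrap_intent(prompt: str) -> str:
    # Ask the model to restate the request as a clear, explicit intent.
    return call_llm(f"Rewrite this request as a clear, explicit intent:\n{prompt}")

def add_follow_up(answer: str) -> str:
    # Append one follow-up question to deepen engagement.
    question = call_llm(f"Suggest one short follow-up question for:\n{answer}")
    return f"{answer}\n\n{question}"

def refined_chat(prompt: str, profile: dict) -> str:
    # Context insertion, intent unwrapping, answer, then a follow-up.
    intent = unwrap_intent(insert_user_context(prompt, profile))
    return add_follow_up(call_llm(intent))
```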

You developed Spatial LLMs and Generative 3D AI for NEOM. How did this expand your understanding of AI capabilities beyond text-based models?

Multimodality has been a key focus and remains so. It showed us how AI can perceive the world more objectively. For example, video prediction tasks allow us to build accurate models of reality—something text struggles with due to its implicit biases and evolutionary limitations.

At NEOM, we trained compact spatial models with basic compute, revealing their ability to generate and classify spatial environments. Language models, by contrast, often require massive scale to be relevant because of their non-deterministic nature, relying heavily on reasoning and prediction. Multimodality provides a higher-quality signal, which I believe is crucial for achieving true AGI.

How did these experiences factor into your work with Fortytwo?

These projects really brought scaling challenges into sharp focus. When we were working on conversational AI, we could see how quickly costs escalated as we tried to make interactions more natural and context-aware. The same with multimodal systems: the more dimensions you add, the more compute you need. What became clear was that if we want AI to truly fulfill its potential, we need a fundamentally different approach to how AI compute is organized and distributed. This is where the ideas behind Fortytwo began to take shape.

Reimagining AI Infrastructure

How does Fortytwo's decentralized architecture differ from traditional AI systems?

Traditional approaches focus almost entirely on training bigger models, with little effort spent evaluating their real performance, since current benchmarks are practically obsolete on release. With Fortytwo, instead of splitting a single model across nodes or replicating data between them, we treat every AI node as a black box that independently produces its inference. Each node can also run custom tools. With this setup, thousands of unique models can coexist and collaborate.

Fortytwo isn't just about AI inference; it's more like an AI model protocol in which nodes evolve and develop, enhancing overall model quality. What we have is a highly heterogeneous architecture, uniting strong elements at different reasoning points to deliver final answers.

This network protocol is designed to allow AGI to emerge not from a single model, but from the collaboration among thousands of evolving models.
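As a rough illustration of that black-box abstraction, here is a short Python sketch. The Node interface and swarm_answer helper are assumptions for exposition, not Fortytwo's actual protocol: each node exposes only an inference entry point, so the network can mix arbitrary models and tool chains behind one uniform interface.

```python
# Illustrative sketch of nodes as black boxes; the Node protocol and
# swarm_answer() are assumed names for exposition, not Fortytwo's real API.
from typing import Protocol

class Node(Protocol):
    def infer(self, prompt: str) -> str:
        """Produce an answer; internals (model, tools) stay opaque."""
        ...

class LocalModelNode:
    def __init__(self, model):
        self.model = model  # any callable str -> str: an LLM, a tool chain, etc.

    def infer(self, prompt: str) -> str:
        return self.model(prompt)

def swarm_answer(nodes: list[Node], prompt: str) -> list[str]:
    # Every node answers independently; no weights or data are shared.
    return [node.infer(prompt) for node in nodes]
```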

Does decentralized inference provide a superior path to AGI? Will centralized and decentralized approaches coexist?

Decentralized inference unlocks nearly unlimited compute by distributing the load across all available resources, including consumer devices that opt in. It also offers strong privacy guarantees through trusted execution environments—something centralized providers can only aspire to. Centralized API servers carry risks of data being collected, sold, or used to train models without consent.

A decentralized solution provides algorithmic security, alongside scalability and pricing far lower than centralized data centers. For most tasks, decentralized inference is the better path. Centralized approaches may still serve niche users, but their relevance will likely diminish as we address decentralized coordination challenges, which we are working on at Fortytwo.

Fortytwo has received over 30,000 node validator applications. What does that say about the demand for decentralized AI?

The node validator applications make us very optimistic. They show a strong community desire to take ownership of AI inference. At Fortytwo, node operators don't just rent out compute; they can improve model quality and enrich models with their own data. This resonates deeply with machine learning engineers and data scientists who currently lack simple ways to contribute to state-of-the-art AI development. Fortytwo gives them that opportunity.

The Future and Fortytwo

Fortytwo recently launched its devnet. What comes next?

The devnet is a major milestone, enabling practical applications. We're starting with a high-quality synthetic reasoning dataset for next-generation reasoning LLMs within the framework. As the system scales, we'll tackle long reasoning chains with tens of thousands of nodes, addressing scientific questions and open science problems—an exciting use case that shows how AI can accelerate research. We're also incorporating multimodality into the architecture and plan to offer an API for developers and chat interfaces for users.

These interfaces will eventually reflect the swarm's collective reasoning, providing a window into how distributed intelligence can solve problems through collaboration.

What's Fortytwo's ultimate vision for AI evolution?

Looking at the current landscape with its centralized models requiring enormous resources, Fortytwo's architecture stands out. We're not just distributing a single model; we're creating an ecosystem where diverse AI models collaborate and validate each other. This approach solves critical problems: by utilizing underused resources worldwide, we achieve dramatically improved accuracy through peer review and democratize AI ownership.
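One way to picture that peer-review step, offered as an assumption for illustration rather than Fortytwo's documented mechanism: every node rates the other nodes' answers, and the answer with the highest average cross-rating wins.

```python
# Hypothetical peer-review aggregation: score_answer() and the averaging
# rule are illustrative assumptions, not Fortytwo's documented algorithm.

def score_answer(reviewer, answer: str, prompt: str) -> float:
    """Placeholder: a reviewer node rates an answer from 0.0 to 1.0."""
    raise NotImplementedError

def peer_reviewed_best(nodes, answers: list[str], prompt: str) -> str:
    scores = []
    for i, answer in enumerate(answers):
        # Each node reviews every answer except its own, to avoid self-bias.
        ratings = [score_answer(node, answer, prompt)
                   for j, node in enumerate(nodes) if j != i]
        scores.append(sum(ratings) / len(ratings))
    # Return the answer with the highest average peer rating.
    return answers[scores.index(max(scores))]
```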

As AI becomes embedded in our lives, centralized models' limitations will become apparent. Fortytwo's architecture reframes AI not as a centralized commodity but as an open, research-driven process, one where intelligence emerges from a network of contributions rather than a single source of truth.
