FAR Labs Opens FAR AI Node Registrations to Tap 3B Idle GPUs

Adobe Stock

More than 3 billion GPUs sit idle worldwide, and the race to secure AI compute is pushing more companies toward infrastructure models that can tap that capacity across consumer and enterprise machines. Joining the fray, AI-native infrastructure company FAR Labs announced on March 30 that it has moved into closed testing with its distributed AI inference network, FAR AI, aiming to bring a share of that dormant capacity online for AI inference. With core development complete, FAR Labs has also opened node registrations, giving GPU owners a chance to join early before approved operators come online.

Powered by Dizzaract, FAR AI connects consumer and enterprise GPUs into a single distributed network that handles AI inference workloads. The platform can route requests to the most suitable nodes for performance and reliability, while node operators can earn income when their machines process verified workloads. Early registrants will receive priority onboarding, and the company says developers will gain API access in the second quarter.

Targeting Idle GPU Capacity

FAR AI enters a market where companies are looking for more flexible ways to access compute as demand for AI services expands. The platform offers a way to turn underused GPUs into paid infrastructure for developers and AI businesses. The model can widen access to compute by drawing from hardware that already exists across consumer and enterprise environments, according to the FAR Labs team.

"FAR AI is built to make participation in AI infrastructure more open and more practical," said Ilman Shazhaev, founder and CEO of Dizzaract. "Useful compute already exists in places far beyond traditional data centers. FAR AI brings that capacity online in a way that is practical for operators and immediately usable for developers."

Node operators will be able to register hardware through the FAR AI website, where users can select their GPU type and review estimated monthly earnings based on available compute capacity. The platform supports most GPU types, which broadens the potential supply side of the network as the rollout moves forward.

Solving the Memory Barrier

FAR AI is focused on inference, the stage where trained AI models generate answers, predictions and outputs for users and applications. The network routes workloads across a broad range of hardware and matches models to the compute environment that best fits the job. This turns a fragmented global supply of GPUs into a single, scalable resource.

To handle larger jobs, FAR AI can group compatible nodes together when workloads require more memory. This allows the platform to support heavier inference tasks across distributed hardware. That capability could make the system more relevant for developers whose workloads exceed the limits of a single consumer-grade machine.

The FAR AI network uses isolated execution and encrypted communication, alongside proof-of-compute mechanisms that confirm workloads were processed securely and that network activity remains accountable.
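FAR Labs has not detailed its proof-of-compute scheme, but one common pattern in distributed networks is redundant execution: assign the same workload to independent nodes and accept the result only when their output fingerprints agree. The sketch below illustrates that idea only; inference is not always bitwise deterministic across different hardware, so real schemes are more tolerant than an exact hash match.

```python
import hashlib

def digest(output: bytes) -> str:
    """Fingerprint a node's inference output."""
    return hashlib.sha256(output).hexdigest()

def verified(results: dict[str, bytes]) -> bool:
    """Accept a workload only when every independently assigned node
    reports the same output digest."""
    return len({digest(out) for out in results.values()}) == 1

print(verified({"node-a": b"42", "node-b": b"42"}))  # True
print(verified({"node-a": b"42", "node-b": b"43"}))  # False
```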

Testing Phase Focuses on Performance and Dev Workflows

Closed testing with selected partners is now underway, and FAR Labs notes the current phase is focused on live performance and developer workflows. The process will help refine orchestration before wider access begins.

Developer access is scheduled for the second quarter. FAR Labs says teams will be able to use an API to run inference across the network and integrate the service into products and applications. The team aims to make FAR AI infrastructure for startups and software builders that want distributed GPU compute without assembling the underlying network themselves.
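Since the API has not been published, the call shape below is purely hypothetical: the endpoint URL, field names, and bearer-token auth are illustrative guesses at what a typical inference API looks like, not FAR AI's actual interface.

```python
import json

def build_inference_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble an HTTP request for a hypothetical inference endpoint.
    The URL, field names, and auth scheme are placeholders; the real
    FAR AI API has not been published."""
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/inference",  # placeholder endpoint
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "prompt": prompt}),
    }
```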

Wider access to the developer API is planned for later this year.

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
