
One day after Figure AI broadcast an eight-hour livestream of its humanoid robots completing a full factory shift without human intervention, the California-based startup's accelerating timeline is throwing the entire humanoid sector into sharp relief. The May 13 demonstration, the company's most ambitious real-world test to date, ran on its Helix-02 AI system and placed Figure robots on package-sorting conveyor belts for eight continuous hours, processing barcoded packages at speeds the company says match human performance. For manufacturers, warehouse operators, and workers whose roles sit directly in the path of automation, the pace of progress just became harder to dismiss.
Robots Clock a Full Shift — at Human Speeds
On May 13, 2026, Figure AI CEO Brett Adcock posted on X: "Watch a team of humanoid robots running a full 8-hr shift at human performance levels. This is fully autonomous running Helix-02." The use case was small-package sorting: each Figure 03 robot had to detect a barcode, pick up the package, reorient it barcode-face-down onto a conveyor, and repeat, working purely from camera pixels with no pre-programmed motions. Adcock noted that humans average roughly three seconds per package and that the robots now operate at parity.
The demonstration extended a task Figure previously ran for one hour, pushing it to eight. Multiple robots were networked to coordinate conveyor uptime, and all AI inference ran entirely onboard each robot — no cloud connection required.
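Taken at face value, those figures imply a concrete per-robot workload. A rough back-of-the-envelope calculation, ignoring downtime and error recovery (neither of which Figure has published):

```python
# Per-robot throughput implied by the demo's own numbers: an eight-hour
# shift at roughly three seconds per package (the human average Adcock cited).
SHIFT_HOURS = 8
SECONDS_PER_PACKAGE = 3

packages_per_shift = SHIFT_HOURS * 3600 // SECONDS_PER_PACKAGE
print(packages_per_shift)  # 9600 packages per robot per shift
```

Roughly 9,600 packages per robot per shift, before any allowance for misreads or conveyor stalls.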
One Neural Network Controls 35 Joints in Real Time
Helix was unveiled in February 2025 — weeks after Figure severed its collaboration agreement with OpenAI. Adcock said the company had achieved a "major breakthrough on fully end-to-end robot AI, built entirely in-house," making the external partnership unnecessary.
The system is built on a two-layer architecture. System 2 (S2) is an internet-pretrained Vision Language Model running at 7–9 Hz that handles scene understanding, language comprehension, and goal sequencing — answering the question: what needs to be done? System 1 (S1) is a visuomotor policy running at 200 Hz that translates S2's reasoning into precise joint commands across all 35 degrees of freedom — answering: how do I do it, right now? Both layers run on embedded, low-power onboard GPUs, requiring no cloud connection.
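Figure has not released Helix's implementation, but the dual-rate pattern it describes can be sketched in a few lines. Every name, rate constant, and data shape below is a hypothetical stand-in:

```python
# Sketch of the "System 2 / System 1" dual-rate pattern Figure describes:
# a slow reasoning layer (S2, ~8 Hz) refreshes a latent goal; a fast
# visuomotor layer (S1, 200 Hz) reads the most recent latent on every tick
# and emits joint commands. All functions here are stubs, not Figure's code.
from dataclasses import dataclass

S1_HZ = 200           # fast control loop rate
S2_HZ = 8             # slow reasoning loop rate
NUM_JOINTS = 35       # upper-body degrees of freedom reported for Helix

@dataclass
class LatentGoal:
    step: int           # which S2 update produced this latent
    vector: list        # semantic goal embedding (placeholder)

def s2_reason(tick: int) -> LatentGoal:
    """Slow layer: scene + language understanding -> goal latent (stub)."""
    return LatentGoal(step=tick, vector=[0.0] * 8)

def s1_act(latent: LatentGoal) -> list:
    """Fast layer: latest latent + proprioception -> joint targets (stub)."""
    return [0.0] * NUM_JOINTS

def run(seconds: int) -> tuple:
    """Single-threaded simulation: S2 fires once every S1_HZ // S2_HZ ticks,
    while S1 acts on whatever latent is freshest at every tick."""
    latent = s2_reason(0)
    s1_ticks = s2_updates = 0
    for tick in range(seconds * S1_HZ):
        if tick % (S1_HZ // S2_HZ) == 0:   # ~8 Hz: refresh the goal
            latent = s2_reason(tick)
            s2_updates += 1
        command = s1_act(latent)           # 200 Hz: act on the latest goal
        assert len(command) == NUM_JOINTS
        s1_ticks += 1
    return s1_ticks, s2_updates

print(run(1))  # one simulated second: (200, 8)
```

The design point the sketch captures is decoupling: the fast loop never waits on the slow one, so control stays responsive even while the reasoning layer is mid-inference.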
At launch, Figure described Helix as several firsts: the first VLA model to deliver continuous control of an entire humanoid upper body including individual fingers; the first to operate simultaneously on two robots from a single set of neural network weights; and the first to govern all behaviors — from opening refrigerators to handing objects between robots — without task-specific fine-tuning. The training dataset comprised approximately 500 hours of teleoperated demonstrations across multiple robots and operators.
Eight Hours of Data Yields a Warehouse-Ready Policy
A week after the launch demonstration, in which robots put away groceries, Figure extended Helix to logistics, showing robots triaging packages on conveyor belts: handling rigid boxes and deformable bags, and orienting shipping labels for scanning. Technical improvements included stereo vision for 3D depth awareness, multi-scale visual representation for precise manipulation, and a test-time speed-up the team called "sport mode" that allows faster-than-demonstrator execution. The team also showed that just eight hours of carefully curated demonstration data could produce a dexterous, flexible policy.
Helix-02 Adds Legs and Eliminates 109,000 Lines of Code
In January 2026, Figure unveiled Helix-02, extending control from the upper body to the entire robot — legs, torso, arms, and fingers — as a single unified system. A new foundation layer called System 0 (S0) replaced what the company said was more than 109,000 lines of hand-engineered C++ locomotion code with a neural controller trained on over 1,000 hours of human motion data. The result is a robot that walks, balances, and manipulates objects as one continuous behavior, rather than a sequence of stitched-together controllers.
The flagship demonstration: a Figure robot completed a four-minute end-to-end task — walking to a dishwasher, unloading dishes, navigating across a kitchen, stacking items in cabinets, then reloading and starting the dishwasher — with no human intervention and no resets. Figure claims this is the longest-horizon autonomous task ever completed by a humanoid robot.
On May 8, 2026, two Helix-02-equipped robots reset a full bedroom in under two minutes: opening doors, hanging clothes, pushing a chair under a desk, taking out trash, and working together to make a bed — with no shared planner or message-passing between the two machines. Each robot inferred its partner's intent from motion alone. Figure says this is the first demonstration of a single learned neural network performing multi-robot collaborative locomanipulation directly from pixels to actions.
Figure 03's Fingertip Sensors Detect the Weight of a Paperclip
The hardware enabling these tasks is Figure 03, unveiled in October 2025. Each fingertip carries a tactile sensor capable of detecting a force as small as three grams — enough to register a paperclip resting on a finger, and fine enough to distinguish a secure grip from an incipient slip. The vision system delivers twice the frame rate, one-quarter the latency, and a 60 percent wider field of view per camera compared with Figure 02. Palm cameras enable close-up manipulation. The robot also includes 10 Gbps millimeter-wave data offload, allowing the fleet to upload terabytes of operational data for continuous model improvement.
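Figure has not described how Helix consumes this tactile signal. As an illustration only, a grip monitor built on the published three-gram sensitivity might look like the following; the slip threshold is an invented assumption:

```python
# Illustrative grip monitor (not Figure's algorithm): flag contact when
# fingertip force clears the published ~3 g sensor floor, and flag incipient
# slip when grip force decays while a hold is commanded.
GRAMS_TO_NEWTONS = 0.00980665            # 1 gram-force in newtons
CONTACT_FLOOR_N = 3 * GRAMS_TO_NEWTONS   # ~3 g: published sensitivity
SLIP_DROP_RATIO = 0.8                    # assumption: force under 80% of peak

def grip_state(force_history_n: list) -> str:
    """Classify a single fingertip's force trace (newtons, oldest first)."""
    if not force_history_n or force_history_n[-1] < CONTACT_FLOOR_N:
        return "no_contact"              # below the detectable floor
    peak = max(force_history_n)
    if force_history_n[-1] < SLIP_DROP_RATIO * peak:
        return "incipient_slip"          # force decaying: object sliding out
    return "secure"

print(grip_state([0.5, 0.5, 0.5]))       # secure
print(grip_state([0.5, 0.45, 0.35]))     # incipient_slip
```

The useful property of slip-versus-grip classification at the fingertip is that the robot can tighten its grasp before an object visibly moves, rather than reacting after a drop.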
Figure's BotQ manufacturing facility has increased production from one Figure 03 per day to one per hour — a 24-fold throughput improvement in under 120 days — with over 350 third-generation robots delivered so far.
Meta Acquires a Robotics Startup as Big Tech Accelerates
Figure's eight-hour demonstration lands in the middle of an escalating competitive push. On May 1, 2026, Meta acquired Assured Robot Intelligence (ARI), a startup building foundation models for humanoid robots to perform household tasks. ARI's co-founders joined Meta Superintelligence Labs, the same division that houses Meta's AGI research. Meta had previously set up a robotics division inside Reality Labs and hired Marc Whitten — former CEO of General Motors' Cruise robotaxi unit — to lead it.
Elsewhere, Apptronik raised $520 million in February 2026 and is deploying its Apollo robot with Mercedes-Benz and GXO Logistics, with Google DeepMind providing AI capabilities through the Gemini Robotics platform. Tesla's Optimus program continues development, though the head of Tesla's robotics division stepped down mid-2025 and Autopilot chief Ashok Elluswamy now oversees the project.
What an Eight-Hour Shift Means for Warehouse and Factory Workers
Figure's core wager — that a single AI brain trained on language, vision, and human demonstrations can replace thousands of hand-engineered task programs — is now being tested at factory timescales. An eight-hour autonomous shift, if replicable across diverse real-world environments, changes the economic calculus for any employer running a repetitive sorting or handling operation.
The open questions center on reliability outside controlled conditions: success rates across varying object types, failure recovery when packages are misoriented or conveyor timing shifts, and scaling beyond small fleets. Figure's broader commercial deployments — including earlier work at BMW's Spartanburg, South Carolina facility, where Figure 02 robots contributed to the movement of more than 90,000 parts — will provide the ground truth that lab demonstrations cannot.
For workers in logistics and light manufacturing, the timeline has compressed. For investors and competing robotics teams, May 13 set a new public benchmark: a full human-length shift, running on learned neural weights, with zero human intervention.
ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.




