Robots that learn on the job: AgiBot tests reinforcement learning in real-world manufacturing.
Updated January 8, 2026, 6:34 PM

A humanoid robot works on a factory line, showcasing advanced automation in real-world production. PHOTO: AGIBOT
Shanghai-based robotics firm AgiBot has taken a major step toward bringing artificial intelligence into real manufacturing. The company announced that its Real-World Reinforcement Learning (RW-RL) system has been successfully deployed on a pilot production line run in partnership with Longcheer Technology, marking one of the first production applications of reinforcement learning in industrial robotics.
The project represents a key shift in factory automation. For years, precision manufacturing has relied on rigid setups: robots that need custom fixtures, intricate programming and long calibration cycles. Even newer systems combining vision and force control often struggle with slow deployment and complex maintenance. AgiBot’s system aims to change that by letting robots learn and adapt on the job, reducing the need for extensive tuning or manual reconfiguration.
The RW-RL setup allows a robot to pick up new tasks within minutes rather than weeks. Once trained, the system can automatically adjust to variations, such as changes in part placement or size tolerance, maintaining steady performance throughout long operations. When production lines switch models or products, only minor hardware tweaks are needed. This flexibility could significantly cut downtime and setup costs in industries where rapid product turnover is common.
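To make the adaptation idea concrete, the sketch below shows the general shape of a learning loop like the one described: a corrective policy is improved directly from a reward signal, episode by episode, as part placement varies. Everything in it is a hypothetical simplification; the toy task, reward, and plain REINFORCE update are illustrative stand-ins, not AgiBot’s actual RW-RL algorithm, which the company has not published.

```python
import numpy as np

# Hypothetical sketch of on-line policy learning for a placement task.
# Nothing here comes from AgiBot's RW-RL system; it illustrates the
# general pattern of learning a correction from a reward signal.
rng = np.random.default_rng(0)

class ToyPlacementTask:
    """Each episode the part appears at a slightly different offset;
    the action is a corrective motion; reward penalizes placement error."""
    def reset(self):
        self.part_offset = rng.normal(0.0, 0.5, size=2)  # placement variation
        return self.part_offset  # observation: measured part position
    def step(self, action):
        return -float(np.linalg.norm(self.part_offset + action))

W = np.zeros((2, 2))   # linear policy: corrective motion = W @ observation
sigma = 0.2            # Gaussian exploration noise
alpha = 0.02           # learning rate
baseline = 0.0         # running reward baseline to reduce gradient variance

env = ToyPlacementTask()
for episode in range(5000):
    obs = env.reset()
    mean = W @ obs
    action = mean + rng.normal(0.0, sigma, size=2)
    reward = env.step(action)
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)
    # REINFORCE update: score function of the Gaussian policy w.r.t. W
    grad_log_pi = np.outer((action - mean) / sigma**2, obs)
    W += alpha * advantage * grad_log_pi

print("learned correction:\n", np.round(W, 2))  # drifts toward -identity
```

The loop never sees a model of the task; it only observes rewards, which is what lets the same mechanism absorb drift in part placement without reprogramming.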
The system’s main strengths lie in faster deployment, high adaptability and easier reconfiguration. In practice, robots can be retrained quickly for new tasks without needing new fixtures or tools — a long-standing obstacle in consumer electronics production. The platform also works reliably across different factory layouts, showing potential for broader use in complex or varied manufacturing environments.
Beyond its technical claims, the milestone demonstrates a deeper convergence between algorithmic intelligence and mechanical motion. Instead of being tested only in the lab, AgiBot’s system was tried in real factory settings, showing it can perform reliably outside research conditions.
This progress builds on years of reinforcement learning research, which has gradually pushed AI toward greater stability and real-world usability. AgiBot’s Chief Scientist Dr. Jianlan Luo and his team have been at the forefront of that effort, refining algorithms capable of reliable performance on physical machines. Their work now underpins a production-ready platform that blends adaptive learning with precision motion control — turning what was once a research goal into a working industrial solution.
Looking forward, the two companies plan to extend the approach to other manufacturing areas, including consumer electronics and automotive components. They also aim to develop modular robot systems that can integrate smoothly with existing production setups.
Keep Reading
The focus is no longer just AI-generated worlds, but how those worlds become structured digital products
Updated February 20, 2026, 6:50 PM

The inside of a pair of HTC VR goggles. PHOTO: UNSPLASH
As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That gap between generation and execution is where HTC VIVERSE and World Labs are focusing their new collaboration.
HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. Meanwhile, World Labs is an AI startup founded by researcher Fei-Fei Li and a team of machine learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.
While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.
In practice, the workflow has two steps. First, Marble produces the base 3D environment. Then, creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.
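The division of labor can be made concrete with a small, hypothetical data structure: the generator’s output is a static asset, and everything that makes it navigable, such as spawn points and portals between scenes, is layered on afterward by the creator. None of the types or fields below reflect Marble’s or VIVERSE’s actual formats or APIs; they are illustrative only.

```python
from dataclasses import dataclass, field

# Hypothetical types only; not Marble or VIVERSE data formats.

@dataclass
class GeneratedScene:
    """Static output of a world generator: a named geometry asset."""
    name: str
    mesh_uri: str  # e.g. a 3D file produced from a text prompt

@dataclass
class Portal:
    """Navigation added by the creator, not the generator."""
    at: tuple[float, float, float]
    destination: str  # name of the target scene

@dataclass
class InteractiveScene:
    """A generated scene plus the structure that makes it usable."""
    base: GeneratedScene
    spawn_point: tuple[float, float, float] = (0.0, 0.0, 0.0)
    portals: list[Portal] = field(default_factory=list)

# Step 1: worlds come out of the generator as static assets.
hall = GeneratedScene("hall", "assets/hall.gltf")
garden = GeneratedScene("garden", "assets/garden.gltf")

# Step 2: the creator links them into a navigable, multi-scene experience.
world = {
    "hall": InteractiveScene(hall, portals=[Portal((2.0, 0.0, 5.0), "garden")]),
    "garden": InteractiveScene(garden, portals=[Portal((0.0, 0.0, -3.0), "hall")]),
}

for name, scene in world.items():
    exits = ", ".join(p.destination for p in scene.portals)
    print(f"{name}: loads {scene.base.mesh_uri}, exits to {exits}")
```

Note that the generated world occupies only the first type; the experience lives in the layers the creator wraps around it.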
To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble’s generation system to create a more structured, multi-scene game. These projects are not just demos. They serve as proof that AI-generated worlds can evolve beyond static visuals and become interactive environments.
This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.
Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.
Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.