Ecosystem Spotlights

How AutoFlight’s Five-Tonne Matrix Could Solve the eVTOL Profitability Puzzle

AutoFlight’s five-tonne Matrix bets on heavy payloads and regional range to prove the case for electric flight

Updated February 10, 2026, 12:56 PM

A multirotor flying through a blue sky. PHOTO: UNSPLASH

The nascent industry of electric vertical takeoff and landing (eVTOL) aircraft has long been defined by a specific set of limitations: small payloads, short distances and a primary focus on urban air taxis. AutoFlight, a Chinese aviation startup, recently moved to shift that narrative by unveiling "Matrix," a five-tonne aircraft that represents a significant leap in scale for electric aviation.

In a demonstration at the company’s flight test center, the Matrix completed a full transition flight—the technically demanding process of switching from vertical lift-off to forward wing-borne flight and back to a vertical landing. While small-scale drones and four-seat prototypes have become increasingly common, this marks the first time an electric aircraft of this mass has successfully executed the maneuver.

The sheer scale of the Matrix places it in a different category than the "flying cars" currently being tested for hops over city traffic. With a maximum takeoff weight of 5,700 kilograms (roughly 12,500 pounds), the aircraft has the footprint of a traditional regional turboprop, boasting a 20-meter wingspan. Its size allows for configurations that the industry has previously struggled to accommodate, including a ten-seat business class cabin or a cargo hold capable of carrying 1,500 kilograms of freight.

This increased capacity is more than a feat of engineering; it is a direct attempt to solve the financial hurdles that have plagued the sector, and to answer the skepticism industry analysts have often expressed about the economic viability of smaller eVTOLs. Those critics frequently cite the high cost of operating an aircraft that carries only a handful of passengers as a barrier to profitability.

AutoFlight’s founder and CEO, Tian Yu, suggested the Matrix is a direct response to those concerns. “Matrix is not just a rising star in the aviation industry, but also an ambitious disruptor,” Yu stated. “It will eliminate the industry perception that eVTOL = short-haul, low payload and reshape the rules of eVTOL routes. Through economies of scale, it significantly reduces transportation costs per seat-kilometer and per ton-kilometer, thus revolutionizing costs and driving profitability.”
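The per-seat-kilometer arithmetic behind that claim is simple to sketch. The example below is a hypothetical back-of-envelope comparison, not AutoFlight data: the hourly operating costs, cruise speeds and seat counts are placeholder figures chosen only to show how spreading a largely fixed operating cost over more seats lowers the per-seat cost.

```python
# Hypothetical back-of-envelope comparison of cost per seat-kilometer.
# All figures are illustrative placeholders, not AutoFlight or industry
# data; the point is only the arithmetic of dividing a roughly fixed
# hourly operating cost across more seats and more distance.

def cost_per_seat_km(hourly_operating_cost: float, cruise_speed_kmh: float, seats: int) -> float:
    """Direct operating cost allocated per seat, per kilometer flown."""
    cost_per_km = hourly_operating_cost / cruise_speed_kmh
    return cost_per_km / seats

# A small four-seat air taxi versus a larger ten-seat regional eVTOL.
small = cost_per_seat_km(hourly_operating_cost=900, cruise_speed_kmh=180, seats=4)
large = cost_per_seat_km(hourly_operating_cost=1500, cruise_speed_kmh=200, seats=10)

print(f"Four-seat eVTOL: {small:.2f} per seat-km")  # 1.25
print(f"Ten-seat eVTOL:  {large:.2f} per seat-km")  # 0.75
```

Even with a much higher hourly cost, the larger airframe comes out ahead in this toy example, which is the logic behind the "economies of scale" argument.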

To achieve this, the aircraft utilizes a "lift and cruise" configuration. In simple terms, this means the plane uses one set of dedicated rotors to lift it off the ground like a helicopter, but once it reaches a certain speed, it uses a separate propeller to fly forward like a traditional airplane, allowing the wings to provide the lift. This design is paired with a distinctive "triplane" layout—three layers of wings—and a six-arm structure to keep the massive frame stable.
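A toy sketch can make the transition concrete: lift blends from the dedicated rotors to the wings as airspeed builds. This is purely illustrative and is not AutoFlight's flight-control logic; the 40 m/s wing-borne speed is an arbitrary placeholder.

```python
# Toy illustration of a lift-and-cruise transition: dedicated lift
# rotors carry the aircraft at low airspeed, and their share of the
# work fades as the wings take over. Simplified sketch only; the
# wing-borne speed is an arbitrary placeholder, not an AutoFlight spec.

def lift_rotor_fraction(airspeed_ms: float, wingborne_speed_ms: float = 40.0) -> float:
    """Fraction of total lift still carried by the vertical rotors."""
    if airspeed_ms <= 0:
        return 1.0                       # hover: rotors carry everything
    if airspeed_ms >= wingborne_speed_ms:
        return 0.0                       # cruise: wings carry everything
    return 1.0 - airspeed_ms / wingborne_speed_ms  # linear blend in between

for v in (0, 10, 20, 30, 40):
    print(f"{v:>2} m/s -> rotors carry {lift_rotor_fraction(v):.0%} of the lift")
```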

These features allow the Matrix to serve a variety of roles. For the "low-altitude economy" being promoted by Chinese regulators, the startup is offering a pure electric model with a 250-kilometer range for regional hops, alongside a hybrid-electric version capable of traveling 1,500 kilometers. The latter version, equipped with a forward-opening door to fit standard air freight containers, targets a logistics sector still heavily reliant on carbon-intensive trucking.

However, the road to commercial flight remains a steep one. Despite the successful flight demonstration, AutoFlight faces the same formidable headwinds as its competitors, such as a complex global regulatory landscape and the rigorous demands of airworthiness certification. While the Matrix validates the company's high-power propulsion, moving from a test-center demonstration to a commercial fleet will require years of safety data.

Nevertheless, the debut of the Matrix signals a maturation of the startup’s ambitions. Having previously developed smaller models for autonomous logistics and urban mobility, AutoFlight is now betting that the future of electric flight isn't just in avoiding gridlock, but in hauling the weight of regional commerce. Whether the infrastructure and regulators are ready to accommodate a five-tonne electric disruptor remains the industry's unanswered question.


Artificial Intelligence

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated January 8, 2026, 6:31 PM

The inside of a data center, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
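The efficiency gap comes down to basic heat transport: the coolant flow needed to carry away a heat load Q at a temperature rise ΔT is Q divided by the coolant's specific heat times ΔT. In the sketch below, the 100 kW rack load and 10-kelvin rise are assumptions for illustration, not Supermicro figures; only the specific heats and densities are standard textbook values.

```python
# Rough physics of air vs. liquid cooling: required coolant mass flow
# is m_dot = Q / (c_p * delta_T). The 100 kW rack load and 10 K rise
# are illustrative assumptions, not Supermicro specifications; the
# material properties are standard values for air and water.

Q_WATTS = 100_000        # assumed heat load for one dense GPU rack
DELTA_T_K = 10.0         # assumed coolant temperature rise

CP_AIR, RHO_AIR = 1005.0, 1.2          # J/(kg*K), kg/m^3
CP_WATER, RHO_WATER = 4186.0, 998.0    # J/(kg*K), kg/m^3

air_kg_s = Q_WATTS / (CP_AIR * DELTA_T_K)
water_kg_s = Q_WATTS / (CP_WATER * DELTA_T_K)

print(f"Air:   {air_kg_s:.1f} kg/s, roughly {air_kg_s / RHO_AIR:.1f} m^3/s of airflow")
print(f"Water: {water_kg_s:.1f} kg/s, roughly {water_kg_s / RHO_WATER * 1000:.1f} L/s of flow")
```

Water's far higher heat capacity and density mean a modest pumped flow can carry away what would otherwise require thousands of cubic meters of moving air per hour.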

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.