Climate & Energy

Turning Wasted Heat Into Real-World Value: How Canaan Is Rethinking Energy Use in Computing

Turning computing heat into a practical heating solution for greenhouses.

Updated: January 8, 2026 6:27 PM

The inside of a workstation computer with red lighting. PHOTO: UNSPLASH

Most computing systems have one unavoidable side effect: they get hot. That heat is usually treated as a problem and pushed away using cooling systems. Canaan Inc., a technology company that builds high-performance computing machines, is now showing how that same heat can be reused instead of wasted.

In a pilot project in Manitoba, Canada, Canaan is working with greenhouse operator Bitforest Investment to recover heat generated by its computing systems. Rather than focusing only on computing output, the project asks a more basic question: what happens to all the heat these machines produce, and can it serve a practical purpose?

The idea is simple. Canaan’s computers run continuously and naturally generate heat. Instead of releasing that heat into the environment, the system captures it and uses it to warm water. That warm water is then fed into the greenhouse’s existing heating system. As a result, the greenhouse needs less additional energy to maintain the temperatures required for plant growth.

This is enabled through liquid cooling. Instead of using air to cool the machines, a liquid circulates through the system and absorbs heat more efficiently. Because water absorbs and carries heat far more effectively than air, the recovered water reaches temperatures that are suitable for industrial use. In effect, the computing system supports greenhouse heating while continuing to perform its primary computing function.
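
As a rough illustration of the physics, here is a minimal Python sketch that assumes, hypothetically, that nearly all of the machines' electrical input ends up as heat in the water loop; the load, flow rate and inlet temperature are made up for the example, not figures from the pilot.

```python
# Back-of-the-envelope heat-recovery sketch. Assumes (hypothetically) that
# nearly all electrical input to the machines ends up as heat in the water
# loop; the load, flow rate and inlet temperature below are illustrative,
# not Canaan's published figures.

SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)

def outlet_temp_c(power_w: float, flow_kg_s: float, inlet_c: float) -> float:
    """Steady-state outlet temperature from Q = m_dot * c * delta_T."""
    return inlet_c + power_w / (flow_kg_s * SPECIFIC_HEAT_WATER)

# Hypothetical: 100 kW of computing load, 1.5 kg/s of water entering at 35 C.
print(f"Outlet: {outlet_temp_c(100_000, 1.5, 35.0):.1f} C")  # -> 50.9 C
```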

What makes this approach workable is that it integrates with existing infrastructure. The recovered heat does not replace the greenhouse's boilers but supplements them. Preheating the water that enters the boiler system reduces the overall energy demand. Based on current assumptions, Canaan estimates that a significant portion of the electricity used by the servers can be recovered as usable heat, though actual results will be confirmed once the system is fully operational.
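
A minimal sketch of the preheating arithmetic, assuming hypothetical temperatures, flow rate and boiler efficiency rather than project data: when the water arrives warmer, the boiler supplies less of the temperature lift itself.

```python
# Illustrative only: preheated return water lowers the lift the boiler must
# provide, and therefore the fuel power it draws. All inputs are assumed.

SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)

def boiler_power_w(flow_kg_s: float, inlet_c: float, target_c: float,
                   efficiency: float = 0.9) -> float:
    """Fuel power needed to lift water from inlet_c to target_c."""
    heat_w = flow_kg_s * SPECIFIC_HEAT_WATER * max(target_c - inlet_c, 0.0)
    return heat_w / efficiency

baseline = boiler_power_w(2.0, 40.0, 70.0)   # no heat recovery
preheated = boiler_power_w(2.0, 52.0, 70.0)  # server loop preheats to 52 C
print(f"Boiler load drops {1 - preheated / baseline:.0%}")  # -> drops 40%
```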

This matters because heating is one of the largest energy expenses for commercial greenhouses, particularly in colder regions like Canada. Many facilities still rely heavily on fossil-fuel-based heating, and policies such as carbon pricing are encouraging lower-emission alternatives. Reusing computing heat offers a way to improve efficiency without requiring a complete overhaul of existing systems.

The project is planned to run for an initial two-year period, allowing Canaan to evaluate real-world performance factors such as reliability, system stability and maintenance needs. These findings will help determine whether the model can be replicated in other agricultural or industrial settings.

More broadly, the initiative reflects a shift in how computing infrastructure can be designed. Instead of operating as energy-intensive systems isolated from everyday use, computing equipment can contribute to real-world applications. Canaan’s greenhouse pilot highlights how excess heat—often seen as a by-product—can become part of a more efficient and thoughtful energy loop.

In doing so, the project suggests that improving sustainability in technology is not only about reducing energy consumption, but also about finding smarter ways to reuse the energy already being generated.


Artificial Intelligence

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated: January 8, 2026 6:31 PM

The inside of a data center, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
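
A rough sketch of why liquid wins on heat removal: the volumetric flow needed to carry the same heat at the same temperature rise, using standard textbook properties for air and water. The 120 kW rack load and 10 K coolant temperature rise are assumptions for illustration, not vendor figures.

```python
# Compare the volumetric flow air and water need to carry the same heat
# load at the same temperature rise. Density and specific-heat values are
# standard room-temperature figures; the rack load is hypothetical.

def flow_m3_s(power_w: float, density: float, c_p: float, dt_k: float) -> float:
    """Volumetric flow required to absorb power_w at a dt_k temperature rise."""
    return power_w / (density * c_p * dt_k)

RACK_W, DT = 120_000, 10.0
air = flow_m3_s(RACK_W, 1.2, 1005, DT)    # air: ~1.2 kg/m^3, 1005 J/(kg*K)
water = flow_m3_s(RACK_W, 998, 4186, DT)  # water: ~998 kg/m^3, 4186 J/(kg*K)
print(f"Air: {air:.1f} m^3/s vs water: {water * 1000:.2f} L/s "
      f"(~{air / water:,.0f}x less volume for water)")
```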

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
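
To see the density argument in rough numbers, here is a toy Python sketch; the GPUs-per-rack figures are hypothetical illustrations, not Supermicro or NVIDIA specifications.

```python
# Toy rack-count arithmetic (all figures hypothetical): packing the same
# GPU fleet into denser liquid-cooled racks shrinks the number of racks,
# and with it the floor space, the fleet requires.
import math

def racks_needed(total_gpus: int, gpus_per_rack: int) -> int:
    """Racks required to house a fleet at a given per-rack density."""
    return math.ceil(total_gpus / gpus_per_rack)

FLEET = 4_096  # hypothetical GPU count for one training cluster
print("Air-cooled   :", racks_needed(FLEET, 32), "racks")  # -> 128
print("Liquid-cooled:", racks_needed(FLEET, 72), "racks")  # -> 57
```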

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.