
The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated January 8, 2026, 6:31 PM


As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
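
To get a feel for the physics, consider a back-of-the-envelope sketch. The heat a coolant loop can remove is Q = ṁ·c·ΔT (mass flow times specific heat times temperature rise). The rack power, temperature rise and other figures below are illustrative assumptions, not numbers from the announcement:

```python
# Back-of-the-envelope sketch: coolant flow needed to carry away rack heat.
# Every figure here is an illustrative assumption, not a Supermicro/NVIDIA spec.

rack_heat_w = 120_000    # assumed heat load of a dense GPU rack (120 kW)
specific_heat = 4186     # J/(kg*K), specific heat capacity of water
delta_t = 10             # K, assumed coolant temperature rise across the rack

# Q = m_dot * c * dT, solved for the mass flow rate m_dot
mass_flow = rack_heat_w / (specific_heat * delta_t)   # kg/s
litres_per_min = mass_flow * 60                       # water is ~1 kg per litre

print(f"~{mass_flow:.1f} kg/s of coolant, roughly {litres_per_min:.0f} L/min")
```

Moving the same heat with air would take a vastly larger volume flow, because a litre of air holds roughly 3,500 times less heat per degree than a litre of water. That difference is why liquid cooling scales with rack density in a way fans cannot.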

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets but in electricity use, water consumption and physical footprint. Traditional air cooling is increasingly a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
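
The electricity side of that equation is easy to sketch. In the minimal calculation below, the cluster size, per-GPU draw and power price are all placeholder assumptions rather than figures from either company:

```python
# Illustrative only: rough annual electricity cost of a GPU cluster.
# Cluster size, per-GPU draw and power price are placeholder assumptions.

gpus = 10_000            # assumed number of GPUs in the cluster
watts_per_gpu = 1_000    # assumed average draw per GPU under load
hours_per_year = 24 * 365
price_per_kwh = 0.10     # assumed electricity price, $/kWh

energy_kwh = gpus * watts_per_gpu / 1_000 * hours_per_year
print(f"{energy_kwh:,.0f} kWh/year -> ${energy_kwh * price_per_kwh:,.0f} per year")
```

On those assumptions, the compute alone draws 10 MW continuously and costs nearly $9 million a year in electricity, before counting a single watt spent on cooling.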

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more compute without an equally dramatic rise in energy waste.
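
A standard way to quantify that tension is power usage effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. The PUE values in the sketch below are common industry ballparks, not measurements from these systems:

```python
# Illustrative PUE comparison. PUE = total facility power / IT power.
# The two PUE values are rough industry ballparks, not measured figures.

it_load_mw = 10.0    # assumed compute (IT) load of the facility, in MW
scenarios = {"air-cooled (PUE ~1.5)": 1.5,
             "liquid-cooled (PUE ~1.1)": 1.1}

for label, pue in scenarios.items():
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"{label}: {total_mw:.1f} MW total, {overhead_mw:.1f} MW of overhead")
```

Under those assumptions, liquid cooling cuts facility overhead from 5 MW to 1 MW for the same 10 MW of compute, which is the kind of margin that decides whether an AI factory fits within a site’s power budget.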

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.


Can a Toy Teach a Child to Read Like a Human Would? Inside the Rise of AI Reading Companions

A closer look at how reading, conversation, and AI are being combined

Updated February 7, 2026, 2:18 PM


In the past, “educational toys” usually meant flashcards, prerecorded stories or apps that asked children to tap a screen. ChooChoo takes a different approach. It is designed not to talk at children, but to talk with them.

ChooChoo is an AI-powered interactive reading companion built for children aged three to six. Instead of playing stories passively, it engages kids in conversation while reading. It asks questions, reacts to answers, introduces new words in context and adjusts the story flow based on how the child responds. The goal is not entertainment alone, but language development through dialogue.
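
As a purely hypothetical illustration of what such a turn-taking loop might look like (nothing below comes from ChooChoo’s actual implementation, which is not public; the page structure, prompt and word-matching rule are all invented):

```python
# Hypothetical sketch of a dialogic reading loop. ChooChoo's actual
# implementation is not public; every name and rule here is invented.
from dataclasses import dataclass

@dataclass
class Page:
    text: str
    question: str
    new_word: str | None = None

def read_dialogically(pages: list[Page]) -> list[str]:
    """Narrate each page, ask an open question, introduce words in context."""
    practiced = []
    for page in pages:
        print(page.text)                # narrate the page
        print(f"Q: {page.question}")    # turn narration into a two-way exchange
        answer = input("> ")            # stand-in for on-device speech recognition
        if page.new_word and page.new_word.lower() in answer.lower():
            practiced.append(page.new_word)  # child used the word: count it
    return practiced

story = [Page("The little train climbed the steep hill.",
              "Why do you think the hill was hard to climb?", "steep")]
print("Words practiced:", read_dialogically(story))
```

A real system would use speech recognition and a language model to interpret the answer; the point of the sketch is the turn-taking structure, not the scoring rule.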

That idea is rooted in research, not novelty. ChooChoo is inspired by dialogic reading methods from Yale’s early childhood language development work, which show that children learn language faster when stories become two-way conversations rather than one-way narration. Used consistently, this approach has been shown to improve vocabulary, comprehension and confidence within weeks.

The project was created by Dr. Diana Zhu, who holds a PhD from Yale and focused her work on how children acquire language. Her aim with ChooChoo was to turn academic insight into something practical and warm enough to live in a child’s room. The result is a device that listens, responds and adapts instead of simply playing content on command.

What makes this possible is not just AI, but where that AI runs.

Unlike many smart toys that rely heavily on the cloud, ChooChoo is built on RiseLink’s edge AI platform. That means much of the intelligence happens directly on the device itself rather than being sent back and forth to remote servers. This design choice has three major implications.

First, it reduces delay. Conversations feel natural because the toy can respond almost instantly. Second, it lowers power consumption, allowing the device to stay “always on” without draining the battery quickly. Third, it improves privacy. Sensitive interactions are processed locally instead of being continuously streamed online.
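
The latency point, at least, is simple arithmetic. In the sketch below, the inference and network times are assumed, illustrative values; the article gives no figures for RiseLink’s hardware:

```python
# Illustrative edge-vs-cloud latency comparison per conversational turn.
# All timings are assumptions chosen for illustration, not measured values.

on_device_ms = 50        # assumed on-device inference time
cloud_compute_ms = 50    # assume the same model work done server-side
network_rtt_ms = 150     # assumed Wi-Fi + internet round trip

edge_turn = on_device_ms
cloud_turn = network_rtt_ms + cloud_compute_ms

print(f"edge: ~{edge_turn} ms per turn, cloud: ~{cloud_turn} ms per turn")
```

The per-turn gap compounds across an entire back-and-forth story session, which is why on-device processing changes how natural the conversation feels.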

RiseLink’s hardware, including its ultra-low-power AI system-on-chip designs, is already used at large scale in consumer electronics. The company ships hundreds of millions of connected chips every year and works with global brands like LG, Samsung, Midea and Hisense. In ChooChoo’s case, that same industrial-grade reliability is being applied to a child’s learning environment.

The result is a toy that behaves less like a gadget and more like a conversational partner. It engages children in back-and-forth discussion during stories, introduces new vocabulary in natural context, pays attention to comprehension and emotional language, and adjusts its pace and tone based on each child’s interests and progress. Parents can also view progress through an optional app that shows which words their child has learned and how the system is adjusting over time.
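
The article doesn’t describe how that progress data is structured, but a minimal, hypothetical sketch of the idea might look like this (all names are invented):

```python
# Hypothetical sketch of per-child vocabulary progress tracking.
# Field names are invented; the real app's data model is not described here.
from collections import Counter
from datetime import date

class ProgressTracker:
    def __init__(self):
        self.word_exposures = Counter()   # how often each word came up in stories
        self.first_heard = {}             # date each word was first introduced

    def record(self, word: str):
        if word not in self.first_heard:
            self.first_heard[word] = date.today()
        self.word_exposures[word] += 1

    def report(self) -> dict:
        """Summary a parent-facing app could display."""
        return {w: {"exposures": n, "since": self.first_heard[w].isoformat()}
                for w, n in self.word_exposures.most_common()}

tracker = ProgressTracker()
for w in ["steep", "steep", "climb"]:
    tracker.record(w)
print(tracker.report())
```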

What matters here is not that ChooChoo is “smart,” but that it reflects a shift in how technology enters early education. Instead of replacing teachers or parents, tools like this are designed to support human interaction by modeling it. The emphasis is on listening, responding and encouraging curiosity rather than testing or drilling.

That same philosophy is starting to shape the future of companion robots more broadly. As edge AI improves and hardware becomes smaller and more energy efficient, we are likely to see more devices that live alongside people instead of in front of them. Not just toys, but helpers, tutors and assistants that operate quietly in the background, responding when needed and staying out of the way when not.

In that sense, ChooChoo is less about novelty and more about direction. It shows what happens when AI is designed not for spectacle, but for presence. Not for control, but for conversation.

If companion robots become part of daily life in the coming years, their success may depend less on how powerful they are and more on how well they understand when to speak, when to listen and how to grow with the people who use them.