From information gaps to global access — how AI is reshaping the pursuit of knowledge.
Updated
November 28, 2025 4:18 PM
Paper cut-outs of robots sitting on a pile of books. PHOTO: FREEPIK
Encyclopaedias have always been mirrors of their time — from heavy leather-bound volumes in the 19th century to Wikipedia’s community-edited pages online. But as the world’s information multiplies faster than humans can catalogue it, even open platforms struggle to keep pace. Enter Botipedia, a new project from INSEAD, The Business School for the World, that reimagines how knowledge can be created, verified and shared using artificial intelligence.
At its core, Botipedia is powered by proprietary AI that automates the process of writing encyclopaedia entries. Instead of relying on volunteers or editors, it uses a system called Dynamic Multi-method Generation (DMG) — a method that combines hundreds of algorithms and curated datasets to produce high-quality, verifiable content. This AI doesn’t just summarise what already exists; it synthesises information from archives, satellite feeds and data libraries to generate original text grounded in facts.
What makes this innovation significant is the gap it fills in global access to knowledge. While the English-language Wikipedia alone hosts close to 7 million articles, out of roughly 64 million entries across all of its language editions, languages like Swahili have fewer than 40,000 — leaving most of the world’s population outside the circle of easily available online information. Botipedia aims to close that gap by generating over 400 billion entries across 100 languages, ensuring that no subject, event or region is overlooked.
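To make the scale of that target concrete, here is a quick back-of-the-envelope calculation in Python using only the figures quoted above. The even split across languages is an assumption made for illustration; the project has not said how entries will be distributed.

```python
# Back-of-the-envelope scale comparison using the figures quoted above.
wikipedia_total_entries = 64_000_000   # entries across all Wikipedia language editions
swahili_entries = 40_000               # upper bound cited for Swahili
botipedia_target = 400_000_000_000     # Botipedia's stated target
botipedia_languages = 100

per_language_average = botipedia_target / botipedia_languages  # assumes an even split
print(f"Average entries per language: {per_language_average:,.0f}")                                    # 4,000,000,000
print(f"Multiple of all Wikipedia entries today: {botipedia_target / wikipedia_total_entries:,.0f}x")  # 6,250x
print(f"Multiple of Swahili Wikipedia today: {per_language_average / swahili_entries:,.0f}x")          # 100,000x
```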
"We are creating Botipedia to provide everyone with equal access to information, with no language left behind", says Phil Parker, INSEAD Chaired Professor of Management Science, creator of Botipedia and holder of one of the pioneering patents in the field of generative AI. "We focus on content grounded in data and sources with full provenance, allowing the user to see as many perspectives as possible, as opposed to one potentially biased source".
Unlike many generative AI tools that depend on large language models (LLMs), Botipedia adapts its methods based on the type of content. For instance, weather data is generated using geo-spatial techniques to cover every possible coordinate on Earth. This targeted, multi-method approach helps boost both the accuracy and reliability of what it produces — key challenges in today’s AI-driven content landscape.
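Botipedia's DMG system is proprietary, so the sketch below is not its actual code. It is a hypothetical Python illustration of the general idea described here: routing each content type to a specialised generation method rather than sending everything through a single LLM. All function and variable names are placeholders.

```python
# Hypothetical illustration of a multi-method approach: route each content type
# to a specialised generator instead of one general-purpose model.
# None of these names come from Botipedia; they are placeholders.
from typing import Callable, Dict

def generate_weather_entry(lat: float, lon: float) -> str:
    # Placeholder for a geo-spatial method keyed to coordinates.
    return f"Climate summary for ({lat:.2f}, {lon:.2f}) derived from gridded weather data."

def generate_biography_entry(name: str) -> str:
    # Placeholder for a template-plus-sources method with cited provenance.
    return f"Biography of {name}, compiled from archival records with full provenance."

GENERATORS: Dict[str, Callable[..., str]] = {
    "weather": generate_weather_entry,
    "biography": generate_biography_entry,
}

def generate_entry(content_type: str, **kwargs) -> str:
    # Dispatch to the method suited to this content type.
    if content_type not in GENERATORS:
        raise ValueError(f"No generation method registered for '{content_type}'")
    return GENERATORS[content_type](**kwargs)

print(generate_entry("weather", lat=1.29, lon=36.82))
```

The point of the pattern is simply that accuracy comes from matching the method to the material, which is the targeted approach the article describes.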
The innovation is also energy-efficient. Its DMG system operates at a fraction of the processing power required by GPU-heavy models like ChatGPT, making it a sustainable alternative for large-scale content generation.
By combining AI precision, linguistic inclusivity and academic credibility, Botipedia positions itself as more than a digital library — it’s a step toward universal, unbiased access to verified knowledge.
"Botipedia is one of many initiatives of the Human and Machine Intelligence Institute (HUMII) that we are establishing at INSEAD", says Lily Fang, Dean of Research and Innovation at INSEAD. "It is a practical application that builds on INSEAD-linked IP to help people make better decisions with knowledge powered by technology. We want technologies that enhance the quality and meaning of our work and life, to retain human agency and value in the age of intelligence".
By harnessing AI to bridge gaps of language, geography and credibility, Botipedia points to a future where access to knowledge is no longer a privilege, but a shared global resource.
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.
Updated
December 16, 2025 3:43 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
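A rough sense of why liquid wins comes from the basic relation P = m_dot * c_p * dT: the heat carried away equals mass flow times specific heat times temperature rise. The sketch below compares water and air for an assumed 120 kW rack and a 10 °C coolant temperature rise; those are illustrative round numbers, not Supermicro or NVIDIA specifications.

```python
# Illustrative heat-removal comparison using P = m_dot * c_p * dT.
# The rack power and temperature rise are assumed round numbers, not vendor figures.
RACK_POWER_W = 120_000   # assumed heat load for a dense GPU rack
DELTA_T_K = 10.0         # assumed coolant temperature rise

CP_WATER = 4186.0        # J/(kg*K), specific heat of water
CP_AIR = 1005.0          # J/(kg*K), specific heat of air
RHO_WATER = 1000.0       # kg/m^3
RHO_AIR = 1.2            # kg/m^3 at roughly room conditions

def volumetric_flow(power_w: float, cp: float, rho: float, delta_t: float) -> float:
    """Volume flow (m^3/s) needed to carry away power_w watts at a given temperature rise."""
    mass_flow = power_w / (cp * delta_t)  # kg/s
    return mass_flow / rho                # m^3/s

water_flow = volumetric_flow(RACK_POWER_W, CP_WATER, RHO_WATER, DELTA_T_K)
air_flow = volumetric_flow(RACK_POWER_W, CP_AIR, RHO_AIR, DELTA_T_K)

print(f"Water: {water_flow * 1000:.1f} L/s")                         # ~2.9 L/s
print(f"Air:   {air_flow:.1f} m^3/s")                                # ~10 m^3/s
print(f"Air needs ~{air_flow / water_flow:,.0f}x the volume flow")   # ~3,500x
```

Water carries far more heat per unit volume than air, which is why piping coolant to the chips can keep up with rack densities that fans and chillers struggle to handle.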
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
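One common way to express that tension is Power Usage Effectiveness (PUE), the ratio of total facility power to the power actually consumed by the IT equipment. The sketch below compares an assumed air-cooled facility with an assumed liquid-cooled one; the PUE values and IT load are illustrative, industry-typical round numbers rather than figures from this announcement.

```python
# Illustrative effect of cooling efficiency on facility energy, via PUE
# (Power Usage Effectiveness = total facility power / IT power).
# The PUE values and IT load are assumed round numbers, not vendor figures.
IT_LOAD_MW = 10.0
HOURS_PER_YEAR = 8760

def annual_energy_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy per year (MWh) for a given IT load and PUE."""
    return it_load_mw * pue * HOURS_PER_YEAR

air_cooled = annual_energy_mwh(IT_LOAD_MW, pue=1.5)      # assumed air-cooled facility
liquid_cooled = annual_energy_mwh(IT_LOAD_MW, pue=1.15)  # assumed liquid-cooled facility

print(f"Air-cooled:    {air_cooled:,.0f} MWh/yr")        # 131,400
print(f"Liquid-cooled: {liquid_cooled:,.0f} MWh/yr")     # 100,740
print(f"Saved:         {air_cooled - liquid_cooled:,.0f} MWh/yr")
```

Even a modest reduction in cooling overhead compounds into tens of thousands of megawatt-hours per year at this scale, which is the efficiency argument behind the shift to liquid cooling.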
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.