Structured AI interviews and human judgment combine to address the global talent shortage
Updated
March 4, 2026 4:46 PM

ManpowerGroup World Headquarters in Milwaukee. PHOTO: ADOBE STOCK
As hiring pressures mount across global markets, ManpowerGroup is turning to technology to strengthen how it connects people to work. The workforce-solutions company has announced a global partnership with Hubert, a startup focused on AI-driven structured interviews. The aim is simple: make hiring faster and fairer, without removing the human touch.
ManpowerGroup has spent decades operating at the center of the global labor market. The company works with employers across industries to fill roles, manage workforce planning and build talent pipelines. With millions of placements each year, it has a clear view of how strained hiring has become. A large share of employers today report difficulty finding skilled talent. At the same time, candidates expect more transparency, quicker feedback and flexibility in how they engage with employers.
Hubert enters this picture as a specialist in structured digital interviewing. The startup has built tools that allow candidates to complete interviews online, at any time, while being assessed against consistent criteria. Instead of relying on informal screening calls or resume filters, its system focuses on standardized questions tied directly to job requirements. The idea is to bring more consistency to early-stage hiring.
The partnership brings these capabilities into ManpowerGroup’s global operations. AI-powered interviews will now support the first stage of screening, helping recruiters identify qualified candidates earlier in the process. This does not replace recruiters. Final decisions and contextual judgment remain with experienced hiring professionals. What changes is the speed and structure of the initial assessment.
For employers, this could mean earlier visibility into job-ready talent and less time spent on manual screening. For candidates, it offers more flexibility. A significant portion of interviews on Hubert’s platform are completed outside regular office hours, allowing applicants to engage when it suits them. That flexibility can make a difference in competitive labor markets where timing matters.
The collaboration is also positioned as a step toward reducing bias. By evaluating each candidate against the same transparent standards, the process becomes more consistent. While no system can remove bias entirely, structured assessments can reduce the variability that often comes with unstructured interviews.
At its core, the partnership addresses a gap many large organizations are facing. They need scale and speed, but they cannot afford to lose the human judgment that good hiring depends on. Manual processes are too slow. Fully automated systems can feel impersonal and risky. ManpowerGroup’s approach suggests a middle path, where technology handles repetition and structure and recruiters focus on potential and fit.
The move also reflects a broader shift in the workforce industry. AI is no longer being tested on the sidelines. It is being built into the foundation of hiring operations. For established players like ManpowerGroup, the challenge is not whether to adopt AI, but how to do so responsibly and at scale.
By working with Hubert, the company is signaling that the future of recruitment will likely blend structured digital tools with human expertise. In a market defined by talent shortages and rising expectations, that balance may prove critical.
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling
Updated
January 8, 2026 6:31 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
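The efficiency gap between air and liquid cooling comes down to basic physics: water absorbs far more heat per unit volume than air does. The sketch below is a back-of-envelope illustration using standard textbook values for density and specific heat at room temperature; the figures are not from Supermicro or NVIDIA, and real systems use engineered coolants and flow designs that differ from plain water.

```python
# Back-of-envelope: why liquid moves data-center heat more efficiently than air.
# Constants are approximate room-temperature textbook values, not vendor figures.
AIR_DENSITY = 1.2           # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)
WATER_DENSITY = 1000        # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def volumetric_heat_capacity(density: float, specific_heat: float) -> float:
    """Heat absorbed per cubic metre of coolant per degree of temperature rise."""
    return density * specific_heat  # J/(m^3*K)

air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)

print(f"Air:   {air:,.0f} J/(m^3*K)")
print(f"Water: {water:,.0f} J/(m^3*K)")
print(f"Water carries roughly {water / air:,.0f}x more heat per unit volume")
```

By this rough measure, a given volume of water can carry on the order of a few thousand times more heat than the same volume of air for the same temperature rise, which is why liquid loops can cool far denser racks without moving enormous volumes of chilled air.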
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.