As airports grow more complex, the real innovation lies in making their systems simpler, faster, and easier to act on
Updated
March 24, 2026 5:55 PM

An airplane parked at Josep Tarradellas Barcelona-El Prat Airport. PHOTO: UNSPLASH
Airports are some of the most complex systems in the world. Every day, they manage thousands of flights, passengers, crew schedules, gates and ground operations—all moving at the same time. But much of this still runs on legacy software that doesn't share data easily, making simple decisions harder than they need to be.
This is the gap companies like AirportLabs are trying to address. Instead of relying on multiple disconnected systems, their approach brings airport operations into one cloud-based platform. The goal is straightforward: take scattered data and turn it into something teams can actually use in real time.
In practice, this means combining core systems like flight databases, resource management and display systems into a single interface. When everything is connected, airport staff can respond faster—whether it’s adjusting gate assignments, managing delays, or coordinating ground crews. Rather than reacting late, decisions can be made as situations unfold.
Another shift is how this technology is built. Traditional airport systems often require heavy on-site infrastructure and long deployment timelines. In contrast, cloud-based platforms remove much of that complexity. Updates are faster, systems are easier to scale and teams spend less time maintaining servers and more time improving operations.
What stands out is the speed of adoption. Instead of multi-year rollouts, newer systems can be implemented in weeks, allowing airports to see improvements much sooner.
At a broader level, this reflects a familiar pattern seen across industries. As operations become more data-heavy, the advantage shifts to those who can simplify complexity. In aviation, that doesn’t just mean better technology—it means making the entire system easier to run.
Keep Reading
Why investors are backing Applied Brain Research’s on-device voice AI approach.
Updated
January 28, 2026 5:53 PM

Plastic model of a human's brain. PHOTO: UNSPLASH
Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.
The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.
ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.
ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.
Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.
This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices, such as wearables or mobile robotics, cannot support traditional voice AI systems without compromising battery life or responsiveness. The TSP1 addresses this limitation by running at significantly lower power than conventional chips: according to the company, full speech-to-text and text-to-speech can run at under 30 milliwatts, roughly 10 to 100 times less than many existing alternatives. This level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
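To put the company's 30-milliwatt figure in perspective, a rough back-of-envelope calculation shows what it could mean for battery life. The 30 mW number comes from ABR's claim above; the 100 mAh wearable battery, the 3.7 V nominal voltage and the 10x conventional baseline are illustrative assumptions, not figures from the company:

```python
# Back-of-envelope battery-life comparison for continuous voice AI.
# Only the 30 mW figure is from ABR's claim; battery size, voltage and
# the 10x baseline (low end of the stated 10-100x range) are assumptions.
battery_mah = 100                        # assumed small wearable battery
battery_v = 3.7                          # typical Li-ion nominal voltage
energy_mwh = battery_mah * battery_v     # 370 mWh of stored energy

tsp1_mw = 30        # claimed power for full speech-to-text + text-to-speech
baseline_mw = 300   # assumed conventional solution at 10x the power

hours_tsp1 = energy_mwh / tsp1_mw            # hours of continuous use on TSP1
hours_baseline = energy_mwh / baseline_mw    # hours on the assumed baseline

print(f"TSP1: {hours_tsp1:.1f} h, baseline: {hours_baseline:.1f} h")
```

Under these assumptions, the same small battery would support roughly twelve hours of continuous voice processing instead of about one, which illustrates why such a chip could matter for always-listening wearables.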
That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.
For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.
With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.