Why investors are backing Applied Brain Research’s on-device voice AI approach.
Updated
January 14, 2026 1:38 PM

Plastic model of a human brain. PHOTO: UNSPLASH
Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.
The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.
ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.
ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.
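To make the cloud-versus-local contrast concrete, here is a minimal sketch of fully on-device speech recognition using the open-source Vosk library. It is purely illustrative of the general idea — audio is transcribed without ever leaving the machine — and has no connection to ABR's own software or hardware.

```python
# Illustrative sketch: local speech-to-text with the open-source Vosk library.
# This is NOT ABR's technology; it only shows that transcription can run
# entirely on-device, with no audio sent to a cloud service.
import json
import wave

from vosk import Model, KaldiRecognizer

model = Model(lang="en-us")           # small acoustic model stored on disk
wf = wave.open("command.wav", "rb")   # hypothetical 16 kHz mono PCM recording
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)          # all inference happens locally

print(json.loads(rec.FinalResult())["text"])  # no network round trip
```

On a laptop this is trivial; ABR's pitch is doing the equivalent workload within the power budget of a wearable, which is where a purpose-built processor comes in.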
Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.
This specialization matters as voice interfaces spread across emerging products. Many edge devices, such as wearables or mobile robots, cannot support traditional voice AI without compromising battery life or responsiveness. The TSP1 addresses this by running those workloads at far lower power than conventional processors: according to the company, full speech-to-text and text-to-speech can run at under 30 milliwatts, roughly 10 to 100 times less than many existing alternatives. That level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been the limiting factor.
That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.
For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.
With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.
Keep Reading
Where smarter storage meets smarter logistics.
Updated
January 8, 2026 6:32 PM
Kioxia's flagship building at Yokohama Technology Campus. PHOTO: KIOXIA
E-commerce keeps growing and with it, the number of products moving through warehouses every day. Items vary more than ever — different shapes, seasonal packaging, limited editions and constantly updated designs. At the same time, many logistics centers are dealing with labour shortages and rising pressure to automate.
But today’s image-recognition AI isn’t built for this level of change. Most systems rely on deep-learning models that need to be adjusted or retrained whenever new products appear. Every update — whether it’s a new item or a packaging change — adds extra time, energy use and operational cost. And for warehouses handling huge product catalogs, these retraining cycles can slow everything down.
KIOXIA, a company known for its memory and storage technologies, is working on a different approach. In a new collaboration with Tsubakimoto Chain and EAGLYS, the team has developed an AI-based image recognition system that is designed to adapt more easily as product lines grow and shift. The idea is to help logistics sites automatically identify items moving through their workflows without constantly reworking the core AI model.
At the center of the system is KIOXIA’s AiSAQ software paired with its Memory-Centric AI technology. Instead of retraining the model each time new products appear, the system stores new product data — images, labels and feature information — directly in high-capacity storage. This allows warehouses to add new items quickly without altering the original AI model.
Because storing more data can lead to longer search times, the system also indexes the stored product information and transfers the index to SSD storage. This lets the AI retrieve relevant features quickly, using a Retrieval-Augmented Generation–style method adapted for image recognition.
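The core idea — register new products by storing their feature vectors rather than retraining a model, then classify by retrieval — can be sketched in a few lines. The example below is a conceptual illustration only, with made-up embeddings and a brute-force lookup; it is not KIOXIA's AiSAQ or Memory-Centric AI code, which uses SSD-resident approximate-nearest-neighbour indexes at far larger scale.

```python
# Conceptual sketch of retrieval-style image classification: adding a product
# is a data write, not a model update. Not KIOXIA's actual implementation.
import numpy as np

class FeatureStore:
    """Holds (embedding, label) pairs; registering a product is an append."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.labels: list[str] = []

    def add_product(self, embedding: np.ndarray, label: str) -> None:
        # No retraining: a new item only writes its features to storage.
        self.vectors = np.vstack([self.vectors, embedding[None, :]])
        self.labels.append(label)

    def classify(self, embedding: np.ndarray) -> str:
        # Nearest-neighbour search over stored features (brute force here;
        # a production system would use an SSD-resident ANN index instead).
        dists = np.linalg.norm(self.vectors - embedding, axis=1)
        return self.labels[int(np.argmin(dists))]

# Hypothetical usage with 4-dimensional embeddings from some image encoder.
store = FeatureStore(dim=4)
store.add_product(np.array([0.9, 0.1, 0.0, 0.2], dtype=np.float32), "snack_box_v1")
store.add_product(np.array([0.1, 0.8, 0.3, 0.0], dtype=np.float32), "snack_box_v2")
print(store.classify(np.array([0.85, 0.15, 0.05, 0.2], dtype=np.float32)))
```

The practical payoff is the one the article describes: when packaging changes or a new item arrives, the warehouse adds a few stored records instead of kicking off a retraining cycle.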
The collaboration will be showcased at the 2025 International Robot Exhibition in Tokyo. Visitors will see the system classify items in real time as they move along a conveyor, drawing on stored product features to identify them instantly. The demonstration aims to illustrate how logistics sites can handle continuously changing inventories with greater accuracy and reduced friction.
Overall, as logistics networks become increasingly busy and product lines evolve faster than ever, this memory-driven approach provides a practical way to keep automation adaptable and less fragile.