Health & Biotech

Healthcare Innovation: A New Simulator for Faster Endometriosis Diagnosis

Endometriosis often takes years to diagnose. This ultrasound simulation innovation could help change that

Updated March 17, 2026 1:01 AM

A group of women facing backwards. PHOTO: UNSPLASH

Endometriosis affects roughly one in ten women worldwide, yet diagnosing the condition often takes years. In many cases, patients experience symptoms for nearly a decade before receiving a confirmed diagnosis. One reason is that detecting endometriosis through ultrasound requires specialized training, and clinicians do not always encounter enough real cases to build that expertise.

To address this gap, medical simulation company Surgical Science has introduced a new ultrasound training module designed specifically for identifying endometriosis. The system allows clinicians to practice scanning techniques in a virtual environment, helping them recognize signs of the disease without relying solely on real-patient cases.

A key feature of the simulator is training on the “sliding sign,” an ultrasound indicator used to detect deep endometriosis. Because the condition can appear differently from patient to patient, mastering this assessment in real clinical settings can be difficult. The simulator allows clinicians to repeat the process across multiple scenarios, improving their ability to identify the condition during routine examinations.

The module also incorporates the International Deep Endometriosis Analysis (IDEA) protocol, which provides a structured method for performing a complete pelvic ultrasound assessment. Additional training cases, region-based scenarios and certification options are included to support standardized learning.

Early training results suggest strong improvements in clinician confidence and skill, including greater proficiency in transvaginal ultrasound and better recognition of deep endometriosis. By expanding access to structured ultrasound training, simulation tools like this could help reduce diagnostic delays and improve care for millions of women living with the condition.


Startup Profiles

Startup Applied Brain Research Raises Seed Funding to Develop On-Device Voice AI

Why investors are backing Applied Brain Research’s on-device voice AI approach.

Updated January 28, 2026 5:53 PM

Plastic model of a human's brain. PHOTO: UNSPLASH

Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in on-device voice AI. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.

The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.

ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.

ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.

Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.

This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices, such as wearables or mobile robots, cannot support traditional voice AI systems without compromising battery life or responsiveness. According to the company, the TSP1 can run full speech-to-text and text-to-speech at under 30 milliwatts, roughly 10 to 100 times less power than many existing alternatives. That level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.

That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.

For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.

With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.