We bring you concise, up-to-the-minute coverage of the founders, funding rounds, and technologies shaping tomorrow. Expect clear explainers, deal roundups, and stories that cut through the noise, so you can spot the next big move in tech, fast.
Inside the funding round driving the shift to intelligent construction fleets
Bedrock Robotics has raised US$270 million in Series B funding as it works to integrate greater automation into the construction industry. The round, co-led by CapitalG and the Valor Atreides AI Fund, values the San Francisco-based company at US$1.75 billion, bringing its total funding to more than US$350 million.
The size of the investment reflects growing interest in technologies that can change how large infrastructure and industrial projects are built. Bedrock is not trying to reinvent construction from scratch. Instead, it is focused on upgrading the machines contractors already use—so they can work more efficiently, safely and consistently.
Founded in 2024 by former Waymo engineers, Bedrock develops systems that allow heavy equipment to operate with increasing levels of autonomy. Its software and hardware can be retrofitted onto machines such as excavators, bulldozers and loaders. Rather than relying on one-off robotic tools, the company is building a connected platform that lets fleets of machines understand their surroundings and coordinate with one another on job sites.
This is what Bedrock calls “system-level autonomy”. Its technology combines cameras, lidar and AI models to help machines perceive terrain, detect obstacles, track work progress and carry out tasks like digging and grading with precision. Human supervisors remain in control, monitoring operations and stepping in when needed. Over time, Bedrock aims to reduce the amount of direct intervention those machines require.
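The supervisory pattern described above can be made concrete with a small sketch. This is illustrative only, not Bedrock's actual control software: the machine name, task, and pause-and-resume logic are invented to show the general shape of "machine works autonomously, stops on an obstacle, human clears it".

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    """Toy model of one retrofitted machine under human supervision."""
    name: str
    task: str
    paused: bool = False
    log: list = field(default_factory=list)

    def step(self, obstacle_detected: bool) -> str:
        # If perception flags an obstacle, stop and wait for a human decision.
        if obstacle_detected:
            self.paused = True
            self.log.append("paused: obstacle")
            return "awaiting_supervisor"
        if self.paused:
            return "awaiting_supervisor"
        self.log.append(f"working: {self.task}")
        return "working"

    def supervisor_clear(self) -> None:
        # A human supervisor reviews the scene and resumes operation.
        self.paused = False
        self.log.append("resumed by supervisor")

excavator = Machine("EX-01", "digging")
print(excavator.step(obstacle_detected=False))  # working
print(excavator.step(obstacle_detected=True))   # awaiting_supervisor
excavator.supervisor_clear()
print(excavator.step(obstacle_detected=False))  # working
```

The design point is the escalation path: the machine never overrides a pause on its own, which mirrors the article's claim that humans remain in control and step in when needed.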
The funding comes as contractors face rising pressure to deliver projects faster and with fewer available workers. In the press release, Bedrock notes that the industry needs nearly 800,000 additional workers over the next two years and that project backlogs have grown to more than eight months. These constraints are pushing firms to explore new ways to keep sites productive without compromising safety or quality.
Bedrock argues that autonomy can help address those challenges, not by removing people from the equation but by allowing crews to supervise more equipment at once and reduce idle time. If machines can operate longer, with better awareness of their environment, sites can run more smoothly and with fewer disruptions.
The company has already started deploying its system in large-scale excavation work, including manufacturing and infrastructure projects. Contractors are using Bedrock’s platform to test how autonomous equipment can support real-world operations at scale, particularly in earthmoving tasks that demand precision and consistency.
From a business standpoint, the Series B funding will allow Bedrock to expand both its technology and its customer deployments. The company has also strengthened its leadership team with senior hires from Meta and Waymo, deepening its focus on AI evaluation, safety and operational growth. Bedrock says it is targeting its first fully operator-less excavator deployments with customers in 2026—a milestone for autonomy in complex construction equipment.
In that context, this round is not just about capital. It is about giving Bedrock the runway to prove that autonomous systems can move from controlled pilots into everyday use on job sites. The company bets that the future of construction will be shaped less by individual machines—and more by coordinated, intelligent systems that work alongside human crews.
From plush figures to digital pets, a new class of AI toys is emerging — built not around screens or sensors, but around memory, language and emotional awareness
Spielwarenmesse in Nuremberg is the global meeting point for the toy industry, where brands and designers preview what will shape how children play and learn next. At this year’s fair, one message stood out clearly: toys are no longer built just to entertain, but to listen, respond and grow with children. Tuya Smart, a global AI cloud platform company, used the event to show how AI-powered toys are turning familiar formats into interactive companions that can talk, react emotionally and adapt over time.
The company’s central argument was simple but far-reaching. The next generation of artificial intelligence toys will not be defined by motors, sensors or screens alone, but by how well they understand human behavior. Instead of being single-function objects, smart toys for children are becoming systems that combine language models, emotion recognition and memory to support ongoing interaction.
One of the most talked-about examples was Tuya Smart’s Nebula Plush AI Toy. At first glance, it looks like a soft, expressive plush figure. Inside, it uses emotional recognition to change its LED facial expressions in real time. If a child sounds sad or excited, the toy’s eyes respond visually. It supports natural conversation, reacts to hugs and touch and combines storytelling, news-style updates and interactive games. Over time, it builds memory, allowing it to behave less like a gadget and more like an interactive AI toy that recalls past interactions.
Another example was Walulu, also developed using Tuya’s AI toy platform. Walulu is an AI pet built around personalization. It can detect up to 19 emotional states and speak more than 60 languages. It connects to major large language models such as ChatGPT, Gemini, DeepSeek, Qwen and Doubao. Through simple app-based controls, users choose traits like cheerful, quiet, curious or thoughtful. Those choices shape how Walulu talks and reacts. Instead of repeating scripts, it adjusts its tone and behavior over time. The result is not a novelty item, but an emotionally responsive AI toy that feels consistent in daily use.
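The trait mechanism can be sketched in a few lines. The trait names (cheerful, quiet, curious, thoughtful) come from the article; everything else here is an assumption about how app-selected traits might be wired into a language-model prompt, not Tuya's actual API.

```python
# Hypothetical mapping from app-selected traits to response style.
TRAIT_STYLES = {
    "cheerful":   {"tone": "upbeat",      "max_words": 25},
    "quiet":      {"tone": "gentle",      "max_words": 12},
    "curious":    {"tone": "inquisitive", "max_words": 20},
    "thoughtful": {"tone": "reflective",  "max_words": 30},
}

def build_system_prompt(traits: list[str]) -> str:
    """Combine chosen traits into one instruction for the toy's LLM backend."""
    styles = [TRAIT_STYLES[t] for t in traits if t in TRAIT_STYLES]
    tone = ", ".join(s["tone"] for s in styles) or "neutral"
    limit = min((s["max_words"] for s in styles), default=40)
    return (f"You are a child's companion toy. Speak in a {tone} tone, "
            f"keep replies under {limit} words, and remember past chats.")

print(build_system_prompt(["quiet", "thoughtful"]))
```

Because the traits shape a persistent instruction rather than a fixed script, the same question can get a different answer from a "quiet" Walulu than from a "cheerful" one, which is the consistency-without-scripts behaviour the article describes.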
Tuya also showed how educational AI toys can extend into learning and exploration. Its AI Learning Camera blends computer vision with interactive content. When it recognizes an object, it links it to cultural and learning material. If a child points it at a foreign word, it offers real-time pronunciation and translation. It can also turn drawings into digital artwork, encouraging active creativity rather than passive screen time. In this sense, AI toys for kids are becoming tools for learning as much as play.
These products point to a larger strategy. Tuya is not just making toys — it is building the AI toy development platform behind them. Through its AI Toy Solution, developers can design a toy’s personality, memory logic and behavior without training models from scratch. The system integrates with leading AI models and supports multi-turn conversation and emotional feedback, turning standard hardware into responsive AI companions.
The platform supports multiple development paths. Brands can use ready-to-market OEM solutions, add AI to existing products or build custom toys around their own characters. Plush toys, robots, educational tools and wearables can all become AI-powered toys without changing their physical design.
Because these products are made for children and families, safety is built in. Tuya’s system includes parental controls, conversation history review and content management, and supports compliance with privacy regulations such as GDPR and CCPA through encryption and data localization.
From a business standpoint, Tuya’s pitch is speed and scale. The company says its AI toy infrastructure can cut development time by more than half and reduce R&D costs by up to 50 percent. Its AIoT network spans over 200 countries and supports more than 60 languages, making global deployment of AI toys easier.
What emerged at Spielwarenmesse 2026 was not just a lineup of smart gadgets, but a clear shift in the category. AI toys are evolving into emotionally aware systems that talk, listen, remember and adapt. Their value lies not in sounding clever, but in fitting naturally into everyday life.
The fair did not present AI toys as a distant future. It showed them as products already entering the mainstream. The real question now is not whether toys will use AI, but how carefully that intelligence is designed for children.
A closer look at PMMI’s FastTrack initiative and why it matters for growing manufacturing firms
Large trade shows are built for scale. But for small and medium-sized manufacturers, that scale often creates distance between what’s on display and what they can actually use. Too many options, too little time, and very few tools designed for companies that are still growing. That mismatch is what PMMI is trying to correct with its new SMB FastTrack Program, launching at PACK EXPO East 2026 in Philadelphia.
PMMI — the Association for Packaging and Processing Technologies — is the industry body behind the PACK EXPO trade shows and a central organization in the global packaging and processing sector. Through FastTrack, it has created a program (not an app or a product) designed to help small and mid-sized companies navigate the show more efficiently and connect with solutions that fit their scale.
The idea behind SMB FastTrack is simple: reduce friction. Instead of asking smaller firms to sort through hundreds of exhibitors and sessions on their own, the program curates what is most relevant to them. Exhibitors that offer flexible pricing, right-sized machinery, or SMB-focused services are clearly identified with visual icons in both the online directory and on the show floor. That way, a small manufacturer can quickly distinguish between enterprise-only vendors and partners that are realistically accessible.
The same logic carries into education. Rather than treating all attendees the same, PACK EXPO East 2026 will include a learning track specifically built around SMB realities. These sessions focus on issues that smaller teams actually face—how to hire and train workers, use AI without over-investing, improve food safety, cut operating costs, and adopt technology in stages. The goal is not inspiration, but applicability: content that reflects real constraints, not ideal scenarios.
Planning, too, is built into the structure of the program. Through a dedicated FastTrack landing page, participants can access curated supplier lists, recommended sessions, and planning tools that help organize their time before they ever step onto the show floor. Tools like category search and sustainability finders are meant to narrow choices quickly, turning a massive event into something manageable.
Seen together, these elements point to a broader intention. PMMI is not simply adding features—it is reshaping how smaller manufacturers experience a major industry event. Instead of competing for attention in a space built for scale, SMBs are given clearer paths to the people, tools, and knowledge that match where they actually are in their growth cycle.
What makes SMB FastTrack notable is not the technology behind it, but the intention behind it. PMMI is recognizing that progress for small and mid-sized manufacturers depends less on spectacle and more on fit—solutions that are accessible, affordable, and adaptable. The program is designed to help companies move with purpose, not pressure.
In an industry where visibility often follows size, SMB FastTrack represents a structural shift. It treats small and medium-sized manufacturers not as a subset of the audience, but as a distinct group with distinct needs. By doing so, PMMI is quietly redefining what a trade show can be: not just a marketplace of innovation, but a usable platform for companies still building their next stage of growth.
A turbine-inspired generator shows how overlooked industrial airflow could quietly become a new source of usable power
Compressed air is used across factories, data centers and industrial plants to move materials, cool systems and power tools. Once it has done that job, the air is usually released — and its remaining energy goes unused.
That everyday waste is what caught the attention of a research team at Chung-Ang University in South Korea. They are investigating how this overlooked airflow can be harnessed to generate electricity instead of disappearing into the background.
Most of the world’s power today comes from systems like turbines, which turn moving fluids into energy, or solar cells, which convert sunlight into electricity. The Chung-Ang team has built a device that uses compressed air to generate electricity without relying on traditional blades or sunlight.
At the center of the work is a simple question: what happens when high-pressure air spins through a specially shaped device at very high speed? The answer lies in the air itself. The researchers found that tiny particles naturally present in the air carry an electric charge. When that air moves rapidly across certain surfaces, it can transfer charge without physical contact. This creates electricity through a process known as the “particulate static effect.”
To use that effect, the team designed a generator based on a Tesla turbine. Unlike conventional turbines with blades, a Tesla turbine uses smooth rotating disks and relies on the viscosity of air to create motion. Compressed air enters the device, spins the disks at high speed and triggers charge buildup on specially layered surfaces inside.
What makes this approach different is that the system does not depend on friction between parts rubbing together. Instead, the charge comes from particles in the air interacting with the surfaces as they move past. This reduces wear and allows the generator to operate at very high speeds. And those speeds translate into real output.
In lab tests, the device produced strong electrical power. The researchers also showed that this energy could be used in practical ways. It ran small electronic devices, helped pull moisture from the air and removed dust particles from its surroundings.
The problem this research is addressing is straightforward: compressed air is already everywhere in industry, but its leftover energy is usually ignored. This system is designed to capture part of that unused motion and convert it into electricity without adding complex equipment or major safety risks.
Earlier methods of harvesting static electricity from particles showed promise, but they came with dangers. Uncontrolled discharge could cause sparks or even ignition. By using a sealed, turbine-based structure, the Chung-Ang University team offers a safer and more stable way to apply the same physical effect.
The technology is still in the research stage, but its direction is easy to see. It points toward a future where energy is not only generated in power plants or stored in batteries, but also recovered from everyday industrial processes.
The quiet infrastructure shift powering the next generation of data centers
Modern data centers operate on a simple yet fundamental principle: computers require the ability to share data extremely quickly. As AI and cloud systems grow, servers are no longer confined to a single rack. They are spread across many racks, sometimes across entire rooms. When that happens, moving data quickly and cleanly becomes harder.
Montage Technology, a Shanghai-based semiconductor company, builds the chips and connection systems that help servers exchange data without delays. This week, the company announced a new Active Electrical Cable (AEC) solution based on PCIe 6.x and CXL 3.x — two important standards used to connect CPUs, GPUs, network cards and storage inside modern data centers.
In simple terms, Montage’s new AEC product helps different parts of a data center “talk” to each other faster and more reliably, even when those parts are physically far apart.
As data centers grow to support AI and cloud workloads, their architecture is changing. Instead of everything sitting inside one rack, systems now stretch across multiple racks and even multiple rows. This creates a new problem: the longer the distance between machines, the harder it is to keep data signals clean and fast.
This is where Active Electrical Cables come in. Unlike regular copper cables, AECs include small electronic components inside the cable itself. These components strengthen and clean up the data signal as it travels, so information can move farther without getting distorted or delayed.
Montage’s solution uses its own retimer chip based on PCIe 6.x and CXL 3.x. A “retimer” refreshes the data signal so it arrives accurately at the other end. This allows servers, GPUs, storage devices and network cards to stay tightly connected even across longer distances inside large data centers.
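Why a mid-cable retimer helps can be shown with a toy signal-integrity model. This is illustrative only, not Montage's design: the loss constant and receiver threshold are invented, and real links budget loss in decibels rather than raw amplitude. The shape of the argument is what matters: loss compounds with distance, and regenerating the signal partway resets that budget.

```python
import math

ALPHA = 0.5      # assumed signal loss per meter (hypothetical)
THRESHOLD = 0.1  # minimum amplitude a receiver can decode (hypothetical)

def amplitude(length_m: float, start: float = 1.0) -> float:
    """Signal amplitude after traveling length_m meters of cable."""
    return start * math.exp(-ALPHA * length_m)

def link_ok(length_m: float, retimer_at: float = None) -> bool:
    """Can the receiver still decode the signal at this cable length?"""
    if retimer_at is None:
        return amplitude(length_m) >= THRESHOLD
    # A retimer recovers the bits mid-cable and retransmits at full
    # amplitude, so each half of the link only has to survive its own span.
    return (amplitude(retimer_at) >= THRESHOLD
            and amplitude(length_m - retimer_at) >= THRESHOLD)

print(link_ok(7.0))                  # passive cable: signal too weak
print(link_ok(7.0, retimer_at=3.5))  # retimed cable: both halves decode
```

This is the core trade an Active Electrical Cable makes: a little electronics inside the cable in exchange for reach that passive copper cannot deliver at these data rates.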
The company also uses high-density cable designs and built-in monitoring tools so operators can track performance and fix issues faster. That makes large data centers easier to deploy and maintain.
According to Montage, the solution has already passed interoperability tests with CPUs, xPUs, PCIe switches and network cards. It has also been jointly developed with cable manufacturers in China and validated at the system level.
What makes this development important is not just speed. It is about scale. AI models, cloud services and real-time applications demand massive amounts of data to move continuously between machines. If that movement slows down, everything else slows with it.
By improving how machines connect across racks, Montage’s AEC solution supports the kind of infrastructure that next-generation AI and cloud systems depend on.
Looking ahead, the company plans to expand its high-speed interconnect products further, including work on PCIe 7.0 and Ethernet retimer technologies.
Quietly, in the background of every AI system and cloud service, there is a network of cables and chips doing the hard work of moving data. Montage’s latest launch focuses on making that hidden layer faster, cleaner and ready for the scale that modern computing now demands.
Inside Mercuryo’s Visa Partnership
Mercuryo is a fintech startup that builds the infrastructure to enable money to move seamlessly between crypto and traditional banking systems. In simple terms, it works on the problem of turning digital assets into usable cash.
As more people hold crypto through wallets and exchanges, one practical issue keeps arising: how do you actually withdraw that money and use it in the real world? For many users, converting tokens into local currency is still slow, confusing or expensive. That gap between “owning” crypto and being able to spend it is where Mercuryo operates.
The company’s latest step forward is a partnership with Visa to improve what is known as “off-ramping” — the process of converting crypto into fiat currency like dollars or euros. Until now, this has often been slow, expensive and confusing for users. Mercuryo is using Visa Direct, Visa’s real-time payments system, to make that process faster and more direct.
With this integration, users can convert their digital tokens into local currency and send the money straight to a Visa debit or credit card. The transaction happens through systems that already power global card payments, which means the money can arrive in near real time instead of days later.
Technically, this connects two very different worlds. On one side is blockchain-based crypto, which moves value on decentralised networks. On the other side is the traditional payment system, which runs on banks, cards and regulated rails. Mercuryo’s platform sits between the two and handles the conversion and movement of funds.
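The sequence of steps in that middle layer can be sketched as follows. Every name here is hypothetical, since neither Mercuryo's nor Visa's actual APIs are described in the article; the point is the order of operations: quote a rate, convert the tokens, deduct fees, then push fiat onto a card rail.

```python
from dataclasses import dataclass

@dataclass
class OffRampResult:
    fiat_amount: float
    currency: str
    status: str

def off_ramp(token_amount: float, rate: float, fee_pct: float,
             currency: str = "EUR") -> OffRampResult:
    """Convert tokens to fiat and hand off to a card-push rail (sketch)."""
    gross = token_amount * rate
    net = round(gross * (1 - fee_pct), 2)
    # In a real integration this final step would call a real-time
    # card-push rail such as Visa Direct; here we just mark it sent.
    return OffRampResult(net, currency, "sent_to_card")

result = off_ramp(token_amount=0.5, rate=2000.0, fee_pct=0.01)
print(result)  # OffRampResult(fiat_amount=990.0, currency='EUR', status='sent_to_card')
```

What makes the real version "near real time" is the last step: pushing to an existing card rail skips the batch settlement that makes traditional bank transfers take days.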
Instead of users leaving their wallet or exchange to cash out, Mercuryo allows the conversion to happen inside the apps and platforms they already use. The user does not need to understand the plumbing behind it. They just see that crypto becomes spendable money on their card.
This matters because access is what makes any financial system usable. If people cannot easily move their money, they treat it as locked or risky. Faster off-ramps make digital assets more practical, not just speculative.
Mercuryo’s work is not about creating new tokens or trading tools. It is about building the pipes that let money move smoothly between Web3 and the traditional financial world. The Visa partnership strengthens those pipes by using a global, trusted payments network that already works at scale.
Visa also framed the partnership as a bridge between systems. Anastasia Serikova, Head of Visa Direct, Europe, said: "By leveraging Visa Direct's capabilities, Mercuryo is not only making converting to fiat faster, simpler and more accessible than ever—it's building bridges between the crypto space and the traditional financial system. This integration empowers users to seamlessly convert digital assets into fiat in near real time, creating a more connected and convenient payment experience".
Over time, this kind of infrastructure is what determines whether crypto remains niche or becomes part of everyday finance. Not through headlines, but through systems that quietly reduce friction.
Mercuryo’s direction is clear: make digital assets easier to use, easier to exit and easier to connect to the money systems people already rely on.
With Phia’s AI, the new luxury is knowing what’s worth buying
AI has transformed how we shop—predicting trends, powering virtual try-ons and streamlining fashion logistics. Yet some of the biggest pain points remain: endless scrolling, too many tabs and never knowing if you’ve overpaid. That’s the gap Phia aims to close.
Co-founded by Phoebe Gates, daughter of Bill Gates, and climate activist Sophia Kianni, Phia was born in a Stanford dorm room and launched in April 2025. The app, available on mobile and as a browser extension, compares prices across over 40,000 retailers and thrift platforms to show what an item really costs. Its hallmark feature, “Should I Buy This?”, instantly flags whether something is overpriced, fair or a genuine deal.
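The verdict logic behind a feature like "Should I Buy This?" can be sketched simply. Phia's actual model is not public, so the thresholds and the use of a median are assumptions; the underlying idea is to compare one listing against prices for the same item across many retailers and bucket the result.

```python
from statistics import median

def should_i_buy_this(listed: float, market_prices: list[float]) -> str:
    """Classify a listing against the market (hypothetical thresholds)."""
    mid = median(market_prices)
    if listed <= mid * 0.85:   # well under the typical price
        return "deal"
    if listed <= mid * 1.10:   # within a normal band
        return "fair"
    return "overpriced"

prices = [120.0, 129.0, 135.0, 140.0, 199.0]
print(should_i_buy_this(110.0, prices))  # deal
print(should_i_buy_this(139.0, prices))  # fair
print(should_i_buy_this(180.0, prices))  # overpriced
```

Using a median rather than a mean keeps one outlier listing (like the 199.0 above) from skewing the verdict, which matters when scraping tens of thousands of retailers of varying quality.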
The mission is simple: make shopping smarter, fairer and more sustainable. In just five months, Phia has attracted more than 500,000 users, indexed billions of products and built over 5,000 brand partnerships. It also secured a US$8 million seed round led by Kleiner Perkins, joined by Hailey Bieber, Kris Jenner, Sara Blakely and Sheryl Sandberg—investors who bridge tech, retail and culture. “Phia is redefining how people make purchase decisions,” said Annie Case, partner at Kleiner Perkins.
Phia’s AI engine scans real-time data from more than 250 million products across its network, including Vestiaire Collective, StockX, eBay and Poshmark. Beyond comparing prices, the app helps users discover cheaper or more sustainable options by displaying pre-owned items next to new ones—helping users see the full spectrum of choices before they buy. It also evaluates how different brands perform over time, analysing how well their products hold resale value. This insight helps shoppers judge whether a purchase is likely to last in value or if opting for a second-hand version makes more sense. The result is a platform that naturally encourages circular shopping—keeping items in use longer through resale, repair or recycling—and resonates strongly with Gen Z and millennial values of sustainability and mindful spending.
By encouraging transparency and smarter choices, Phia signals a broader shift in consumer technology: one where AI doesn’t just automate decisions but empowers users to understand them. Instead of merely digitizing the act of shopping, Phia embodies data-driven accountability—using intelligent search to help consumers make informed and ethical choices in markets long clouded by complexity. Retail analysts believe this level of visibility could push brands to maintain accurate and competitive pricing. Skeptics, however, argue that Phia must evolve beyond comparison to create emotional connection and loyalty. Still, one fact stands out: algorithms are no longer just recommending what we buy—they’re rewriting how we decide.
With new funding powering GPU expansion and advanced personalization tools, Phia’s next step is to build a true AI shopping agent—one that helps people buy better, live smarter and rethink what it means to shop with purpose.
Where Hollywood magic meets AI intelligence — Hong Kong becomes the new stage for virtual humans
In an era where pixels and intelligence converge, few companies bridge art and science as seamlessly as Digital Domain. Founded three decades ago by visionary filmmaker James Cameron, the company built its name through cinematic wizardry—bringing to life the impossible worlds of Titanic, The Curious Case of Benjamin Button and the Marvel universe. But today, its focus has evolved far beyond Hollywood: Digital Domain is reimagining the future of AI-driven virtual humans—and it’s doing so from right here in Hong Kong.
“AI and visual technology are merging faster than anyone imagined,” says William Wong, Chairman and CEO of Digital Domain. “For us, the question is not whether AI will reshape entertainment—it already has. The question is how we can extend that power into everyday life.”
Though globally recognized for its work on blockbuster films and AAA games, Digital Domain’s story is also deeply connected to Asia. A Hong Kong–listed company, it operates a network of production and research centers across North America, China and India. In 2024, it announced a major milestone—setting up a new R&D hub at Hong Kong Science Park focused on advancing artificial intelligence and virtual human technologies. “Our roots are in visual storytelling, but AI is unlocking a new frontier,” Wong says. “Hong Kong has been very proactive in promoting innovation and research, and with the right partnerships, we see real potential to make this a global R&D base.”
Building on that commitment, the company plans to invest about HK$200 million over five years, assembling a team of more than 40 professionals specializing in computer vision, machine learning and digital production. The team is still growing. “Talent is everything,” says Wong. “We want to grow local expertise while bringing in global experience to accelerate the learning curve.”


Digital Domain’s latest chapter revolves around one of AI’s most fascinating frontiers: the creation of virtual humans.
These are hyperrealistic, AI-powered virtual humans capable of speaking, moving and responding in real time. Using the advanced motion-capture and rendering techniques that transformed Hollywood visual effects, the company now builds digital personalities that appear on screens and in physical environments—serving in media, education, retail and even public services.
One of its most visible projects is “Aida”, the AI-powered presenter who delivers nightly weather reports on Radio Television Hong Kong (RTHK). Another initiative, now in testing, will soon feature AI-powered concierges greeting travelers at airports, able to communicate in multiple languages and provide real-time personalized services. Similar collaborations are under way in healthcare, customer service and education.
“What’s exciting,” says Wong, “is that our technologies amplify human capability, helping to deliver better experiences, greater efficiency and higher capacity. AI-powered virtual humans can interact naturally, emotionally and in any language. They can help scale creativity and service, not replace it.”
To make that possible, Digital Domain has designed its system for compatibility and flexibility. It can connect to major AI models—from OpenAI and Google to Baidu—and operate across cloud platforms like AWS, Alibaba Cloud and Microsoft Azure. “It’s about openness,” says Wong. “Our clients can choose the AI brain that best fits their business.”
Establishing a permanent R&D base in Hong Kong marks a turning point for the company—and, in a broader sense, for the city’s technology ecosystem. With the support of the Office for Attracting Strategic Enterprises (OASES) in Hong Kong, Digital Domain hopes to make the city a creative hub where AI meets visual arts. “Hong Kong is the perfect meeting point,” Wong says. “It combines international exposure with a growing innovation ecosystem. We want to make it a hub for creative AI.”
As part of this effort, the company is also collaborating with universities such as the University of Hong Kong, City University of Hong Kong and Hong Kong Baptist University to co-develop new AI solutions and nurture the next generation of engineers. “The goal,” Wong notes, “is not just R&D for the sake of research—but R&D that translates into real-world impact.”

The collaboration with OASES underscores how both the company and the city share a vision for innovation-led growth. As Peter Yan King-shun, Director-General of OASES, notes, the initiative reflects Hong Kong’s growing strength as a global innovation and technology hub. “OASES was set up to attract high-potential enterprises from around the world across key sectors such as AI, data science, and cultural and creative technology,” he says. “Digital Domain’s new R&D center is a strong example of how Hong Kong can combine world-class talent, technology and creativity to drive innovation and global competitiveness.”
Digital Domain’s story mirrors the evolution of Hong Kong’s own innovation landscape—where creativity, technology and global ambition converge. From the big screen to the next generation of intelligent avatars, the company continues to prove that imagination is not bound by borders, but powered by the courage to reinvent what’s possible.
A look at how motivation, not metrics, is becoming the real frontier in fitness tech
Most running apps focus on measurement. Distance, pace, heart rate, badges. They record activity well, but struggle to help users maintain consistency over time. As a result, many people track diligently at first, then gradually disengage.
That drop-off has pushed developers to rethink what fitness technology is actually for. Instead of just documenting activity, some platforms are now trying to influence behaviour itself. Paceful, an AI-powered running platform developed by SportsTech startup xCREW, is part of that shift — not by adding more metrics, but by focusing on how people stay consistent. The platform is built on a simple behavioural insight: most people don’t stop exercising because they don’t care about health. They stop because routines are fragile. Miss a few days and the habit collapses. Technology that focuses only on performance metrics doesn’t solve that. Systems that reinforce consistency, belonging and feedback loops might.
Instead of treating running as a solo, data-driven task, Paceful is built around two ideas: behavioural incentives and social alignment. The system turns real-world running activity into tangible rewards and it uses AI to connect runners to people, clubs and challenges that fit how and where they actually run.
At the technical level, Paceful connects with existing fitness ecosystems. Users can import workout data from platforms like Apple Health and Strava rather than starting from scratch. Once inside the system, AI models analyse pace, frequency, location and participation patterns. That data is used to recommend running partners, clubs and group challenges that match each runner’s habits and context.
What makes this approach different is not the tracking itself, but what the platform does with the data it collects. Running distance and consistency become inputs for a reward system that offers physical-world incentives, such as gear, race entries or gift cards. The idea is to link effort to something concrete, rather than abstract. The company also built the system around community logic rather than individual competition. Even solo runners are placed into challenge formats designed to simulate the motivation of a group. In practice, that means users feel part of a shared structure even when running alone.
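The effort-to-reward mapping described above can be sketched in a few lines. Paceful's actual formula is not published, so the point values and the streak bonus here are invented; what the sketch shows is the behavioural design, where consistency multiplies the value of each run rather than just distance being counted.

```python
def reward_points(distance_km: float, streak_days: int) -> int:
    """Convert one run into reward points (hypothetical formula)."""
    base = int(distance_km * 10)               # assumed points per km
    bonus = 1.0 + min(streak_days, 14) * 0.05  # consistency bonus, capped
    return int(base * bonus)

print(reward_points(5.0, streak_days=0))   # 50
print(reward_points(5.0, streak_days=7))   # 67
print(reward_points(5.0, streak_days=30))  # 85
```

Capping the streak bonus is a deliberate choice in this sketch: it rewards the fragile early weeks of a habit, when the article says routines most often collapse, without letting long streaks dominate.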
During a six-month beta phase in the US, xCREW tested Paceful with more than 4,000 running clubs and around 50,000 runners. According to the company, users increased their running frequency significantly and weekly retention remained unusually high for a fitness platform. One beta tester summed it up this way: “Strava just logs records, but Paceful rewards you for every run, which is a completely different motivation”.
The company has raised seed funding and plans to expand the platform beyond running to walking, trekking, cycling and swimming. Instead of asking how accurately technology can measure the body, platforms like Paceful are asking a different question: how technology might influence everyday behaviour. Not by adding more data, but by shaping the conditions around effort, feedback and social connection.
As AI becomes more common in consumer products, its real impact may depend less on how advanced the models are and more on what they are applied to. In this case, the focus isn’t speed or performance — it’s consistency. And whether systems like this can meaningfully support it over time.
A closer look at how reading, conversation, and AI are being combined
In the past, “educational toys” usually meant flashcards, prerecorded stories or apps that asked children to tap a screen. ChooChoo takes a different approach. It is designed not to talk at children, but to talk with them.
ChooChoo is an AI-powered interactive reading companion built for children aged three to six. Instead of playing stories passively, it engages kids in conversation while reading. It asks questions, reacts to answers, introduces new words in context and adjusts the story flow based on how the child responds. The goal is not entertainment alone, but language development through dialogue.
That idea is rooted in research, not novelty. ChooChoo is inspired by dialogic reading methods from Yale’s early childhood language development work, which show that children learn language faster when stories become two-way conversations rather than one-way narration. Used consistently, this approach has been shown to improve vocabulary, comprehension and confidence within weeks.
The project was created by Dr. Diana Zhu, who holds a PhD from Yale and focused her work on how children acquire language. Her aim with ChooChoo was to turn academic insight into something practical and warm enough to live in a child’s room. The result is a device that listens, responds and adapts instead of simply playing content on command.
What makes this possible is not just AI, but where that AI runs.
Unlike many smart toys that rely heavily on the cloud, ChooChoo is built on RiseLink’s edge AI platform. That means much of the intelligence happens directly on the device itself rather than being sent back and forth to remote servers. This design choice has three major implications.
First, it reduces delay. Conversations feel natural because the toy can respond almost instantly. Second, it lowers power consumption, allowing the device to stay “always on” without draining the battery quickly. Third, it improves privacy. Sensitive interactions are processed locally instead of being continuously streamed online.
RiseLink’s hardware, including its ultra-low-power AI system-on-chip designs, is already used at large scale in consumer electronics. The company ships hundreds of millions of connected chips every year and works with global brands like LG, Samsung, Midea and Hisense. In ChooChoo’s case, that same industrial-grade reliability is being applied to a child’s learning environment.
The result is a toy that behaves less like a gadget and more like a conversational partner. It engages children in back-and-forth discussion during stories, introduces new vocabulary in natural context, pays attention to comprehension and emotional language, and adjusts its pace and tone based on each child’s interests and progress. Parents can also view progress through an optional app that shows which words their child has learned and how the system is adapting over time.
What matters here is not that ChooChoo is “smart,” but that it reflects a shift in how technology enters early education. Instead of replacing teachers or parents, tools like this are designed to support human interaction by modeling it. The emphasis is on listening, responding and encouraging curiosity rather than testing or drilling.
That same philosophy is starting to shape the future of companion robots more broadly. As edge AI improves and hardware becomes smaller and more energy efficient, we are likely to see more devices that live alongside people instead of in front of them. Not just toys, but helpers, tutors and assistants that operate quietly in the background, responding when needed and staying out of the way when not.
In that sense, ChooChoo is less about novelty and more about direction. It shows what happens when AI is designed not for spectacle, but for presence. Not for control, but for conversation.
If companion robots become part of daily life in the coming years, their success may depend less on how powerful they are and more on how well they understand when to speak, when to listen and how to grow with the people who use them.