We bring you concise, up-to-the-minute coverage of the founders, funding rounds, and technologies shaping tomorrow. Expect clear explainers, deal roundups, and stories that cut through the noise—so you can spot the next big move in tech, fast.
Examining the shift from fast answers to verified intelligence in enterprise AI.
Neuron7.ai, a company that builds AI systems to help service teams resolve technical issues faster, has launched Neuro, a new kind of AI agent built for environments where accuracy matters more than speed. From manufacturing floors to hospital equipment rooms, Neuro is designed for situations where a wrong answer can halt operations.
What sets Neuro apart is its focus on reliability. Instead of relying solely on large language models that often produce confident but inaccurate responses, Neuro combines deterministic AI — which draws on verified, trusted data — with autonomous reasoning for more complex cases. This hybrid design helps the system provide context-aware resolutions without inventing answers or “hallucinating”, a common issue that has made many enterprises cautious about adopting agentic AI.
“Enterprise adoption of agentic AI has stalled despite massive vendor investment. Gartner predicts 40% of projects will be canceled by 2027 due to reliability concerns”, said Niken Patel, CEO and Co-Founder of Neuron7. “The root cause is hallucinations. In service operations, outcomes are binary. An issue is either resolved or it is not. Probabilistic AI that is right only 70% of the time fails 30% of your customers and that failure rate is unacceptable for mission-critical service”.
That concern shaped how Neuro was built. “We use deterministic guided fixes for known issues. No guessing, no hallucinations — and reserve autonomous AI reasoning for complex scenarios. What sets Neuro apart is knowing which mode to use. While competitors race to make agents more autonomous, we're focused on making service resolution more accurate and trusted”, Patel explained.
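The dual-mode idea Patel describes can be sketched in a few lines. This is a purely illustrative sketch, not Neuron7's actual code: the issue identifiers, fix steps and routing logic are all invented, and the point is only the shape of the decision — a deterministic lookup for known issues, with model-based reasoning reserved as the fallback.

```python
# Hypothetical sketch of a dual-mode resolver: verified, deterministic
# fixes for known issues; autonomous reasoning only for the unknown.
# All names and data are illustrative, not Neuron7's implementation.

KNOWN_FIXES = {
    "error_e42": [
        "Power-cycle the unit",
        "Re-seat the sensor cable",
        "Run the built-in self-test",
    ],
}

def resolve(issue_id, reason_fn):
    """Return (mode, steps). Known issues get verified fixes from the
    knowledge base; anything else is delegated to a reasoning function."""
    if issue_id in KNOWN_FIXES:
        return "deterministic", KNOWN_FIXES[issue_id]
    return "autonomous", reason_fn(issue_id)

mode, steps = resolve("error_e42", reason_fn=lambda i: [f"Investigate {i}"])
print(mode, steps)
```

The design choice mirrored here is that the hard part is not either mode in isolation but the routing between them: the system must know when its knowledge base genuinely covers an issue and when it does not.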
At the heart of Neuro is the Smart Resolution Hub, Neuron7’s central intelligence layer that consolidates service data, knowledge bases and troubleshooting workflows into one conversational experience. This means a technician can describe a problem — say, a diagnostic error in an MRI scanner — and Neuro can instantly generate a verified, step-by-step solution. If the problem hasn’t been encountered before, it can autonomously scan through thousands of internal and external data points to identify the most likely fix, all while maintaining traceability and compliance.
Neuro’s architecture also makes it practical for real-world use. It integrates seamlessly with enterprise systems such as Salesforce, Microsoft, ServiceNow and SAP, allowing companies to embed it within their existing support operations. Early users of Neuron7’s platform have reported measurable improvements — faster resolutions, higher customer satisfaction and reduced downtime — thanks to guided intelligence that scales expert-level problem solving across teams.
The timing of Neuro’s debut feels deliberate. As organizations look to move past the hype of generative AI, trust and accountability have become the new benchmarks. AI systems that can explain their reasoning and stay within verifiable boundaries are emerging as the next phase of enterprise adoption.
“The market has figured out how to build autonomous agents”, Patel said. “The unsolved problem is building accurate agents for contexts where errors have consequences. Neuro fills that gap”.
Neuron7 is building a system that knows its limits — one that reasons carefully, acts responsibly and earns trust where it matters most. In a space dominated by speculation, that discipline may well redefine what “intelligent” really means in enterprise AI.
From information gaps to global access — how AI is reshaping the pursuit of knowledge.
Encyclopaedias have always been mirrors of their time — from heavy leather-bound volumes in the 19th century to Wikipedia’s community-edited pages online. But as the world’s information multiplies faster than humans can catalogue it, even open platforms struggle to keep pace. Enter Botipedia, a new project from INSEAD, The Business School for the World, that reimagines how knowledge can be created, verified and shared using artificial intelligence.
At its core, Botipedia is powered by proprietary AI that automates the process of writing encyclopaedia entries. Instead of relying on volunteers or editors, it uses a system called Dynamic Multi-method Generation (DMG) — a method that combines hundreds of algorithms and curated datasets to produce high-quality, verifiable content. This AI doesn’t just summarise what already exists; it synthesises information from archives, satellite feeds and data libraries to generate original text grounded in facts.
What makes this innovation significant is the gap it fills in global access to knowledge. While Wikipedia hosts roughly 64 million entries across all its language editions, a language like Swahili has fewer than 40,000 articles — leaving most of the world’s population outside the circle of easily available online information. Botipedia aims to close that gap by generating over 400 billion entries across 100 languages, ensuring that no subject, event or region is overlooked.
"We are creating Botipedia to provide everyone with equal access to information, with no language left behind", says Phil Parker, INSEAD Chaired Professor of Management Science, creator of Botipedia and holder of one of the pioneering patents in the field of generative AI. "We focus on content grounded in data and sources with full provenance, allowing the user to see as many perspectives as possible, as opposed to one potentially biased source".
Unlike many generative AI tools that depend on large language models (LLMs), Botipedia adapts its methods based on the type of content. For instance, weather data is generated using geo-spatial techniques to cover every possible coordinate on Earth. This targeted, multi-method approach helps boost both the accuracy and reliability of what it produces — key challenges in today’s AI-driven content landscape.
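The "multi-method" routing described above can be pictured as a dispatch table keyed by content type. The sketch below is an assumption-laden illustration only — the method names, content types and generated text are invented, and Botipedia's actual DMG system is proprietary — but it shows the basic pattern of choosing a generation method per content type rather than sending everything through one model.

```python
# Illustrative dispatch of content types to generation methods.
# Everything here is hypothetical; DMG's real methods are not public.

def generate_weather(topic):
    # Stand-in for a geo-spatial method covering coordinates on Earth.
    return f"Weather profile for {topic}, derived from geo-spatial data."

def generate_biography(topic):
    # Stand-in for a structured method drawing on sourced records.
    return f"Biography of {topic}, assembled from archival records."

METHODS = {
    "weather": generate_weather,
    "biography": generate_biography,
}

def generate_entry(content_type, topic):
    """Route the request to the method suited to this content type."""
    method = METHODS.get(content_type)
    if method is None:
        raise ValueError(f"No generation method for {content_type!r}")
    return method(topic)

print(generate_entry("weather", "12.5N, 45.0E"))
```

The appeal of this shape is that each method can be tuned, sourced and audited independently, which is presumably how the system keeps provenance per entry.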
The innovation is also energy-efficient. Its DMG system operates at a fraction of the processing power required by GPU-heavy models like ChatGPT, making it a sustainable alternative for large-scale content generation.
By combining AI precision, linguistic inclusivity and academic credibility, Botipedia positions itself as more than a digital library — it’s a step toward universal, unbiased access to verified knowledge.
"Botipedia is one of many initiatives of the Human and Machine Intelligence Institute (HUMII) that we are establishing at INSEAD", says Lily Fang, Dean of Research and Innovation at INSEAD. "It is a practical application that builds on INSEAD-linked IP to help people make better decisions with knowledge powered by technology. We want technologies that enhance the quality and meaning of our work and life, to retain human agency and value in the age of intelligence".
By harnessing AI to bridge gaps of language, geography and credibility, Botipedia points to a future where access to knowledge is no longer a privilege, but a shared global resource.
A closer look at how machine intelligence is helping doctors see cancer in an entirely new light.
Artificial intelligence is beginning to change how scientists understand cancer at the cellular level. In a new collaboration, Bio-Techne Corporation, a global life sciences tools provider, and Nucleai, an AI company specializing in spatial biology for precision medicine, have unveiled data from the SECOMBIT clinical trial that could reshape how doctors predict cancer treatment outcomes. The results, presented at the Society for Immunotherapy of Cancer (SITC) 2025 Annual Meeting, highlight how AI-powered analysis of tumor environments can reveal which patients are more likely to benefit from specific therapies.
Led in collaboration with Professor Paolo Ascierto of the University of Napoli Federico II and Istituto Nazionale Tumori IRCCS Fondazione Pascale, the study explores how spatial biology — the science of mapping where and how cells interact within tissue — can uncover subtle immune behaviors linked to survival in melanoma patients.
Using Bio-Techne’s COMET platform and a 28-plex multiplex immunofluorescence panel, researchers analyzed 42 pre-treatment biopsies from patients with metastatic melanoma, an advanced stage of skin cancer. Nucleai’s multimodal AI platform integrated these imaging results with pathology and clinical data to trace patterns of immune cell interactions inside tumors.
The findings revealed that therapy sequencing significantly influences immune activity and patient outcomes. Patients who received targeted therapy followed by immunotherapy showed stronger immune activation, marked by higher levels of PD-L1+ CD8 T-cells and ICOS+ CD4 T-cells. Those who began with immunotherapy benefited most when PD-1+ CD8 T-cells engaged closely with PD-L1+ CD4 T-cells along the tumor’s invasive edge. Meanwhile, in patients alternating between targeted and immune treatments, beneficial antigen-presenting cell (APC) and T-cell interactions appeared near tumor margins, whereas macrophage activity in the outer tumor environment pointed to poorer prognosis.
“This study exemplifies how our innovative spatial imaging and analysis workflow can be applied broadly to clinical research to ultimately transform clinical decision-making in immuno-oncology”, said Matt McManus, President of the Diagnostics and Spatial Biology Segment at Bio-Techne.
The collaboration between the two companies underscores how AI and high-plex imaging together can help decode complex biological systems. As Avi Veidman, CEO of Nucleai, explained, “Our multimodal spatial operating system enables integration of high-plex imaging, data and clinical information to identify predictive biomarkers in clinical settings. This collaboration shows how precision medicine products can become more accurate, explainable and differentiated when powered by high-plex spatial proteomics – not limited by low-plex or H&E data alone”.
Dr. Ascierto described the SECOMBIT trial as “a milestone in demonstrating the possible predictive power of spatial biomarkers in patients enrolled in a clinical study”.
The study’s broader message is clear: understanding where immune cells are and how they interact inside a tumor could become just as important as knowing what they are. As AI continues to map these microscopic landscapes, oncology may move closer to genuinely personalized treatment — one patient, and one immune network, at a time.
The upgraded CodeFusion Studio 2.0 simplifies how developers design, test and deploy AI on embedded systems.
Analog Devices (ADI), a global semiconductor company, launched CodeFusion Studio™ 2.0 on November 3, 2025. The new version of its open-source development platform is designed to make it easier and faster for developers to build AI-powered embedded systems that run on ADI’s processors and microcontrollers.
“The next era of embedded intelligence requires removing friction from AI development”, said Rob Oshana, Senior Vice President of the Software and Digital Platforms group at ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market”.
The upgraded platform introduces new tools for hardware abstraction, AI integration and automation. These help developers move more easily from early design to deployment.
CodeFusion Studio 2.0 enables complete AI workflows, allowing teams to use their own models and deploy them on everything from low-power edge devices to advanced digital signal processors (DSPs).
Built on Microsoft Visual Studio Code, the new CodeFusion Studio offers built-in checks for model compatibility, along with performance testing and optimization tools that help reduce development time. Building on these capabilities, a new modular framework based on Zephyr OS lets developers test and monitor how AI and machine learning models perform in real time. This gives clearer insight into how each part of a model behaves during operation and helps fine-tune performance across different hardware setups.
The CodeFusion Studio System Planner has also been redesigned to handle more device types and complex, multi-core applications. With new built-in diagnostic and debugging features — like integrated memory analysis and visual error tracking — developers can now troubleshoot problems faster and keep their systems running more efficiently.
This launch marks a deeper pivot for ADI. Long known for high-precision analog chips and converters, the company is expanding its edge-AI and software capabilities to enable what it calls Physical Intelligence — systems that can perceive, reason, and act locally.
“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That's why we're creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board”, said Paul Golding, Vice President of Edge AI and Robotics at ADI. “CodeFusion Studio 2.0 is just one step we're taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally, all within the constraints of real-world physics”.
Robots that learn on the job: AgiBot tests reinforcement learning in real-world manufacturing.
Shanghai-based robotics firm AgiBot has taken a major step toward bringing artificial intelligence into real manufacturing. The company announced that its Real-World Reinforcement Learning (RW-RL) system has been successfully deployed on a pilot production line run in partnership with Longcheer Technology. It marks one of the first real applications of reinforcement learning in industrial robotics.
The project represents a key shift in factory automation. For years, precision manufacturing has relied on rigid setups: robots that need custom fixtures, intricate programming and long calibration cycles. Even newer systems combining vision and force control often struggle with slow deployment and complex maintenance. AgiBot’s system aims to change that by letting robots learn and adapt on the job, reducing the need for extensive tuning or manual reconfiguration.
The RW-RL setup allows a robot to pick up new tasks within minutes rather than weeks. Once trained, the system can automatically adjust to variations, such as changes in part placement or size tolerance, maintaining steady performance throughout long operations. When production lines switch models or products, only minor hardware tweaks are needed. This flexibility could significantly cut downtime and setup costs in industries where rapid product turnover is common.
The system’s main strengths lie in faster deployment, high adaptability and easier reconfiguration. In practice, robots can be retrained quickly for new tasks without needing new fixtures or tools — a long-standing obstacle in consumer electronics production. The platform also works reliably across different factory layouts, showing potential for broader use in complex or varied manufacturing environments.
Beyond its technical claims, the milestone demonstrates a deeper convergence between algorithmic intelligence and mechanical motion. Instead of being tested only in the lab, AgiBot’s system was tried in real factory settings, showing it can perform reliably outside research conditions.
This progress builds on years of reinforcement learning research, which has gradually pushed AI toward greater stability and real-world usability. AgiBot’s Chief Scientist Dr. Jianlan Luo and his team have been at the forefront of that effort, refining algorithms capable of reliable performance on physical machines. Their work now underpins a production-ready platform that blends adaptive learning with precision motion control — turning what was once a research goal into a working industrial solution.
Looking forward, the two companies plan to extend the approach to other manufacturing areas, including consumer electronics and automotive components. They also aim to develop modular robot systems that can integrate smoothly with existing production setups.
Reimagining biodefense at the intersection of AI, biology and urgency.
Valthos has raised US$30 million in seed funding, led by the OpenAI Startup Fund, Lux Capital and Founders Fund, to advance its mission of building next-generation biodefense systems.
The company’s work comes at a time when biotechnology is evolving at an unprecedented pace. The same tools that can lead to life-changing medical discoveries also bring the risk of dangerous biological agents being developed faster than ever.
“The issue at the core of biodefense is asymmetry”, said Kathleen McMahon, co-founder of Valthos. “It’s easier to make a pathogen than a cure. We’re building tools to help experts at the frontlines of biodefense move as fast as the threats they face”. The gap Valthos aims to close is between the rapid rise of biological threats and the slower pace of developing cures. Therefore, the company is developing AI systems that can rapidly analyze biological sequences and significantly shorten the time needed to design medical countermeasures.
“In this new world, the only way forward is to be faster. So we set out to build a new tech stack for biodefense”, said Tess van Stekelenburg, co-founder of Valthos. “This software infrastructure strengthens biodefense today and lays the groundwork for the adaptive, precision therapeutics of tomorrow”.
The company was founded by van Stekelenburg, a partner at Lux Capital, and McMahon, the former head of Palantir’s Life Sciences division. Together, they’ve built a multidisciplinary team of experts from Palantir, DeepMind, Stanford’s Arc Institute and MIT’s Broad Institute, bringing together deep experience in software engineering, machine learning and biotechnology.
“Technology is moving fast. An industrial ecosystem of builders, companies and solutions further democratizes AI to provide broad resilience, and ensures the U.S. continues to lead as AI increasingly powers everything around us. As AI and biotech rapidly advance, biodefense is one of the new industry verticals that helps maximize the benefits and minimize the risks”, said Jason Kwon, OpenAI’s Chief Strategy Officer. “Valthos is pushing the frontier of protection and defense in one of the most strategic intersections of multiple world-changing technologies, and with the team to do it”.
Looking ahead, Valthos plans to expand its engineering team and scale its software infrastructure for both government and commercial partners — moving closer to its goal of enabling faster, smarter and more adaptive biodefense capabilities.
Cyberport Venture Capital Forum (CVCF) 2025 Returns Under the Theme "The Innovation–Venture Nexus: Igniting Transformative Success"
The two-day forum will once again bring together global and local leaders to explore how technology, capital and collaboration intersect to drive the next wave of growth. Entrepreneurs, investors and innovators will exchange insights on artificial intelligence, digital assets and Web 3.0—technologies that are reshaping industries and redefining both risk and opportunity.
As industries face challenges from geopolitical shifts, regulatory changes and market volatility, CVCF will serve as a platform to address a defining question: How can innovation remain bold and visionary in an ever-evolving funding landscape? Through keynotes, panel discussions and interactive sessions, the forum will spotlight the transformative potential of technologies like artificial intelligence (AI), Web 3.0 and digital assets while offering practical strategies to turn disruption into market advantage.
With investor matching, power pitches, start-up clinics and workshops, CVCF 2025 offers a front-row seat to emerging markets across Asia, the Middle East, the United States and Europe, connecting forward-thinking investors with visionary entrepreneurs. It is not just a conference but a bridge between ideas and investment, offering startups and investors a unique platform to navigate the complexities of today’s economy while seizing new opportunities for collaboration and growth.
To preview the conversations ahead, three speakers share perspectives on trends shaping the future of innovation, investment and entrepreneurship, setting the stage for the discussions that will unfold at CVCF 2025.

Alvin Kwock, Co-founder and CEO, AIFT
Session: Riding the Middle East Momentum — Capitalizing Unique Innovation and Investment Strengths
As the Middle East accelerates its shift from oil dependence toward digital diversification, the region is becoming a focal point for blockchain and AI investment. In his upcoming session, Alvin Kwock will explore the region’s innovation potential — and here, he shares some of his views on the opportunities shaping that transformation.
Alvin Kwock, co-founder and CEO of AIFT, oversees operations across three verticals: AI and cybersecurity (Vulcan and Cymetrics), blockchain (OneInfinity and OneSavie) and pet and B2C (OneDegree). With local operations spanning Asia and the Middle East, AIFT is expanding rapidly.
When asked about the Middle East’s rapid rise as a global innovation hub, Kwock said that the region is shifting from a petroleum-dependent economy to one increasingly diversified through technology and innovation, with markets advancing blockchain and AI technologies. AIFT is prioritizing expansion in the UAE and Saudi Arabia, where AI investment and regulatory openness create immense potential. Hong Kong’s expertise in financial risk management acts as a “confidence anchor” for international markets, allowing AIFT to deliver compliant solutions tailored for emerging markets while developing Sharia-compliant, regulation-aligned technologies.
“Hong Kong’s storied expertise in financial risk management acts as a ‘confidence anchor’ for international markets.”
He also noted that the region’s accelerating digital adoption opens unique opportunities for AI, insurtech and fintech. The UAE’s and Bahrain’s embrace of virtual assets, combined with Hong Kong’s proven frameworks, provides a foundation for localized solutions. By integrating risk oversight and regulatory best practices, AIFT supports stable market growth and delivers specialized insurance to enhance resilience in emerging markets.
On managing geopolitical risk, Kwock explained that AIFT mitigates exposure through local partnerships, regulatory alignment and cultural understanding. By hiring Arab employees and ensuring operations align with Islamic values, AIFT strengthens Hong Kong–Middle East collaboration. This approach, he said, offers a blueprint for startups: prioritize local engagement and flexibility to balance risk and growth.

Kang Shen, Founder, Hash Global Advisory Company Ltd.
Session: From Hype to Holdings — Where Smart Money Goes in Digital Assets 2025–2027
With institutional frameworks for Web 3.0 maturing, investors are increasingly focused on sustainable value creation. In his session, Kang Shen will discuss how smart capital is moving beyond speculation toward real-world utility—themes echoed in his reflections shared ahead of the forum.
Kang Shen, founder of Hash Global Advisory, applies value-investing principles to the Web 3.0 sector. A graduate of Fudan University and the University of Chicago Booth School of Business and a Chartered Financial Analyst (CFA), Shen has more than 20 years of financial industry experience with roles at the Industrial Bank of Japan, PIMCO and Bosera Asset Management.
On the tokenization of real-world assets, Shen observed that the RWA sector remains in its early phase of regulatory and infrastructure development. Over the next two years, as compliance systems mature, scalable projects with tangible value will emerge. For now, his approach remains cautious, focusing on fundamentals rather than inflated market narratives.
He also shared his optimism for three areas with the most potential upside: Web 3.0 Culture and Entertainment—including projects like Meet48 and Offgrid; Web 3.0 E-Commerce and Payments—with ventures such as WSPN, RD Technologies and Bitgoods; and On-Chain Data and Data Assets—such as Chainbase and Data Dance Chain. These, he noted, represent meaningful real-world applications of Web 3.0 technologies.
“Web 3.0 is currently undergoing a process of value realignment.”
Shen emphasized that Hash Global has always been committed to applying value-investing principles to the field of digital asset management. As early as 2019, the firm proposed using a monetary equation framework to evaluate ecosystem tokens and recently defined a new class—“Value-Functional Tokens”. He believes Web 3.0 is now undergoing a process of value realignment, where genuine utility will determine long-term worth.

Eric Liu, Founder and CEO, Zhejiang Linctex Digital Technology Co., Ltd. (Style3D)
Session: Strategic Exits — IPO Paths for Expanding Rapid-Growth Companies
The fashion and textile industry is undergoing rapid digital transformation. Against this backdrop, Eric Liu will join CVCF 2025 to discuss strategic growth and expansion paths for fast-scaling companies.
Eric Liu, founder and CEO of Zhejiang Linctex Digital Technology Co., Ltd. (Style3D), holds dual master’s degrees in applied computing and molecular biology from the Vrije Universiteit Brussel (VUB) in Belgium and a PhD in Electronic Information Engineering from Zhejiang University. A serial entrepreneur in the textile industry, Liu founded Style3D to drive digital transformation through AI and 3D technology.
He explained that Style3D’s fusion of AI and 3D technology builds a full-chain digital ecosystem. AI-driven design tools powered by large language models shorten design cycles from weeks to hours, while 3D simulation reduces prototyping costs by 30 percent. The company’s self-developed simulation engine supports virtual fashion shows and sustainability initiatives by optimizing fabric usage.
“Style3D’s fusion of AI and 3D technology builds a full-chain digital ecosystem.”
On the company’s origins, Liu said that traditional fashion R&D cycles are slow and costly. By integrating AI for pattern generation and 3D for design-to-production links, Style3D overcomes these barriers. With over 200 core patents and an extensive database of 2.3 million fabric properties and 1.2 million garment templates, the company leads digital fashion innovation.
Looking ahead, Liu noted that Style3D reinvests 40 percent of annual revenue into R&D, develops AI-driven trend prediction tools and expands innovation hubs in Paris and Milan. By leading the standardization of “3D Digital Fashion Infrastructure”, Style3D is setting the industry benchmark for the next era of intelligent manufacturing.
As global innovators prepare to gather at CVCF 2025, the forum promises to ignite ideas, discoveries and partnerships that will shape the future of technology and investment. From cutting-edge insights to practical strategies, the conversations starting here are just the beginning of a journey to redefine what’s possible in the global innovation ecosystem.
At under US$1,000, Hypernova isn’t just eyewear—it’s Meta’s push to make AR feel ordinary.
Meta is preparing to launch its next big wearable: the Hypernova smart glasses. Unlike earlier experiments like the Ray-Ban Stories, these new glasses promise more advanced features at a price point under US$1,000. With a launch set for September 17 at Meta’s annual Connect conference, the Hypernova is already drawing attention for blending design, technology and accessibility.
In this article, let’s take a closer look at Hypernova’s design, features, pricing and the challenges Meta faces as it tries to bring smart glasses into everyday life.
Meta’s earlier Ray-Ban glasses offered cameras and audio but no display. Hypernova changes that: The glasses will ship with a built-in micro-display, giving wearers quick access to maps, messages, notifications and even Meta’s AI assistant. It’s a step toward everyday AR that feels useful and natural, not experimental.
Perhaps most importantly, the price makes them attainable. While early estimates placed the cost above US$1,000, Meta has committed to a launch price of around US$800. That’s still premium, but it moves AR smart glasses into reach for more consumers.
Hypernova weighs about 70 grams, roughly 20 grams heavier than the Ray-Ban Meta models. The extra weight likely comes from new components such as the display and additional sensors.
To keep the glasses stylish, Meta continues its partnership with EssilorLuxottica, the company behind Ray-Ban and Prada eyewear. Thicker frames—especially Prada’s designs—help hide the hardware like chips, microphones and batteries without making the glasses look oversized.
The glasses stick close to the classic Ray-Ban silhouette but feature slightly bulkier arms. On the left side, a touch-sensitive bar lets users control functions with taps and swipes. For example, a two-finger tap can trigger a photo or start video recording.
Hypernova introduces something the earlier Ray-Ban glasses never had: a display built right into the lens. In the bottom-right corner of the right lens, a small micro-screen uses waveguide optics to project a digital overlay with about a 20° field of view. This means you can glance at turn-by-turn directions, check a notification or quickly consult Meta’s AI assistant without pulling out your phone. It’s discreet, practical and a major step up from the older models, which were limited to capturing photos and videos, handling calls and playing music via speakers.
Alongside the glasses comes the Ceres wristband, a companion device powered by electromyography (EMG). The band picks up the tiny electrical signals in your wrist and fingers, translating them into commands. A pinch might let you select something, a wrist flick could scroll a page, and a swipe could move between screens. The idea is to avoid clunky buttons or having to talk to your glasses in public. Meta has also been experimenting with handwriting recognition through the band, though it’s not clear if that feature will be ready in time for launch.
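The gesture-to-command step at the end of an EMG pipeline can be sketched simply. The gesture labels and commands below are assumptions drawn from the examples in the article (pinch, wrist flick, swipe), not Meta's actual design, and the hard part — classifying raw wrist signals into those labels — is deliberately left out.

```python
# Hypothetical mapping from decoded EMG gestures to UI commands,
# following the examples the article gives for the Ceres wristband.
# Labels and commands are illustrative; Meta's pipeline is not public.

GESTURE_COMMANDS = {
    "pinch": "select",
    "wrist_flick": "scroll",
    "swipe": "next_screen",
}

def handle_gesture(gesture):
    """Translate a decoded gesture label into a UI command,
    ignoring anything the upstream classifier can't name."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture("pinch"))
```

Note the default of "ignore": with loosely worn bands and noisy signals, as the article's testers report, silently dropping unrecognized input is usually safer than guessing.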
Meta doesn’t just want Hypernova to be useful—it wants it to be fun. Code found in leaked firmware revealed a small game called Hypertrail. It looks to borrow ideas from the 1981 arcade shooter Galaga, letting wearers play a simple, retro-inspired game right through their glasses. It’s not the main attraction, but it shows Meta is trying to make Hypernova feel more like a playful everyday gadget rather than just a piece of serious tech.
Hypernova runs on a customized version of Android and pairs with smartphones through the Meta View app. Out of the box, it should support the basics: calls, music and message notifications. Leaks suggest several apps will come preinstalled, including Camera, Gallery, Maps, WhatsApp, Messenger and Meta AI. A Qualcomm processor powers the whole setup, helping it run smoothly while keeping energy demands reasonable.
Meta is also trying to bring in outside developers. In August 2025, CNBC reported that the company invited third-party developers—especially in generative AI—to build experimental apps for Hypernova and the Ceres wristband. The Meta Connect 2025 agenda even highlights sessions on a new smart glasses SDK and toolkit. The push shows Meta’s interest in making Hypernova more than just a device; it wants a broader platform with apps that go beyond its own first-party software.
During development, Hypernova was rumored to cost as much as US$1,400. By pricing it around US$800, Meta signals that it wants adoption more than profit. The company is keeping production limited (around 150,000 units), showing it sees this as a market test rather than a mass rollout. Still, the sub-US$1,000 price tag makes advanced AR far more accessible than before.
Despite its promise, Hypernova may still face hurdles. The Ceres wristband can struggle if worn loosely, and some testers have reported issues based on which arm it’s worn on or even when wearing long sleeves. In short, getting EMG input right for everyone will be critical.
Privacy is another major concern. In past experiments, researchers hacked Ray-Ban Meta glasses to run facial recognition, instantly identifying strangers and pulling personal info. Meta has added safeguards, like a recording indicator light, but critics argue these measures are too easy to ignore. Moreover, data captured by smart glasses can feed into AI training, raising questions about consent and surveillance.
The Meta Hypernova smart glasses mark a turning point in wearable tech. They’re lighter and more stylish than bulky AR headsets, while offering real-world features like navigation, messaging and hands-free control. At under US$1,000, they aim to make AR glasses more than a luxury gadget—they’re a step toward everyday use.
Whether Hypernova succeeds will depend on how well it balances style, usability and privacy. But one thing is clear: Meta is betting that always-on, glanceable AR can move from science fiction to daily life.
Here’s the story of how a quirky toy transformed into a worldwide phenomenon.
Trends move fast. One moment it's Dubai's viral "Kunafa" chocolate bar; the next it's Labubu, a mischievous-looking doll that has racked up US$670 million in revenue this year, even outpacing Barbie and Hot Wheels. Celebrities like BLACKPINK's Lisa and Dua Lipa have been spotted with Labubu dolls—whether as bag charms or in playful social posts.
For those unfamiliar, Labubu is the breakout character from the book series "The Monsters" by Hong Kong-born, Belgium-based artist Kasing Lung. Alongside Labubu, the series features other quirky monsters like Zimomo, Mokoko and Tycoco—often grouped together as "Labubus". These vinyl Labubu figures first entered the collectible scene in 2011 as "Monsters", produced by Hong Kong-based production house How2Work. In 2019, Lung signed an exclusive licensing deal with Pop Mart, a Beijing-based toy collectible company, which further boosted the recognition and popularity of the franchise.
At first glance, Labubu might seem like just another fad. But the craze shows something deeper: in digital marketing, virality doesn't happen by accident. It's the result of timing, relatability and the way global communities amplify trends.
So, what can marketers learn from the Labubu phenomenon? Let’s take a closer look.
Labubu's unconventional aesthetics—a mischievous grin, sharp teeth and wide eyes—break the traditional mold of "cute" toys. A social listening report from Meltwater, a media intelligence company, reveals that from January to May 2025, mentions of "cute" outnumbered "ugly" nearly five to one. This "ugly-cute" look gave Labubu its identity and helped it stand out in a crowded market.
Marketing lesson: In a world where everything blends together on endless feeds, uniqueness wins. Standing out with bold, even unconventional design choices can spark curiosity and desire. By leaning into what makes a product different, brands create instant recognition and give people something worth talking about.
Labubu's surge in popularity is deeply rooted in Pop Mart's focus on building genuine relationships with its fans. The company encourages user-generated content—unboxings, fan art, influencer stories—that fuels Labubu's spread online and builds brand engagement. Fans weren't just buying toys; they were becoming part of a community that celebrated each new design.
Marketing lesson: Customers don’t want to feel like faceless buyers. They want to feel seen, heard and part of something bigger. By encouraging engagement and valuing contributions, brands can turn casual customers into loyal advocates who spread the word on their behalf.
While Pop Mart notes Labubu is most popular among women aged 18–30, its audience has broadened beyond that group. The design draws on influences from Nordic mythology and East Asian “kawaii” culture, making it feel both familiar and new to global audiences.
For Millennials and Gen Xers, Labubu also sparks nostalgia for toy crazes like Tickle Me Elmo and Beanie Babies that once lit up childhoods before fading away. Together, these layers of cultural resonance and cross-generational charm give Labubu an unusually broad reach.
Marketing lesson: Relatability is a powerful driver of virality. When a product can connect across generations and cultures, it expands far beyond a niche fan base. Brands that blend familiarity with novelty can build bridges to much larger audiences.
Labubu's blind box model makes buying feel like a game. The thrill of not knowing which design you'll unwrap makes collecting Labubus fun. It also turns buying into an emotional experience rather than a rational choice, fueling the urge to complete entire collections.
The suspense itself also became content—millions watched unboxing videos to share in the excitement. Even BLACKPINK's Lisa admitted she began with "only three to four" Labubus but soon wanted "a whole box" of the latest collection.
Marketing lesson: Mystery creates excitement, and excitement drives repeat purchases. By adding an element of surprise, brands can make the buying experience feel less like a transaction and more like a story unfolding. That thrill keeps customers coming back and makes the product easy to share online.
Pop Mart releases Labubus in limited drops, often tied to holidays or cultural events. Some editions include ultra-rare "chase" figures—appearing only once in every 144 boxes—creating a strong sense of urgency and fear of missing out (FOMO) among buyers. This strategy fuels a booming resale market, where regular figures retailing at US$25 can sell for US$200–US$300, and rare editions have even fetched prices up to US$150,000.
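Those 1-in-144 odds are what make chase figures so potent a scarcity lever. As a rough illustration only—assuming each box is an independent 1/144 draw, which real case-packing may not match—a short calculation shows how steep the climb to a chase figure actually is:

```python
# Illustrative sketch: probability of pulling at least one "chase" figure,
# treating every blind box as an independent 1-in-144 draw.
# (An assumption for illustration; Pop Mart's actual case packing may differ.)

def chance_of_chase(boxes: int, rarity: int = 144) -> float:
    """Probability of at least one chase figure in `boxes` blind boxes."""
    # Complement rule: 1 minus the chance that every box misses.
    return 1 - (1 - 1 / rarity) ** boxes

for n in (1, 6, 24, 144):
    print(f"{n:>3} boxes: {chance_of_chase(n):.1%}")
```

Under this simple model, even buying a full case of 144 boxes gives only about a 63% chance of landing a single chase figure—which helps explain both the repeat purchases and the resale premiums.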
Marketing lesson: Scarcity isn't just about limiting supply—it's about building anticipation. By tying releases to events and sprinkling in rare editions, brands keep fans watching for the next drop. This combination of urgency and exclusivity transforms ordinary products into must-have collectibles.
Labubu has expanded its reach through creative brand collaborations. For instance, the Labubu x Coca-Cola series features figures in iconic red-and-white themes, while a Vans Old Skool drop brought the clothing brand's signature checkerboard streetwear pattern into the collectible world. The One Piece collaboration blended Labubu's quirky style with beloved anime heroes, appealing to fans of both worlds.
Marketing lesson: Collaborations breathe fresh life into a brand and open doors to new audiences. Partnering with well-known names adds cultural weight and collectible value, while keeping the brand relevant in different communities. Done right, collaborations turn niche products into mainstream sensations.
Labubu’s phenomenal success is more than a passing craze. It’s proof that bold design, authentic community building, clever scarcity and cultural collaborations can transform a quirky idea into a global movement.
For marketers, the takeaway is simple: don’t just chase trends—create something real and let your community shape the story with you. Be bold, stay authentic and bring your fans along for the ride. That’s how brands move from fleeting hype to lasting cultural icons.
How startups can use nostalgia marketing to build trust, spark loyalty and stand out with storytelling, vintage design and emotional connections.
Turning the subtle power of nostalgia into meaningful marketing.
Think of nostalgia as a time machine for brands—it doesn’t just take people back; it brings their emotions forward. And emotions sell. For those who are unfamiliar, nostalgia marketing is a strategy where brands use elements from the past—like familiar sights, sounds, or stories—to evoke warm memories and emotional connections with their audience.
This emotional pull isn’t just anecdotal—research shows its real impact: according to The Team and Forbes via The Drum, 80% of millennials and Gen Z are drawn to brands tapping into nostalgia, while 92% of consumers say nostalgic ads feel more relatable. And for startups competing in noisy markets, this is a goldmine.
In this article, we’ll explore why nostalgia marketing can be a game-changing strategy for your company.
Out of all the popular marketing methods—like influencer partnerships or attention-grabbing ad campaigns—nostalgia is unique because its impact starts intrinsically, in the brain. By triggering the release of dopamine, a reward-system neurotransmitter, nostalgia evokes feelings of warmth, happiness and comfort. Consequently, people don't just remember a moment—they relive it. Take, for instance, your favorite cereal brand bringing back childhood cartoon characters or using retro fonts and colors. You might choose it over a healthier breakfast option simply because it reminds you of the mornings you enjoyed as a kid. Coca-Cola has mastered this effect in much the same way, using classic holiday ads, vintage packaging and iconic imagery. Those associations make people see Coke as more than a drink—it's a familiar feeling they're willing to pay extra for.
New marketing campaigns can spark curiosity but often trigger skepticism—especially when audiences lack prior connection to the brand. Nostalgia marketing breaks down this barrier by tapping into familiarity, using retro jingles, vintage fonts, pastel colors, or familiar packaging that immediately resonate. This recognition builds an emotional connection and trust with the brand. More importantly, it fosters social connectedness by making consumers feel part of a larger community—giving that reassuring “others remember this too” feeling. As a result, this sense of belonging reduces loneliness, strengthens warmth and trust, and encourages word-of-mouth sharing, naturally amplifying the campaign’s reach and impact.
While luxury brands can afford massive campaigns, startups and small businesses can tap into nostalgia as a cost-effective storytelling tool. In a world where marketing often chases the “next big thing”—from AI to futuristic tech—nostalgia offers the opposite: a chance to revisit the past. More importantly, nostalgia allows brands to stand out in a crowded, fast-scrolling feed by delivering something comfortingly familiar with a fresh twist. Think of Polaroid: in an age where smartphones boast crystal-clear cameras, it wins hearts with pastel hues, a vintage lens, and the tactile charm of instant prints—selling not just images, but a moment that feels straight out of the past.
The same principle worked brilliantly for Tiffany & Co., whose 185-year-old brand refresh featured Jay-Z and Beyoncé in a Breakfast at Tiffany’s-inspired campaign, blending timeless charm with contemporary star power and racking up millions of views. In essence, when done right, nostalgia doesn’t just market a product—it invites people to relive a story they already love.
Nostalgia resonates across generations, speaking to diverse audiences. For Millennials, it's a chance to relive the cultural touchpoints of their youth, while Gen Z approaches it with curiosity, eager to explore eras they never experienced firsthand. This crossover creates a unique marketing sweet spot: one group is driven by memory, the other by discovery. Pokémon proves this power by keeping lifelong fans engaged through retro trading cards while introducing younger audiences to its history. Similarly, Nike used nostalgia to bridge two different generations by reissuing retro classics, keeping both longtime fans and new sneakerheads excited. By appealing to both memory and curiosity, brands can create lasting connections that keep different generations engaged at once.
Nostalgia can be your startup’s non-cliché marketing mantra. Imagine a small bookstore that offers handwritten recommendation cards designed like vintage library checkout slips. This simple touch invites customers to slow down and rediscover the joy of reading. Or picture a local coffee shop serving drinks in mugs inspired by classic diner ware, evoking comforting memories of simpler times. Overall, the lesson is clear: combining nostalgic design with stories that connect people to shared moments creates emotional warmth and trust. Thoughtful nostalgia turns everyday products into meaningful experiences—building loyal communities eager to return.