Technology

Meta’s Hypernova Smart Glasses: Features, Price & What to Expect

At under US$1,000, Hypernova isn’t just eyewear—it’s Meta’s push to make AR feel ordinary.

Updated September 16, 2025, 7:15 PM

Closeup of the Ray-Ban logo and the built-in ultra-wide 12 MP camera on a pair of new Ray-Ban Meta Wayfarer smart glasses. PHOTO: ADOBE STOCK

Meta is preparing to launch its next big wearable: the Hypernova smart glasses. Unlike earlier experiments like the Ray-Ban Stories, these new glasses promise more advanced features at a price point under US$1,000. With a launch set for September 17 at Meta’s annual Connect conference, the Hypernova is already drawing attention for blending design, technology and accessibility.  

In this article, let’s take a closer look at Hypernova’s design, features, pricing and the challenges Meta faces as it tries to bring smart glasses into everyday life.

Why Hypernova matters

Meta’s earlier Ray-Ban glasses offered cameras and audio but no display. Hypernova changes that: The glasses will ship with a built-in micro-display, giving wearers quick access to maps, messages, notifications and even Meta’s AI assistant. It’s a step toward everyday AR that feels useful and natural, not experimental.

Perhaps most importantly, the price makes them attainable. While early estimates placed the cost above US$1,000, Meta has committed to a launch price of around US$800. That’s still premium, but it moves AR smart glasses into reach for more consumers.  

Design and build

Hypernova weighs about 70 grams, roughly 20 grams heavier than the Ray-Ban Meta models. The extra weight likely comes from new components such as the display and additional sensors.

To keep the glasses stylish, Meta continues its partnership with EssilorLuxottica, the company behind Ray-Ban and Prada eyewear. Thicker frames—especially Prada’s designs—help hide the hardware like chips, microphones and batteries without making the glasses look oversized.

The glasses stick close to the classic Ray-Ban silhouette but feature slightly bulkier arms. On the left side, a touch-sensitive bar lets users control functions with taps and swipes. For example, a two-finger tap can trigger a photo or start video recording.

Expected features of Hypernova

Integrated display

Hypernova introduces something the earlier Ray-Ban glasses never had: a display built right into the lens. In the bottom-right corner of the right lens, a small micro-screen uses waveguide optics to project a digital overlay with about a 20° field of view. This means you can glance at turn-by-turn directions, check a notification or quickly consult Meta’s AI assistant without pulling out your phone. It’s discreet, practical and a major step up from the older models, which were limited to capturing photos and videos, handling calls and playing music via speakers.  
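
To put that roughly 20° field of view in perspective, a bit of trigonometry gives the apparent width of the overlay. Meta has not said how far away the virtual image appears to float, so the 2-metre distance in this rough sketch is an assumption used purely for illustration.

import math

# Rough illustration only: Meta has not published the virtual image distance,
# so the 2-metre figure below is an assumption, not a spec.
def overlay_width_m(fov_degrees: float, virtual_distance_m: float) -> float:
    """Apparent width of an overlay with the given horizontal field of view,
    as if projected onto a plane at the assumed virtual distance."""
    return 2 * virtual_distance_m * math.tan(math.radians(fov_degrees / 2))

print(f"{overlay_width_m(20, 2.0):.2f} m")  # about 0.71 m wide at 2 m

Under that assumption, the overlay spans roughly the width of a 24-inch monitor seen from about a metre and a half away: enough room for directions or a notification, not a full AR canvas.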

Gesture controls with neural wristband

Alongside the glasses comes the Ceres wristband, a companion device powered by electromyography (EMG). The band picks up the tiny electrical signals in your wrist and fingers, translating them into commands. A pinch might let you select something, a wrist flick could scroll a page, and a swipe could move between screens. The idea is to avoid clunky buttons or having to talk to your glasses in public. Meta has also been experimenting with handwriting recognition through the band, though it’s not clear if that feature will be ready in time for launch.  
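
Meta has not published a developer interface for the wristband, but conceptually the pipeline is simple: a classifier decodes the EMG signal into a discrete gesture, and each gesture maps to an interface command. The sketch below is hypothetical; the gesture labels and command names are invented for illustration.

from enum import Enum, auto

# Hypothetical sketch: the gesture vocabulary and command names here are
# invented for illustration; Meta has not published an API for the wristband.
class Gesture(Enum):
    PINCH = auto()        # thumb-to-index pinch
    WRIST_FLICK = auto()  # quick flick of the wrist
    SWIPE = auto()        # lateral finger swipe

GESTURE_TO_COMMAND = {
    Gesture.PINCH: "select_item",        # confirm the highlighted element
    Gesture.WRIST_FLICK: "scroll_page",  # scroll the current view
    Gesture.SWIPE: "next_screen",        # move between screens or apps
}

def handle_gesture(gesture: Gesture) -> str:
    """Translate a decoded gesture into a UI command; unknown gestures are ignored."""
    return GESTURE_TO_COMMAND.get(gesture, "ignore")

print(handle_gesture(Gesture.PINCH))  # -> select_item

The hard part, of course, is the decoding step upstream of this mapping, which is exactly where fit and sensor-placement issues come into play.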

Built-in gaming

Meta doesn’t just want Hypernova to be useful—it wants it to be fun. Code found in leaked firmware revealed a small game called Hypertrail. It looks to borrow ideas from the 1981 arcade shooter Galaga, letting wearers play a simple, retro-inspired game right through their glasses. It’s not the main attraction, but it shows Meta is trying to make Hypernova feel more like a playful everyday gadget rather than just a piece of serious tech.  

App ecosystem

Hypernova runs on a customized version of Android and pairs with smartphones through the Meta View app. Out of the box, it should support the basics: calls, music and message notifications. Leaks suggest several apps will come preinstalled, including Camera, Gallery, Maps, WhatsApp, Messenger and Meta AI. A Qualcomm processor powers the whole setup, helping it run smoothly while keeping energy demands reasonable.  

Meta is also trying to bring in outside developers. In August 2025, CNBC reported that the company invited third-party developers, especially in generative AI, to build experimental apps for Hypernova and the Ceres wristband. The Meta Connect 2025 agenda even highlights sessions on a new smart glasses SDK and toolkit. The push shows Meta wants Hypernova to be more than just a device: a broader platform with apps that go beyond its own first-party software.

Pricing strategy: Why under US$1,000 matters

During development, Hypernova was rumored to cost as much as US$1,400. By pricing it around US$800, Meta signals that it wants adoption more than profit. The company is keeping production limited (around 150,000 units), showing it sees this as a market test rather than a mass rollout. Still, the sub-US$1,000 price tag makes advanced AR far more accessible than before.

Challenges ahead

Despite its promise, Hypernova may still face hurdles. The Ceres wristband can struggle if worn loosely, and some testers have reported issues depending on which arm it's worn on, or even when it's worn over long sleeves. In short, getting EMG input right for everyone will be critical.

Privacy is another major concern. In past experiments, researchers hacked Ray-Ban Meta glasses to run facial recognition, instantly identifying strangers and pulling personal info. Meta has added guidelines, like a recording indicator light, but critics argue these measures are too easy to ignore. Moreover, data captured by smart glasses can feed into AI training, raising questions about consent and surveillance.

The bottom line

The Meta Hypernova smart glasses mark a turning point in wearable tech. They’re lighter and more stylish than bulky AR headsets, while offering real-world features like navigation, messaging and hands-free control. At under US$1,000, they aim to make AR glasses more than a luxury gadget—they’re a step toward everyday use.

Whether Hypernova succeeds will depend on how well it balances style, usability and privacy. But one thing is clear: Meta is betting that always-on, glanceable AR can move from science fiction to daily life.


Health Care

The Rise of AI Companions: How Virtual Support is Redefining Mental Health Care

Can AI companions really help with our mental health?

Updated September 16, 2025, 7:23 PM

A laptop with the text "MENTAL HEALTH" displayed. PHOTO: PEXELS

As technology continues to weave itself into the fabric of our daily lives, it’s starting to play an unexpected role: supporting our mental health. AI companions—digital entities designed to hold natural, empathetic conversations—are emerging as a new frontier in emotional care. Unlike chatbots of the past, these AI companions leverage advanced algorithms and emotional intelligence to provide personalized support, making them more than just tools. They are companions in every sense of the word—always available, always listening, and always ready to offer comfort. But can AI companions truly help us feel better, or are they just another tech trend? Let’s dive into how these digital allies are reshaping mental health care and what their growing presence means for our emotional well-being.

Bridging the gap: connection in a disconnected world

Loneliness is often called an epidemic, with millions of people worldwide feeling isolated or disconnected. While human relationships are irreplaceable, AI companions offer a consistent, accessible way to combat feelings of loneliness.

These companions don’t just respond—they engage. They remember your preferences, ask follow-up questions, and adapt their conversations to your needs. Imagine having someone to talk to at any time of day, about anything on your mind, without fear of judgment. AI companions may not replace a human friend, but they can provide a sense of presence and connection that can be profoundly comforting.

In a world where reaching out to others can sometimes feel daunting, AI companions offer a simple solution: they’re always there. This consistency can help people feel less alone, fostering a sense of connection in an increasingly disconnected world.

Emotional support: a calm voice in the chaos

We all experience moments of stress, sadness, or doubt, and having someone to turn to during those times can make all the difference. AI companions are designed with emotional intelligence, enabling them to recognize and respond to your feelings in real time.

Through sentiment analysis and adaptive learning, these companions can detect when you’re feeling low and tailor their responses to provide comfort. Whether it’s offering words of encouragement, suggesting self-care activities, or simply listening, they provide a safe space to process emotions.

Unlike traditional apps that focus on tracking habits or delivering generic advice, AI companions meet you where you are emotionally. This personalized approach can help users feel truly supported, even in their most challenging moments.

A safe space for self-expression

For many of us, expressing our thoughts and emotions openly can feel like a risk. Fear of judgment, misunderstanding, or even burdening others often holds us back. AI companions offer an alternative: a completely private, judgment-free space to share whatever is on your mind.

Talking things out—whether it’s frustrations from the day or deeper personal struggles—can be incredibly therapeutic. And with AI companions, there’s no need to worry about being misunderstood or dismissed. You can let your guard down, explore your feelings, and reflect on your experiences with total freedom.

This safe space for self-expression can be especially valuable for those who struggle to open up to others. It’s not about replacing human relationships but about having an outlet that’s always available and entirely focused on you.

Building confidence, one conversation at a time

Self-doubt is a common barrier to personal growth, and many of us battle negative self-talk daily. AI companions are programmed to combat this by offering positive reinforcement and encouragement.

For example, if you express doubt about your abilities, an AI companion might respond with affirmations like, “You’ve accomplished so much already—don’t forget how capable you are.” Over time, these small but meaningful interactions can help shift your mindset, replacing self-criticism with self-compassion.

This ability to mirror supportive, affirming conversations can build confidence and foster a more positive self-image. It’s a subtle but powerful way AI companions can contribute to emotional well-being.

Final thoughts

AI companions are more than just a tech trend; they represent a new way of thinking about mental health care. By offering companionship, emotional support, safe spaces for self-expression, and encouragement that builds confidence, they empower users to take control of their well-being.

While they may not replace traditional methods of care, AI companions are making mental health support more accessible, immediate, and personalized. They’re a reminder that sometimes, the smallest interactions—an encouraging word, a moment of mindfulness, or a listening ear—can have the biggest impact.

As we embrace this new era of technology, one thing is clear: AI companions are not just about convenience. They’re about connection, support, and the potential to make emotional care a part of everyday life. And in a world that often feels disconnected, that’s something worth celebrating.