Artificial Intelligence

How an AI Actor Is Reframing Hollywood’s Debate Over Artificial Intelligence

AI actor Tilly Norwood releases a music video arguing that artificial intelligence can expand creativity in film

Updated

March 13, 2026 2:18 PM

AI Actor Tilly Norwood. PHOTO: INSTAGRAM@TILLYNORWOOD

As Hollywood prepares for this weekend’s Oscars, a different kind of performer is stepping into the spotlight — one that doesn’t physically exist.

Tilly Norwood, described as the world’s first AI actor, has released her debut musical comedy video, Take the Lead. The project arrives at a moment when artificial intelligence has become one of the most contentious topics in the film industry.

The song’s message is simple: AI should not be seen as a threat to actors but as another creative tool. The release also offers a first look at what Norwood’s creators call the “Tillyverse,” envisioned as a cloud-based entertainment world where AI characters can live, interact and perform.

Behind the character is actor and producer Eline van der Velden, CEO of production company Particle6 and AI talent studio Xicoia. Van der Velden created Tilly as a way to experiment with how artificial intelligence could be used in storytelling.

The timing is not accidental. The entertainment industry has spent the past few years debating the role AI should play in filmmaking and acting. Questions about digital replicas, automated performances and creative ownership continue to divide artists and studios.

Norwood’s musical video enters that debate with a different tone. Instead of warning about AI replacing actors, the project suggests that the technology could expand what performers are able to do.

The video itself also serves as a technical experiment. The song Take the Lead was generated using the AI music platform Suno. The video was then produced using a combination of widely available AI tools and Particle6’s own creative process.

A central technique in the project was performance capture: van der Velden physically acted out Tilly’s movements and expressions so the digital character could mirror a human performance. But the production was far from automated. According to Particle6, a team of 18 people worked on the video, including a director, editor, production designer, costume designer, comedy writer and creative technologist. In other words, the project still relied heavily on human creativity.

“Tilly has always been a vehicle to test the creative capabilities and boundaries of AI,” van der Velden said. “It’s not about taking anyone’s job.” She added that even with powerful tools, good AI content still takes time, taste and creative direction.

The project also reflects how quickly production technology is evolving. Tools that once required large studios are now accessible to smaller creative teams experimenting with AI-driven storytelling.

For Particle6, the character of Tilly Norwood acts as a testing ground. Each project explores how AI performers might be developed, directed and integrated into entertainment. Whether audiences embrace digital actors remains an open question. Many in the industry are still wary of how AI could reshape creative work.

But projects like Take the Lead show another possibility. Instead of replacing performers, artificial intelligence could become part of the creative process itself. In that sense, Tilly Norwood may represent something more than a virtual performer. She is also an experiment in how humans and machines might collaborate in the future of entertainment.


Rokid Glasses Get Smarter: Gemini and ChatGPT Bring AI to AR Eyewear Worldwide

AI meets AR: How Rokid Glasses bring multilingual, real-time intelligence to smart eyewear globally

Updated

March 3, 2026 3:50 PM

Rokid's smart glasses. PHOTO: ROKID

Rokid, a Chinese company specializing in AI-powered smart eyewear and human–computer interaction, has rolled out a major software update for the international version of its Rokid Glasses. This update makes it the first smart glasses manufacturer to natively support Google’s Gemini, alongside three other leading large language models: OpenAI’s ChatGPT, Alibaba’s Qwen and DeepSeek.

The integration is powered by Rokid’s device-to-cloud architecture, which enables users to switch between AI models on the fly. In practice, this means a traveler can receive a real-time translation in Japanese using one AI model, then quickly switch to ChatGPT to answer a technical query—without noticeable delay. The system also supports multi-modal inputs like voice and gestures, making interactions more intuitive for everyday use.
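The switching behavior described above can be pictured as a small routing layer that keeps several cloud LLM backends registered and swaps the active one per request. The sketch below is purely illustrative: the class and method names (`ModelRouter`, `switch`), the model keys, and the endpoints are assumptions for explanation, not Rokid’s actual API.

```python
from dataclasses import dataclass


@dataclass
class Backend:
    """A cloud LLM endpoint the glasses relay requests to (hypothetical)."""
    name: str
    endpoint: str


class ModelRouter:
    """Keeps multiple LLM backends registered and lets the active one be
    swapped on the fly, so a translation can go to one model and a
    technical query to another without re-pairing the device."""

    def __init__(self):
        self._backends = {}
        self._active = None

    def register(self, key, backend):
        self._backends[key] = backend
        if self._active is None:
            self._active = key  # first registered model is the default

    def switch(self, key):
        if key not in self._backends:
            raise KeyError(f"unknown model: {key}")
        self._active = key

    @property
    def active(self):
        return self._backends[self._active].name


# Register the four models named in the article (endpoints are placeholders).
router = ModelRouter()
router.register("gemini", Backend("Gemini", "https://example.invalid/gemini"))
router.register("chatgpt", Backend("ChatGPT", "https://example.invalid/chatgpt"))
router.register("qwen", Backend("Qwen", "https://example.invalid/qwen"))
router.register("deepseek", Backend("DeepSeek", "https://example.invalid/deepseek"))

router.switch("chatgpt")  # e.g. user asks a technical question
print(router.active)
```

In a real device-to-cloud design, the glasses would send only the request and the selected model key; the cloud side handles each provider’s API, which is what keeps switching fast on low-power hardware.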

This is more than a routine software update. By combining AI models from both U.S. and Chinese developers, Rokid is making its smart glasses relevant to global users, with features that adapt to local languages and preferences while maintaining high performance.  

These technological advancements have directly fueled Rokid’s international growth. Between November 2024 and October 2025, Shangpu Group data shows Rokid Glasses ranked No. 1 in global sales among AI glasses with display functionality. Crowdfunding milestones further reflect this momentum: the product became the fastest smart glasses to raise over 100 million Japanese yen on Japan’s MAKUAKE platform and broke Kickstarter records for smart eyewear.

Taken together, Rokid’s update highlights a shift in the smart glasses space: success increasingly comes from openness, flexibility and localized AI experiences rather than closed, single-platform ecosystems. By giving users choice, integrating global AI capabilities and bridging cultural and linguistic gaps, Rokid is positioning itself as a serious contender in the international AR and AI wearable market.