Consumer Tech

Next Gen Gates: AI Meets Fashion – Gates’ Bold Move to Dress the Future

With Phia’s AI, the new luxury is knowing what’s worth buying.

Updated

December 19, 2025 9:26 PM

Phoebe Gates and Sophia Kianni, founders of Phia. PHOTO: PHIA

AI has transformed how we shop—predicting trends, powering virtual try-ons and streamlining fashion logistics. Yet some of the biggest pain points remain: endless scrolling, too many tabs and never knowing if you’ve overpaid. That’s the gap Phia aims to close.

Co-founded by Phoebe Gates, daughter of Bill Gates, and climate activist Sophia Kianni, Phia was born in a Stanford dorm room and launched in April 2025. The app, available on mobile and as a browser extension, compares prices across over 40,000 retailers and thrift platforms to show what an item really costs. Its hallmark feature, “Should I Buy This?”, instantly flags whether something is overpriced, fair or a genuine deal.

The mission is simple: make shopping smarter, fairer and more sustainable. In just five months, Phia has attracted more than 500,000 users, indexed billions of products and built over 5,000 brand partnerships. It also secured a US$8 million seed round led by Kleiner Perkins, joined by Hailey Bieber, Kris Jenner, Sara Blakely and Sheryl Sandberg—investors who bridge tech, retail and culture. “Phia is redefining how people make purchase decisions,” said Annie Case, partner at Kleiner Perkins.  

Phia’s AI engine scans real-time data from more than 250 million products across its network, including Vestiaire Collective, StockX, eBay and Poshmark. Beyond comparing prices, the app helps users discover cheaper or more sustainable options by displaying pre-owned items next to new ones, so shoppers can see the full spectrum of choices before they buy. It also evaluates how different brands perform over time, analysing how well their products hold resale value. This insight helps shoppers judge whether a purchase is likely to retain its value or whether a second-hand version makes more sense. The result is a platform that naturally encourages circular shopping—keeping items in use longer through resale, repair or recycling—and resonates strongly with Gen Z and millennial values of sustainability and mindful spending.

By encouraging transparency and smarter choices, Phia signals a broader shift in consumer technology: one where AI doesn’t just automate decisions but empowers users to understand them. Instead of merely digitizing the act of shopping, Phia embodies data-driven accountability—using intelligent search to help consumers make informed and ethical choices in markets long clouded by complexity. Retail analysts believe this level of visibility could push brands to maintain accurate and competitive pricing. Skeptics, however, argue that Phia must evolve beyond comparison to create emotional connection and loyalty. Still, one fact stands out: algorithms are no longer just recommending what we buy—they’re rewriting how we decide.  

With new funding powering GPU expansion and advanced personalization tools, Phia’s next step is to build a true AI shopping agent—one that helps people buy better, live smarter and rethink what it means to shop with purpose.  

AI

What Happens When AI Writes the Wrong References?

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

Updated

November 28, 2025 4:18 PM

The University of Hong Kong in Pok Fu Lam, Hong Kong Island. PHOTO: ADOBE STOCK

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take Grok-2, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer faced court sanctions after relying on ChatGPT to draft a legal brief—only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations—sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility,” Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated—only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI; it’s about learning how to work alongside it.