Artificial Intelligence

What Happens When AI Writes the Wrong References?

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

Updated

January 8, 2026 6:33 PM

The University of Hong Kong in Pok Fu Lam, Hong Kong Island. PHOTO: ADOBE STOCK

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take xAI’s Grok, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer faced court sanctions for relying on ChatGPT to draft a legal brief, only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations—sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.
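Spotting this kind of fabrication can be partly automated. The snippet below is a minimal sketch, not anything used in the actual investigation: it asks the public doi.org resolver whether each identifier in a list actually resolves, and the sample DOIs are placeholders rather than citations from the paper.

```python
# Minimal sketch: check whether each DOI in a reference list resolves.
# The sample DOIs are placeholders, not citations from the paper in question.
import urllib.error
import urllib.request

SAMPLE_DOIS = [
    "10.1000/182",                 # the DOI Handbook, a known-good identifier
    "10.9999/not.a.real.record",   # an identifier that should not resolve
]

def doi_resolves(doi: str) -> bool:
    """Return True if https://doi.org knows the DOI, False on 'DOI Not Found'."""
    request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        # doi.org answers 404 for unregistered identifiers; other errors
        # (e.g. a publisher site rejecting HEAD) still mean the DOI resolved.
        return err.code != 404

for doi in SAMPLE_DOIS:
    print(doi, "->", "resolves" if doi_resolves(doi) else "DOI Not Found")
```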

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility,” Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated—only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI; it’s about learning how to work alongside it.

Keep Reading

Startup Profiles

How Startup xCREW Is Building a Different Kind of Running Platform

A look at how motivation, not metrics, is becoming the real frontier in fitness tech

Updated

February 7, 2026 2:18 PM

A group of people running together. PHOTO: FREEPIK

Most running apps focus on measurement. Distance, pace, heart rate, badges. They record activity well, but struggle to help users maintain consistency over time. As a result, many people track diligently at first, then gradually disengage.

That drop-off has pushed developers to rethink what fitness technology is actually for. Instead of just documenting activity, some platforms are now trying to influence behaviour itself. Paceful, an AI-powered running platform developed by SportsTech startup xCREW, is part of that shift — not by adding more metrics, but by focusing on how people stay consistent.

The platform is built on a simple behavioural insight: most people don’t stop exercising because they don’t care about health. They stop because routines are fragile. Miss a few days and the habit collapses. Technology that focuses only on performance metrics doesn’t solve that. Systems that reinforce consistency, belonging and feedback loops might.

Instead of treating running as a solo, data-driven task, Paceful is built around two ideas: behavioural incentives and social alignment. The system turns real-world running activity into tangible rewards, and it uses AI to connect runners to people, clubs and challenges that fit how and where they actually run.

At the technical level, Paceful connects with existing fitness ecosystems. Users can import workout data from platforms like Apple Health and Strava rather than starting from scratch. Once inside the system, AI models analyse pace, frequency, location and participation patterns. That data is used to recommend running partners, clubs and group challenges that match each runner’s habits and context.
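xCREW has not published how that matching actually works, so the following is purely an illustration: a toy scoring function built from the signals the company says it looks at (pace, frequency and location), with made-up names and weights.

```python
# Hypothetical illustration only -- Paceful's actual models are not public.
# Ranks potential running partners by similarity in pace, frequency and location.
from dataclasses import dataclass
from math import hypot

@dataclass
class Runner:
    name: str
    pace_min_per_km: float  # typical pace, minutes per kilometre
    runs_per_week: float    # typical frequency
    lat: float              # home-base coordinates
    lon: float

def match_score(a: Runner, b: Runner) -> float:
    """Higher is better; the weights below are invented for this sketch."""
    pace_gap = abs(a.pace_min_per_km - b.pace_min_per_km)
    freq_gap = abs(a.runs_per_week - b.runs_per_week)
    dist_gap = hypot(a.lat - b.lat, a.lon - b.lon) * 111.0  # rough degrees-to-km
    return 1.0 / (1.0 + 0.5 * pace_gap + 0.3 * freq_gap + 0.05 * dist_gap)

def recommend_partners(me: Runner, others: list[Runner], k: int = 3) -> list[Runner]:
    return sorted(others, key=lambda other: match_score(me, other), reverse=True)[:k]

me = Runner("Kim", pace_min_per_km=5.5, runs_per_week=3, lat=22.29, lon=114.15)
others = [
    Runner("Alex", 5.4, 3, 22.28, 114.14),
    Runner("Sam", 5.6, 4, 22.30, 114.17),
    Runner("Jo", 7.2, 1, 22.50, 114.00),
]
for partner in recommend_partners(me, others):
    print(partner.name, round(match_score(me, partner), 3))
```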

What makes this approach different is not the tracking itself, but what the platform does with the data it collects. Running distance and consistency become inputs for a reward system that offers physical-world incentives, such as gear, race entries or gift cards. The idea is to link effort to something concrete, rather than abstract. The company also built the system around community logic rather than individual competition. Even solo runners are placed into challenge formats designed to simulate the motivation of a group. In practice, that means users feel part of a shared structure even when running alone.
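How exactly distance and consistency translate into those rewards isn’t disclosed. As a toy example of the design choice described above, here is one way a platform might weight showing up regularly more heavily than raw mileage when converting runs into redeemable points; every number is invented.

```python
# Toy example -- the real Paceful reward rules are not public.
# Converts a week of runs into points, favouring consistency over raw distance.
def weekly_points(distances_km: list[float], streak_weeks: int) -> int:
    base = sum(distances_km) * 10                           # 10 points per kilometre
    show_up_bonus = 50 * min(len(distances_km), 5)          # reward each run, capped
    streak_multiplier = 1.0 + 0.05 * min(streak_weeks, 10)  # up to +50% for long streaks
    return round((base + show_up_bonus) * streak_multiplier)

# Three short runs on a four-week streak outscore one long run with no streak.
print(weekly_points([5.0, 4.0, 6.0], streak_weeks=4))  # 360
print(weekly_points([15.0], streak_weeks=0))           # 200
```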

During a six-month beta phase in the US, xCREW tested Paceful with more than 4,000 running clubs and around 50,000 runners. According to the company, users increased their running frequency significantly, and weekly retention remained unusually high for a fitness platform. One beta tester summed it up this way: “Strava just logs records, but Paceful rewards you for every run, which is a completely different motivation.”

The company has raised seed funding and plans to expand the platform beyond running to walking, trekking, cycling and swimming. Instead of asking how accurately technology can measure the body, platforms like Paceful are asking a different question: how technology might influence everyday behaviour, not by adding more data, but by shaping the conditions around effort, feedback and social connection.

As AI becomes more common in consumer products, its real impact may depend less on how advanced the models are and more on what they are applied to. In this case, the focus isn’t speed or performance; it’s consistency, and whether systems like this can meaningfully support it over time.