Brains, bots and the future: Who’s really in control?
Updated
November 28, 2025 4:06 PM

Adoration and disdain: the polarized reactions to generative AI. ILLUSTRATION: YORKE YU
When British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton joked that his ex-girlfriend once used ChatGPT to help her break up with him, he wasn’t exaggerating. The father of deep learning was pointing to something stranger: how machines built to mimic language have begun to mimic thought — and how even their creators no longer agree on what that means.
In that one quip — part humor, part unease — Hinton captured the paradox at the center of the world’s most important scientific divide. Artificial intelligence has moved beyond code and circuits into the realm of psychology, economics and even philosophy. Yet among those who know it best, the question has turned unexpectedly existential: what, if anything, do large language models truly understand?
Across the world’s AI labs, that question has split the community into two camps — believers and skeptics, prophets and heretics. One side sees systems like ChatGPT, Claude, and Gemini as the dawn of a new cognitive age. The other insists they’re clever parrots with no grasp of meaning, destined to plateau as soon as the data runs out. Between them stands a trillion-dollar industry built on both conviction and uncertainty.
Hinton, who spent a decade at Google refining the very neural networks that now power generative AI, has lately sounded like a man haunted by his own invention. Speaking to Scott Pelley in a CBS 60 Minutes interview aired on October 8, 2023, Hinton said, “I think we're moving into a period when for the first time ever we may have things more intelligent than us.” He said it not with triumph, but with visible worry.
Yoshua Bengio, his longtime collaborator, sees it differently. Speaking at the All In conference in Montreal, he told TIME that future AI systems “will have stronger and stronger reasoning abilities, more and more knowledge,” while cautioning that we must ensure they “act according to our norms.” And then there’s Gary Marcus, the cognitive scientist and enduring critic, who dismisses the hype outright: “These systems don’t understand the world. They just predict the next word.”
It’s a rare moment in science when three pioneers of the same field disagree so completely — not about ethics or funding, but about the very nature of progress. And yet that disagreement now shapes how the future of AI will unfold.
In the span of just two years, large language models have gone from research curiosities to corporate cornerstones. Banks use them to summarize reports. Lawyers draft contracts with them. Pharmaceutical firms explore protein structures through them. Silicon Valley is betting that scaling these models — training them on ever-larger datasets with ever-denser computers — will eventually yield something approaching reasoning, maybe even intelligence.
It’s the “bigger is smarter” philosophy, and it has worked — so far. OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini have grown exponentially in capability. They can write code, explain math, outline business plans, even simulate empathy. For most users, the line between prediction and understanding has already blurred beyond meaning. Kelvin So, who is now conducting AI research at PolyU SPEED, commented, “AI scientists today are inclined to believe we have learnt a bitter lesson in the advancement from the traditional AI to the current LLM paradigm. That said, scaling law, instead of human-crafted complicated rules, is the ultimate law governing AI.”
But inside the labs, cracks are showing. Scaling these models has become staggeringly expensive, and the returns are diminishing. A growing number of researchers suspect that raw scale alone cannot unlock true comprehension — that these systems are learning syntax, not semantics; imitation, not insight.
That belief fuels a quiet counter-revolution. Instead of simply piling on data and GPUs, some researchers are pursuing hybrid intelligence — systems that combine statistical learning with symbolic reasoning, causal inference, or embodied interaction with the physical world. The idea is that intelligence requires grounding — an understanding of cause, consequence, and context that no amount of text prediction can supply.
Yet the results speak for themselves. In practice, language models are already transforming industries faster than regulation can keep up. Marketing departments run on them. Customer support, logistics and finance teams depend on them. Even scientists now use them to generate hypotheses, debug code and summarize literature. For every cautionary voice, there are a dozen entrepreneurs who see this technology as a force reshaping every industry. That gap — between what these models actually are and what we hope they might become — defines this moment. It’s a time of awe and unease, where progress races ahead even as understanding lags behind.
Part of the confusion stems from how these systems work. A large language model doesn’t store facts like a database. It predicts what word is most likely to come next in a sequence, based on patterns in vast amounts of text. Behind this seemingly simple prediction mechanism lies a sophisticated architecture. The tokenizer is one of the key innovations behind modern language models. It takes text and chops it into smaller, manageable pieces the AI can understand. These pieces are then turned into numbers, giving the model a way to “read” human language. By doing this, the system can spot context and relationships between words — the building blocks of comprehension.
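To make the idea concrete, here is a minimal Python sketch of what a tokenizer does, using a hypothetical toy vocabulary rather than any production system: it chops text into pieces and maps each piece to a number the model can work with.

```python
# A toy illustration only: real tokenizers learn subword vocabularies of
# tens of thousands of entries; this hypothetical one knows six words.
toy_vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def toy_tokenize(text):
    """Split on whitespace and map each piece to its ID, or <unk> if unseen."""
    return [toy_vocab.get(word, toy_vocab["<unk>"]) for word in text.lower().split()]

print(toy_tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```

Everything the model “reads” or “writes” passes through a mapping like this; whatever intelligence emerges lives in what happens to those numbers afterward.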
Inside the model, mechanisms such as multi-head attention enable the system to examine many aspects of information simultaneously, much as a human reader might track several storylines at once.
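The core of that mechanism can be sketched in a few lines of NumPy. This is a simplified illustration that omits the learned projection matrices of a real transformer; it shows only the essential move of splitting a representation into heads that each attend to the sequence in parallel.

```python
import numpy as np

def attention(q, k, v):
    """Weight each value by how strongly its key matches the query (softmax of dot products)."""
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_head_attention(x, num_heads):
    """Split the embedding into heads so each can track a different aspect of the sequence."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)  # (heads, seq, d_head)
    attended = attention(heads, heads, heads)                         # each head attends independently
    return attended.transpose(1, 0, 2).reshape(seq_len, d_model)      # recombine the heads

x = np.random.randn(6, 8)                # 6 tokens, 8-dimensional embeddings
print(multi_head_attention(x, 2).shape)  # (6, 8): same shape, now mixed with context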
Reinforcement learning, pioneered by Richard Sutton, a professor of computing science at the University of Alberta, and Andrew Barto, Professor Emeritus at the University of Massachusetts, mimics human trial-and-error learning. The AI develops “value functions” that predict the long-term rewards of its actions. Together, these technologies enable machines to recognize patterns, make predictions and generate text that feels strikingly human — yet beneath this technical progress lies the very divide that cuts to the heart of how intelligence itself is defined.
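Sutton and Barto’s idea can also be shown in miniature. The hypothetical sketch below uses tabular temporal-difference learning on a five-state chain in which only the final transition pays a reward; the “value function” gradually learns how much long-term reward each state predicts.

```python
# A minimal sketch of a value function learned by trial and error (temporal-difference updates).
# Hypothetical setup: five states in a chain; reaching the last state pays a reward of 1.
num_states, alpha, gamma = 5, 0.1, 0.9  # states, learning rate, discount factor
values = [0.0] * num_states             # predicted long-term reward for each state

for episode in range(500):
    state = 0
    while state < num_states - 1:
        next_state = state + 1
        reward = 1.0 if next_state == num_states - 1 else 0.0
        # Nudge the prediction toward the reward plus the discounted value of what comes next.
        values[state] += alpha * (reward + gamma * values[next_state] - values[state])
        state = next_state

print([round(v, 2) for v in values])  # roughly [0.73, 0.81, 0.9, 1.0, 0.0]: earlier states discount the future
```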
Yet at scale, that simple process begins to yield emergent behavior — reasoning, problem-solving, even flashes of creativity that surprise their creators. The result is something that looks, sounds and increasingly acts intelligent — even if no one can explain exactly why.
That opacity worries not just philosophers, but engineers. The “black box problem” — our inability to interpret how neural networks make decisions — has turned into a scientific and safety concern. If we can’t explain a model’s reasoning, can we trust it in critical systems like healthcare or defense?
Companies like Anthropic are trying to address that with “constitutional AI,” embedding human-written principles into model training to guide behavior. Others, like OpenAI, are experimenting with internal oversight teams and adversarial testing to catch dangerous or misleading outputs. But no approach yet offers real transparency. We’re effectively steering a ship whose navigation system we don’t fully understand. “We need governance frameworks that evolve as quickly as AI itself,” says Felix Cheung, Founding Chairman of RegTech Association of Hong Kong (RTAHK). “Technical safeguards alone aren't enough — transparent monitoring and clear accountability must become industry standards.”
Meanwhile, the commercial race is accelerating. Venture capital is flowing into AI startups at record speed. OpenAI’s valuation reportedly exceeds US$150 billion; Anthropic, backed by Amazon and Google, isn’t far behind. The bet is simple: that generative AI will become as indispensable to modern life as the internet itself.
And yet, not everyone is buying into that vision. The open-source movement — championed by Meta with its Llama models, France’s Mistral, and a fast-growing constellation of independent labs — argues that democratizing access is the only way to ensure both innovation and accountability. If powerful AI remains locked behind corporate walls, they warn, progress will narrow to the priorities of a few firms.
But openness cuts both ways. Publicly available models are harder to police, and their misuse — from disinformation to deepfakes — grows as easily as innovation does. Regulators are scrambling to balance risk and reward. The European Union’s AI Act is the world’s most comprehensive attempt at governance, but even it struggles to define where to draw the line between creativity and control.
This isn’t just a scientific argument anymore. It’s a geopolitical one. The United States, China, and Europe are each pursuing distinct AI strategies: Washington betting on private-sector dominance, Beijing on state-led scaling, Brussels on regulation and ethics. Behind the headlines, compute power is becoming a form of soft power. Whoever controls access to the chips, data, and infrastructure that fuel AI will control much of the digital economy.
That reality is forcing some uncomfortable math. Training frontier models already consumes energy on the scale of small nations. Data centers now rise next to hydroelectric dams and nuclear plants. Efficiency — once a technical concern — has become an economic and environmental one. As demand grows, so does the incentive to build smaller, smarter, more efficient systems. The industry’s next leap may not come from scale at all, but from constraint.
For all the noise, one truth keeps resurfacing: large language models are tools, not oracles. Their intelligence — if we can call it that — is borrowed from ours. They are trained on human text, human logic, human error. Every time a model surprises us with insight, it is, in a sense, holding up a mirror to collective intelligence.
That’s what makes this schism so fascinating. It’s not really about machines. It’s about what we believe intelligence is — pattern or principle, simulation or soul. For believers like Bengio, intelligence may simply be prediction done right. For critics like Marcus, that’s a category mistake: true understanding requires grounding in the real world, something no model trained on text can ever achieve.
The public, meanwhile, is less interested in metaphysics. To most users, these systems work — and that’s enough. They write emails, plan trips, debug spreadsheets, summarize meetings. Whether they “understand” or not feels academic. But for the scientists, that distinction remains critical, because it determines where AI might ultimately lead.
Even inside the companies building them, that tension shows. OpenAI’s Sam Altman has hinted that scaling can’t continue forever. At some point, new architectures — possibly combining logic, memory, or embodied data — will be needed. DeepMind’s Demis Hassabis says something similar: intelligence, he argues, will come not just from prediction, but from interaction with the world.
It’s possible both are right. The future of AI may belong to hybrid systems — part statistical, part symbolic — that can reason across multiple modes of information: text, image, sound, action. The line between model and agent is already blurring, as LLMs gain the ability to browse the web, run code, and call external tools. The next generation won’t just answer questions; it will perform tasks.
For startups, the opportunity — and the risk — lies in that transition. The most valuable companies in this new era may not be those that build the biggest models, but those that build useful ones: specialized systems tuned for medicine, law, logistics, or finance, where reliability matters more than raw capability. The winners will understand that scale is a means, not an end.
And for society, the challenge is to decide what kind of intelligence we want to live with. If we treat these models as collaborators — imperfect, explainable, constrained — they could amplify human potential on a scale unseen since the printing press. If we chase the illusion of autonomy, they could just as easily entrench bias, confusion, and dependency.
The debate over large language models will not end in a lab. It will play out in courts, classrooms, boardrooms, and living rooms — anywhere humans and machines learn to share the same cognitive space. Whether we call that cooperation or competition will depend on how we design, deploy, and, ultimately, define these tools.
Perhaps Hinton’s offhand remark about being psychoanalyzed by his own creation wasn’t just a joke. It was an omen. AI is no longer something we use; it’s something we’re reflected in. Every model trained on our words becomes a record of who we are — our reasoning, our prejudices, our brilliance, our contradictions. The schism among scientists mirrors the one within ourselves: fascination colliding with fear, ambition tempered by doubt.
In the end, the question isn’t whether LLMs are the future. It’s whether we are ready for a future built in their image.
Keep Reading
The new workplace literacy is here, and it’s digital.
Updated
November 27, 2025 3:26 PM
A group of office workers attending a presentation in a meeting room. PHOTO: UNSPLASH
The modern workplace is powered by technology, and success increasingly depends on how well employees can use it. Digital fluency—the ability to confidently and effectively use digital tools to achieve goals—is no longer a bonus skill; it’s a necessity. It goes beyond basic technical know-how, encompassing the ability to adapt to new technologies, integrate them into workflows, and use them to solve problems and drive innovation.
Yet, despite its importance, many organizations struggle to build digital fluency across their teams. Barriers such as limited access to technology, outdated training programs, resistance to change, and gaps in leadership support often stand in the way. These challenges can leave businesses lagging behind competitors who are better prepared to leverage the potential of the digital age.
Understanding and addressing these barriers is critical for creating a workforce that thrives in today’s fast-changing world. Below, we explore the key obstacles to digital fluency and provide actionable strategies to overcome them.
One of the challenges to digital fluency is the gap between the technology available and employees’ ability to use it effectively. Technology evolves rapidly, but many organizations lag behind in providing relevant, up-to-date training. Employees may receive a one-time introduction to new tools but lack ongoing opportunities to build confidence or master advanced features.
This issue is compounded by the fact that training often takes a one-size-fits-all approach, failing to address the diverse skill levels within a workforce. For example, while some employees may only need a basic overview of a tool, others may require in-depth knowledge to integrate it into their roles effectively. Without tailored and continuous training, even the most advanced tools can go underutilized, leading to frustration and resistance.
Even with proper training, employees may hesitate to adopt new technologies. Resistance to change is a deeply rooted challenge that goes beyond technical skills—it’s tied to fear of failure, skepticism about the value of new tools, or discomfort with disrupting existing workflows.
For example, employees who have been using the same systems for years may feel overwhelmed by the idea of learning something new. They may worry that new technologies will complicate their work rather than simplify it. In some cases, they may even feel their jobs are threatened by automation or digital tools.
This resistance isn’t limited to employees—it can also exist at the leadership level. If leaders themselves are hesitant to adopt new approaches, it creates a top-down culture that stifles innovation.
The lack of organizational alignment is another significant barrier. Digital tools often roll out unevenly across departments, leading to fragmented adoption. For instance, one team might embrace a new project management tool, while another continues to rely on spreadsheets. This inconsistency creates silos, disrupts collaboration, and makes it harder for organizations to achieve the full benefits of digital transformation.
Generational differences can further exacerbate this issue. Younger employees, who are often more comfortable with technology, may adopt new tools quickly, while older employees may struggle to keep up. This divide can lead to frustration on both sides and uneven levels of digital proficiency across the organization.
Leadership plays a critical role in driving digital transformation, but in many organizations, this support is inconsistent or absent. Some leaders fail to prioritize digital fluency as a strategic initiative, while others may not fully understand the tools themselves, making it difficult to set an example for their teams.
Without clear direction from leadership, employees may not see digital fluency as a priority. This lack of alignment can lead to half-hearted adoption, where technology is seen as an optional add-on rather than a fundamental part of the organization’s success.
These barriers don’t exist in isolation—they are deeply interconnected. For example, outdated training practices can fuel resistance to change, while fragmented adoption across teams is often a symptom of weak leadership support. Together, they create a cycle that limits an organization’s ability to adapt, innovate, and thrive in a fast-changing world.
Addressing these challenges is critical for building a workforce that is confident, capable, and ready to embrace the future. By breaking down these barriers, organizations can unlock the full potential of their teams and position themselves for long-term success.
Training should not be an afterthought or a one-time event—it must be a continuous and personalized process. Employees come with diverse skill levels, and a one-size-fits-all training program often fails to address these differences. Organizations should adopt a multi-pronged approach to training, offering workshops for hands-on learners, e-learning modules for self-paced learning, and one-on-one coaching for employees who need more targeted support.
For example, companies like AT&T have invested heavily in workforce retraining initiatives, providing employees with a structured path to build digital skills over time. These programs not only improve employee confidence but also help organizations fully leverage their digital tools.
Moreover, training programs should evolve to keep up with technological advancements. Employees need regular refreshers to stay current, as even the most advanced tools can become obsolete or underutilized without proper guidance. By making training a core part of the organizational culture, companies can empower employees to adapt to new tools with ease and confidence.
Resistance to change is a major barrier to digital fluency, often fueled by employees’ fear of failure or inefficiency when using new tools. To address this, organizations should foster a culture of experimentation, offering low-stakes “sandbox environments” where employees can practice with new tools and processes without affecting real workflows. When mistakes carry no penalty, employees become more comfortable with technology over time.
Recognizing and rewarding employees who embrace new tools or suggest innovative ways to use them reinforces this mindset. Early adopters can serve as champions for digital fluency, encouraging others to engage with and explore new technologies.
By normalizing experimentation, organizations can shift employees from resisting change to confidently adopting digital tools as opportunities for growth.
To avoid fragmented adoption, organizations must ensure that digital tools are implemented consistently across teams. This requires clear communication, cross-departmental collaboration, and alignment on how tools will be used to achieve shared goals.
Mentorship programs can help bridge generational divides, pairing younger employees with older colleagues to share knowledge and skills.
Leaders play a pivotal role in overcoming barriers to digital fluency. They don’t just drive the adoption of digital tools—they shape how employees perceive and engage with them. When leaders actively embrace technology, they demonstrate its value and set a standard for others to follow.
Leadership involvement must go beyond symbolic gestures. Employees are far more likely to adopt new tools or processes when they see their leaders using them effectively in day-to-day work. For example, a manager who uses a team collaboration platform to streamline communications or leverages data visualization tools in meetings signals the practical benefits of these technologies. This hands-on engagement builds trust and encourages others to follow suit.
Equally important is leaders’ ability to connect digital tools to broader organizational goals. Employees need to understand how these tools contribute to solving real problems, improving workflows, or driving innovation. When leaders clearly communicate the "why" behind digital initiatives, it helps employees see digital fluency as a shared mission rather than an abstract directive.
Digital fluency isn’t just about mastering tools—it’s about creating a workplace where adaptability, curiosity, and collaboration thrive. It’s about empowering employees to see technology not as a hurdle but as an opportunity to innovate, grow, and solve problems in new ways.
At its heart, digital fluency is a shared effort, requiring leaders who inspire, teams that align, and cultures that embrace experimentation and learning. When organizations commit to breaking down barriers—whether through better training, stronger leadership, or fostering collaboration—they unlock the full potential of their people and their tools.
The future belongs to organizations that don’t just adopt technology but embed it into their culture, enabling their teams to thrive in an ever-changing digital landscape. The question now is not whether we can keep up with change, but how far we can go when we embrace it fully.