Artificial Intelligence

Are LLMs the Future? The Great AI Schism Among Scientists

Brains, bots and the future: Who’s really in control?

Updated

January 8, 2026 6:32 PM

Adoration and disdain: the polarised reactions to generative AI. ILLUSTRATION: YORKE YU

When British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton joked that his ex-girlfriend once used ChatGPT to help her break up with him, he wasn’t exaggerating.  The father of deep learning was pointing to something stranger: how machines built to mimic language have begun to mimic thought — and how even their creators no longer agree on what that means.

In that one quip — part humor, part unease — Hinton captured the paradox at the center of the world’s most important scientific divide. Artificial intelligence has moved beyond code and circuits into the realm of psychology, economics and even philosophy. Yet among those who know it best, the question has turned unexpectedly existential: what, if anything, do large language models truly understand?  

Across the world’s AI labs, that question has split the community into two camps — believers and skeptics, prophets and heretics. One side sees systems like ChatGPT, Claude, and Gemini as the dawn of a new cognitive age. The other insists they’re clever parrots with no grasp of meaning, destined to plateau as soon as the data runs out. Between them stands a trillion-dollar industry built on both conviction and uncertainty.

Hinton, who spent a decade at Google refining the very neural networks that now power generative AI, has lately sounded like a man haunted by his own invention. Speaking to Scott Pelley in a CBS 60 Minutes interview that aired on October 8, 2023, Hinton said, “I think we're moving into a period when for the first time ever we may have things more intelligent than us.” He said it not with triumph, but with visible worry.

Yoshua Bengio, his longtime collaborator, sees it differently. Speaking at the All In conference in Montreal, he told TIME that future AI systems “will have stronger and stronger reasoning abilities, more and more knowledge,” while cautioning about ensuring they “act according to our norms.” And then there’s Gary Marcus, the cognitive scientist and enduring critic, who dismisses the hype outright: “These systems don’t understand the world. They just predict the next word.”

It’s a rare moment in science when three pioneers of the same field disagree so completely — not about ethics or funding, but about the very nature of progress. And yet that disagreement now shapes how the future of AI will unfold.

In the span of just two years, large language models have gone from research curiosities to corporate cornerstones. Banks use them to summarize reports. Lawyers draft contracts with them. Pharmaceutical firms explore protein structures through them. Silicon Valley is betting that scaling these models — training them on ever-larger datasets with ever-denser computers — will eventually yield something approaching reasoning, maybe even intelligence.

It’s the “bigger is smarter” philosophy, and it has worked — so far. OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini have grown exponentially in capability. They can write code, explain math, outline business plans, even simulate empathy. For most users, the line between prediction and understanding has already blurred beyond meaning. Kelvin So, who is now conducting AI research at PolyU SPEED, commented, “AI scientists today are inclined to believe we have learnt a bitter lesson in the advancement from the traditional AI to the current LLM paradigm. That said, scaling law, instead of human-crafted complicated rules, is the ultimate law governing AI.”

But inside the labs, cracks are showing. Scaling models has become staggeringly expensive, and the returns are diminishing. A growing number of researchers suspect that raw scale alone cannot unlock true comprehension — that these systems are learning syntax, not semantics; imitation, not insight.

That belief fuels a quiet counter-revolution. Instead of simply piling on data and GPUs, some researchers are pursuing hybrid intelligence — systems that combine statistical learning with symbolic reasoning, causal inference, or embodied interaction with the physical world. The idea is that intelligence requires grounding — an understanding of cause, consequence, and context that no amount of text prediction can supply.

Yet the results speak for themselves. In practice, language models are already transforming industries faster than regulation can keep up. Marketing departments run on them. Customer support, logistics and finance teams depend on them. Even scientists now use them to generate hypotheses, debug code and summarize literature. For every cautionary voice, there are a dozen entrepreneurs who see this technology as a force reshaping every industry. That gap — between what these models actually are and what we hope they might become — defines this moment. It’s a time of awe and unease, where progress races ahead even as understanding lags behind.

Part of the confusion stems from how these systems work. A large language model doesn’t store facts like a database. It predicts what word is most likely to come next in a sequence, based on patterns in vast amounts of text. Behind this seemingly simple prediction mechanism lies a sophisticated architecture. The tokenizer is one of the key innovations behind modern language models. It takes text and chops it into smaller, manageable pieces the AI can understand. These pieces are then turned into numbers, giving the model a way to “read” human language. By doing this, the system can spot context and relationships between words — the building blocks of comprehension.  
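
In code, the idea is simple enough to sketch. The toy tokenizer below is only an illustration of the principle: production models use learned subword schemes such as byte-pair encoding, and the vocabulary, function names and the -1 fallback for unknown words here are invented for the example.

```python
# Toy word-level tokenizer: chop text into pieces, then map each
# piece to an integer ID -- the numeric form a model can "read".
def build_vocab(corpus):
    vocab = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # assign the next free ID
    return vocab

def tokenize(text, vocab):
    # Words outside the vocabulary fall back to -1 in this sketch;
    # real tokenizers break them into known subword pieces instead.
    return [vocab.get(word, -1) for word in text.lower().split()]

vocab = build_vocab("the cat sat on the mat")
print(tokenize("the mat sat", vocab))  # → [0, 4, 2]
```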

Inside the model, mechanisms such as multi-head attention enable the system to examine many aspects of information simultaneously, much as a human reader might track several storylines at once.
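
A rough Python sketch shows the mechanism. This is not any particular model's implementation: real attention layers add learned projection matrices for queries, keys and values, omitted here for brevity, and the shapes are chosen purely for illustration.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each position scores every other
    # position by similarity, softmaxes the scores, then takes a
    # weighted mix of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def multi_head_attention(X, num_heads):
    # Split the feature dimension into independent heads, attend in
    # each, then concatenate -- so each head can track a different
    # "storyline" in the sequence.
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        heads.append(attention(X[:, sl], X[:, sl], X[:, sl]))
    return np.concatenate(heads, axis=-1)

X = np.random.randn(5, 8)           # 5 tokens, 8 features each
out = multi_head_attention(X, 2)    # 2 heads of width 4
print(out.shape)                    # → (5, 8)
```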

Reinforcement learning, pioneered by Richard Sutton, a professor of computing science at the University of Alberta, and Andrew Barto, Professor Emeritus at the University of Massachusetts, mimics human trial-and-error learning. The AI develops “value functions” that predict the long-term rewards of its actions.  Together, these technologies enable machines to recognize patterns, make predictions and generate text that feels strikingly human — yet beneath this technical progress lies the very divide that cuts to the heart of how intelligence itself is defined.
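
The “value function” idea can be sketched with the simplest form of the rule, tabular TD(0). The states, rewards and constants below are invented for illustration; real systems learn values over vast state spaces with neural networks.

```python
# Tabular TD(0): refine each state's estimated long-term value from
# experienced rewards -- learning by trial and error.
def td0(episodes, alpha=0.1, gamma=0.9):
    V = {}  # state -> estimated long-term value
    for episode in episodes:
        for state, reward, next_state in episode:
            v_next = V.get(next_state, 0.0) if next_state is not None else 0.0
            target = reward + gamma * v_next  # bootstrapped return
            V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
    return V

# Toy episode: a neutral first step, then a step that earns reward 1.
episode = [("start", 0.0, "mid"), ("mid", 1.0, None)]
V = td0([episode] * 200)
print(round(V["mid"], 2), round(V["start"], 2))  # → 1.0 0.9
```

After repeated experience, the value of “start” converges toward the discounted reward it leads to (0.9 × 1), which is exactly the long-term-reward prediction the article describes.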

Yet at scale, that simple process begins to yield emergent behavior — reasoning, problem-solving, even flashes of creativity that surprise their creators. The result is something that looks, sounds and increasingly acts intelligent — even if no one can explain exactly why.

That opacity worries not just philosophers, but engineers. The “black box problem” — our inability to interpret how neural networks make decisions — has turned into a scientific and safety concern. If we can’t explain a model’s reasoning, can we trust it in critical systems like healthcare or defense?

Companies like Anthropic are trying to address that with “constitutional AI,” embedding human-written principles into model training to guide behavior. Others, like OpenAI, are experimenting with internal oversight teams and adversarial testing to catch dangerous or misleading outputs. But no approach yet offers real transparency. We’re effectively steering a ship whose navigation system we don’t fully understand.  “We need governance frameworks that evolve as quickly as AI itself,” says Felix Cheung, Founding Chairman of RegTech Association of Hong Kong (RTAHK). “Technical safeguards alone aren't enough — transparent monitoring and clear accountability must become industry standards.”

Meanwhile, the commercial race is accelerating. Venture capital is flowing into AI startups at record speed. OpenAI’s valuation reportedly exceeds US$150 billion; Anthropic, backed by Amazon and Google, isn’t far behind. The bet is simple: that generative AI will become as indispensable to modern life as the internet itself.

And yet, not everyone is buying into that vision. The open-source movement — championed by players like Meta’s Llama, Mistral in France, and a fast-growing constellation of independent labs — argues that democratizing access is the only way to ensure both innovation and accountability. If powerful AI remains locked behind corporate walls, they warn, progress will narrow to the priorities of a few firms.

But openness cuts both ways. Publicly available models are harder to police, and their misuse — from disinformation to deepfakes — grows as easily as innovation does. Regulators are scrambling to balance risk and reward. The European Union’s AI Act is the world’s most comprehensive attempt at governance, but even it struggles to define where to draw the line between creativity and control.

This isn’t just a scientific argument anymore. It’s a geopolitical one. The United States, China, and Europe are each pursuing distinct AI strategies: Washington betting on private-sector dominance, Beijing on state-led scaling, Brussels on regulation and ethics. Behind the headlines, compute power is becoming a form of soft power. Whoever controls access to the chips, data, and infrastructure that fuel AI will control much of the digital economy.  

That reality is forcing some uncomfortable math. Training frontier models already consumes energy on the scale of small nations. Data centers now rise next to hydroelectric dams and nuclear plants. Efficiency — once a technical concern — has become an economic and environmental one. As demand grows, so does the incentive to build smaller, smarter, more efficient systems. The industry’s next leap may not come from scale at all, but from constraint.

For all the noise, one truth keeps resurfacing: large language models are tools, not oracles. Their intelligence — if we can call it that — is borrowed from ours. They are trained on human text, human logic, human error. Every time a model surprises us with insight, it is, in a sense, holding up a mirror to collective intelligence.

That’s what makes this schism so fascinating. It’s not really about machines. It’s about what we believe intelligence is — pattern or principle, simulation or soul. For believers like Bengio, intelligence may simply be prediction done right. For critics like Marcus, that’s a category mistake: true understanding requires grounding in the real world, something no model trained on text can ever achieve.

The public, meanwhile, is less interested in metaphysics. To most users, these systems work — and that’s enough. They write emails, plan trips, debug spreadsheets, summarize meetings. Whether they “understand” or not feels academic. But for the scientists, that distinction remains critical, because it determines where AI might ultimately lead.

Even inside the companies building them, that tension shows. OpenAI’s Sam Altman has hinted that scaling can’t continue forever. At some point, new architectures — possibly combining logic, memory, or embodied data — will be needed. DeepMind’s Demis Hassabis says something similar: intelligence, he argues, will come not just from prediction, but from interaction with the world.

It’s possible both are right. The future of AI may belong to hybrid systems — part statistical, part symbolic — that can reason across multiple modes of information: text, image, sound, action. The line between model and agent is already blurring, as LLMs gain the ability to browse the web, run code, and call external tools. The next generation won’t just answer questions; it will perform tasks.

For startups, the opportunity — and the risk — lies in that transition. The most valuable companies in this new era may not be those that build the biggest models, but those that build useful ones: specialized systems tuned for medicine, law, logistics, or finance, where reliability matters more than raw capability. The winners will understand that scale is a means, not an end.

And for society, the challenge is to decide what kind of intelligence we want to live with. If we treat these models as collaborators — imperfect, explainable, constrained — they could amplify human potential on a scale unseen since the printing press. If we chase the illusion of autonomy, they could just as easily entrench bias, confusion, and dependency.

The debate over large language models will not end in a lab. It will play out in courts, classrooms, boardrooms, and living rooms — anywhere humans and machines learn to share the same cognitive space. Whether we call that cooperation or competition will depend on how we design, deploy, and, ultimately, define these tools.

Perhaps Hinton’s offhand remark about being psychoanalyzed by his own creation wasn’t just a joke. It was an omen. AI is no longer something we use; it’s something we’re reflected in. Every model trained on our words becomes a record of who we are — our reasoning, our prejudices, our brilliance, our contradictions. The schism among scientists mirrors the one within ourselves: fascination colliding with fear, ambition tempered by doubt.

In the end, the question isn’t whether LLMs are the future. It’s whether we are ready for a future built in their image.

Keep Reading

Scaling & Growth

Why The Body Shop Thrives in India but Struggles in the US — Lessons for Startups

From driving social change to making luxury affordable — Lessons from The Body Shop India

Updated

January 16, 2026 12:00 PM

The Body Shop's storefront. PHOTO: ADOBE STOCK

The Body Shop, known worldwide for its ethical values and cruelty-free beauty products, has had very different results in two of its major markets. In the United States, challenges such as shifting retail trends and tougher competition led to the closure of most physical stores in early 2024. Meanwhile, in India, The Body Shop has risen to become one of its top five global markets. After reaching customers in more than 1,500 Indian cities through its omnichannel network, the company now plans to double its 200-store footprint over the next three to five years.  

So what did The Body Shop do in India that proved harder to pull off in the U.S.? Below, we break down why The Body Shop struggled in the U.S., what’s driving The Body Shop India’s growth and what startup founders can learn from the contrast.

The decline of The Body Shop in the US: Reasons behind the fall

In March 2024, The Body Shop’s U.S. unit filed for Chapter 7 bankruptcy and stopped operating its roughly 50 stores. That move effectively ended its brick-and-mortar presence in the country.

A big part of the story is that the U.S. beauty market moved faster than The Body Shop did. Prestige beauty kept growing, and shoppers increasingly gravitated to retailers and brands that feel current and have a strong online presence. Paul Dodd, Chief Innovation Officer at e-commerce fulfilment partner Huboo, has pointed to The Body Shop’s slow approach to digital growth as a major factor behind its decline. With U.S. prestige beauty sales reaching about US$33.9 billion in 2024 and growing at 7% year over year, the demand is clearly there. The brands that stand out and get rewarded are the ones that match how people now discover and buy products.

The company also leaned too heavily on stores at a time when stores were getting harder to run. When foot traffic drops and rents rise, the pressure shows up quickly. Shoppers also had more places to go, including Sephora, Ulta, Amazon and direct-to-consumer sites. A similar pattern played out in Canada, where restructuring included store closures and halted e-commerce. It was another sign that North America had become an operational headache, not just a marketing challenge.

Then there’s the branding issue: its “ethical pioneer” position simply stopped being a moat in the U.S. market. Cruelty-free and vegan claims are now table stakes across many newer brands, and “clean beauty” messaging is everywhere. “Initially, the purpose-driven brand was revolutionary, so much so that competitors like Drunk Elephant have adopted a similar ethos,” says Dan Hocking, Chief Operating Officer at advertising agency TroubleMaker. “It was a concept that rightly earned success in the 80s and 90s, but The Body Shop didn’t adapt to changing consumer habits and preferences.” Meanwhile, competitors like Lush have kept people talking through stronger creator/influencer marketing, faster product cycles and more immersive in-store experiences.

Internal disruption likely made the turnaround even harder. Reporting on the U.S. bankruptcy points to instability, including the U.S. unit saying it did not have advance notice of decisions tied to the U.K. parent’s restructuring. When leadership decisions land without warning, it becomes harder to plan inventory, fund marketing and commit to a clear digital roadmap.

How The Body Shop got its game right in India  

1. Expansion into tier 2 and 3 cities

For years, India’s beauty industry focused mainly on metropolitan cities. Today, however, increasing internet penetration, rising disposable incomes, exposure to global beauty trends and an appetite for ethical, sustainable brands have fuelled demand in smaller towns. That tailwind matters because India’s beauty and personal care market is expected to reach a gross merchandise value (GMV) of US$30 billion by 2027 and is projected to grow at roughly a 10% CAGR. There’s plenty of room for both premium and “affordable luxury” players that can meet consumers where they are.

The Body Shop has leaned into this shift. Harmeet Singh, Chief Brand Officer of The Body Shop Asia South, has said the brand is expanding into Tier 2 and Tier 3 cities with a focus on central and Northeast India. Reports also point to a clear advantage here: more than 200 stores across dozens of cities, plus online reach into over 1,500 cities. That foundation makes non-metro expansion feel like the next move, not a risky leap.

2. Omni-channel retail strategy for beauty shoppers

Unlike its U.S. operation, The Body Shop India has put effort into digital and distribution. Besides its own online store, customers can find the brand on big beauty and retail platforms like Nykaa, Amazon, Flipkart, Tatacliq and Myntra. It has also built more direct routes to purchase through WhatsApp, social commerce, expert chats and live video consultations. For even faster access, it’s on quick-commerce apps like Blinkit and Swiggy.

This strategy is already showing up in the numbers. Nearly 30% of The Body Shop India’s business came from digital channels as of June 2025. Rahul Shanker, Chief Executive of The Body Shop India, has said the brand wants to lift online revenue to 45–50% of total sales by 2030.

This approach lines up with what’s happening in the market. NielsenIQ data found beauty e-commerce and quick-commerce sales in India rose 39% in value between June and November 2024, with offline growth over the same period being just 3%. The logic is simple: if the market is moving online, you want to be easy to buy online.

3. Inclusivity, accessibility and social impact

The Body Shop’s people-first approach shows up not just in its marketing, but in how it runs the business day to day. Inside the company, it has pushed gender sensitivity across teams. Of its 600 employees, 10 identify as part of the LGBTQA+ spectrum.

In stores, the brand has worked on improving accessibility. In 2024, The Body Shop India launched a Braille initiative for visually impaired customers. The programme introduced Braille category callouts in select locations so shoppers can navigate more independently.

On the sustainability side, the brand ties its message to its supply chain. An example is its long-term partnership with Plastics for Change, a Bengaluru-based social enterprise, to source “Community Fair Trade” recycled plastic for packaging. The collaboration has resulted in more predictable income, safer work and better access to social services and housing and education projects for the waste picker communities, which often include marginalized groups and women.

The same intent can also be seen in its physical retail. The Body Shop India has been converting stores into its “Activist Workshop” format, where everything is made from recycled materials, including store fixtures and interiors. By mid-2024, it had around 20 Activist Workshop stores in India.

4. Pricing that fits the Indian beauty market

In April 2025, The Body Shop India launched its “More Love for Less” campaign to make products more accessible. Through the campaign, the company lowered the prices of more than 60 best-sellers by 28–30%. The goal was to remove a clear barrier for many shoppers while maintaining the same quality.  

The company has also positioned this as a pricing reset, not a short-term discount push. It’s meant to widen the funnel, especially among younger consumers aged 18–25, where price has been a major hurdle. That matters even more as the brand expands deeper into Tier 2 and Tier 3 cities, where value is often front and centre.

5. Local marketing that feels made for India

The Body Shop India has leaned into localized marketing in a way that feels specific, not generic. In late 2024, it launched “The India Edit”, a collection inspired by native ingredients like lotus, hibiscus, pomegranate and black grape. The tagline, “Only in India, for You,” makes the intent clear: India is not a copy-paste market. This approach matters because India is one of the most competitive beauty battlegrounds right now, with ongoing entry from global beauty brands. When everyone is fighting for attention, local storytelling helps The Body Shop stand out and feel closer to the customer.  

Lessons for startup leaders from The Body Shop India  
  • A global playbook rarely works as-is. Brands grow faster when they understand local buying habits, price sensitivity and culture. The Body Shop India’s product customization, pricing moves and city expansion strategies have shown what that looks like in practice.  
  • Omnichannel strategy matters more than ever in today’s market. Combining retail stores with a strong digital presence makes a brand easier to find and easier to buy, even when shopping habits change.  
  • Tier 2 and Tier 3 cities often hold untapped potential. Competition is often lower, demand is rising and the brands that arrive early can build loyalty faster.  
  • Local supply chains can also help. They can cut costs, speed up delivery and fit the preference many shoppers have for locally relevant products.
  • Marketing needs to match the market. Campaigns that reflect local values and moments build stronger loyalty and help brands stand out in crowded categories like beauty and personal care.

Wrapping up

The Body Shop’s story in the U.S. and India shows how differently a global beauty brand can perform depending on local strategy. In the U.S., it ran into a tough mix of fast-changing consumer habits, heavy competition and a liquidation process that left little room to rebuild. In India, the brand is riding big tailwinds in beauty retail growth, plus the shift to e-commerce and quick commerce. It has also put real effort into localization, pricing and omnichannel distribution.  

If you’re trying to scale a consumer brand, there’s a clear takeaway here. Understand how your market shops, build strong digital distribution and make the brand feel local. The Body Shop India’s playbook is a useful example of how to do it.