A global survey shows robot anxiety drops when people encounter robots in real life
Updated
March 13, 2026 2:25 PM

Ameca the humanoid robot, featuring a grey rubber face. PHOTO: ADOBE STOCK
Robots are often assumed to make people uneasy wherever they appear. But a new global study suggests something more nuanced. Robot anxiety tends to be highest in places where people rarely see robots in real life. Where robots are more visible, attitudes are often far more positive. That insight comes from research by Hexagon AB, which surveyed 18,000 participants across nine major markets. The study explored how adults and children think about robots and how those views change depending on everyday exposure.
In the United Kingdom, anxiety about robots is highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.
British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.
The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.
Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.
In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments. A similar share points to productivity gains as another benefit. Another finding challenges a common assumption about public fears. Job loss is often described as the biggest concern surrounding robotics. But the study suggests security risk worries people more.
Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work register at the same level.
For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.
A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.
A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.
Finally, the study highlights another important factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.
The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.
That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.
As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.
Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.
In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.
A step forward that could influence how smart contracts are designed and verified.
Updated
January 8, 2026 6:32 PM

ChainGPT's robot mascot. IMAGE: CHAINGPT
A new collaboration between ChainGPT, an AI company specialising in blockchain development tools, and Secret Network, a privacy-focused blockchain platform, is redefining how developers can safely build smart contracts with artificial intelligence. Together, they’ve achieved a major industry first: an AI model trained exclusively to write and audit Solidity code is now running inside a Trusted Execution Environment (TEE). For the blockchain ecosystem, this marks a turning point in how AI, privacy and on-chain development can work together.
For years, smart-contract developers have faced a trade-off. AI assistants could speed up coding and security reviews, but only if developers uploaded their most sensitive source code to external servers. That meant exposing intellectual property, confidential logic and even potential vulnerabilities. In an industry where trust is everything, this risk held many teams back from using AI at all.
ChainGPT’s Solidity-LLM aims to solve that problem. It is a specialised large language model trained on over 650,000 curated Solidity contracts, giving it a deep understanding of how real smart contracts are structured, optimised and secured. And now, by running inside SecretVM, the Confidential Virtual Machine that powers Secret Network’s encrypted compute layer, the model can assist developers without ever revealing their code to outside parties.
“Confidential computing is no longer an abstract concept,” said Luke Bowman, COO of the Secret Network Foundation. “We've shown that you can run a complex AI model, purpose-built for Solidity, inside a fully encrypted environment and that every inference can be verified on-chain. This is a real milestone for both privacy and decentralised infrastructure.”
SecretVM makes this workflow possible by using hardware-backed encryption to protect all data while computations take place. Developers don’t interact with the underlying hardware or cryptography. Instead, they simply work inside a private, sealed environment where their code stays invisible to everyone except them—even node operators. For the first time, developers can generate, test and analyse smart contracts with AI while keeping every detail confidential.
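To see the shape of that workflow, the flow can be sketched in miniature. Everything below is a toy illustration, not Secret Network's or SecretVM's actual API: the enclave class, the measurement check and the XOR "cipher" are hypothetical stand-ins for hardware remote attestation and authenticated encryption (a real deployment would use a signed hardware quote and a scheme such as AES-GCM over an attested channel). The point it shows is the trust model: the client first verifies what the enclave is running, then sends only ciphertext, and only the analysis result ever leaves.

```python
import hashlib
import hmac
import os

# Hypothetical measurement the client expects the enclave to report.
EXPECTED_MEASUREMENT = hashlib.sha256(b"solidity-llm-v1").hexdigest()

def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher' standing in for real authenticated encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyEnclave:
    """Stand-in for a confidential VM: source code enters encrypted,
    and only the audit findings leave."""

    def __init__(self):
        self._measurement = hashlib.sha256(b"solidity-llm-v1").hexdigest()
        self._key = os.urandom(32)  # session key, sealed inside the enclave

    def attest(self) -> str:
        # Real attestation returns a hardware-signed quote; this is just a hash.
        return self._measurement

    def session_key(self) -> bytes:
        # Real flow: a key exchange cryptographically bound to the quote.
        return self._key

    def analyze(self, ciphertext: bytes, tag: bytes) -> str:
        # Check integrity, decrypt inside the enclave, return only findings.
        expected = hmac.new(self._key, ciphertext, "sha256").digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("tampered payload")
        source = xor(ciphertext, self._key)
        # Trivial stand-in for the model's audit pass.
        return "reentrancy risk" if b"call{value:" in source else "no findings"

# Client side: verify the enclave runs the expected model, then submit code.
enclave = ToyEnclave()
assert enclave.attest() == EXPECTED_MEASUREMENT  # "is this the model I trust?"

contract = b'function withdraw() { msg.sender.call{value: bal}(""); }'
key = enclave.session_key()
ct = xor(contract, key)
tag = hmac.new(key, ct, "sha256").digest()
print(enclave.analyze(ct, tag))  # plaintext source never left the enclave
```

The essential property is visible even in the toy version: the node operator handles only `ct` and `tag`, never `contract`, and the only value returned is the finding itself.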
This shift opens new possibilities for the broader blockchain community. Developers gain a private coding partner that can streamline contract logic or catch vulnerabilities without risking leaks. Auditors can rely on AI-assisted analysis while keeping sensitive audit material protected. Enterprises working in finance, healthcare or governance finally have a path to adopt AI-driven blockchain automation without raising compliance concerns. Even decentralised organisations can run smart-contract agents that make decisions privately, without exposing internal logic on a public chain.
The system also supports secure model training and fine-tuning on encrypted datasets. This enables collaborative AI development without forcing anyone to share raw data—a meaningful step toward decentralised and privacy-preserving AI at scale.
By combining specialised AI with confidential computing, ChainGPT and Secret Network are shifting the trust model of on-chain development. Instead of relying on centralised cloud AI services, developers now have a verifiable, encrypted environment where they keep full control of their code, their data and their workflow. It’s a practical solution to one of blockchain’s biggest challenges: using powerful AI tools without sacrificing privacy.
As the technology evolves, the roadmap includes confidential model fine-tuning, multi-agent AI systems and cross-chain use cases. But the core advancement is already clear: developers now have a way to use AI for smart contract development that is fast, private and verifiable—without compromising the security standards that decentralised systems rely on.