Artificial Intelligence

SK Telecom Unveils A.X K1: Why Korea’s First 500B-Scale Sovereign AI Model Matters

How Korea is trying to take control of its AI future.

Updated

January 13, 2026 10:56 AM

SK Telecom Headquarters in Seoul, South Korea. PHOTO: ADOBE STOCK

SK Telecom, South Korea’s largest mobile operator, has unveiled A.X K1, a hyperscale artificial intelligence model with 519 billion parameters. The model sits at the center of a government-backed effort to build advanced AI systems and domestic AI infrastructure within Korea. This comes at a time when companies in the United States and China largely dominate the development of the most powerful large language models.

Rather than framing A.X K1 as just another large language model, SK Telecom is positioning it as part of a broader push to build sovereign AI capacity from the ground up. The model is being developed as part of the Korean government’s Sovereign AI Foundation Model project, which aims to ensure that core AI systems are built, trained and operated within the country. In simple terms, the initiative focuses on reducing reliance on foreign AI platforms and cloud-based AI infrastructure, while giving Korea more control over how artificial intelligence is developed and deployed at scale.

One of the gaps this approach is trying to address is how AI knowledge flows across a national ecosystem. Today, the most powerful AI foundation models are often closed, expensive and concentrated within a small number of global technology companies. A.X K1 is designed to function as a “teacher model,” meaning it can transfer its capabilities to smaller, more specialized AI systems. This allows developers, enterprises and public institutions to build tailored AI tools without starting from scratch or depending entirely on overseas AI providers.
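The "teacher model" idea is essentially knowledge distillation: a smaller student model is trained to match the large model's softened output distribution rather than only hard labels. The sketch below is a minimal, generic illustration of that loss, in plain Python; it is not SK Telecom's actual training pipeline, and all names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's relative preferences.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student): how far the student's soft predictions
    # are from the teacher's. Minimizing this transfers the teacher's
    # knowledge of which answers are "almost right" to the student.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))           # 0.0 (perfect match)
print(distillation_loss(teacher, [0.2, 1.0, 4.0]))   # positive (mismatch)
```

In practice this loss is combined with a standard supervised loss, but the core mechanism is the same: the student inherits the teacher's behavior without needing the teacher's scale.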

That distinction matters because most real-world applications of artificial intelligence do not require massive models operating independently. They require focused, reliable AI systems designed for specific use cases such as customer service, enterprise search, manufacturing automation or mobility. By anchoring those systems to a large, domestically developed foundation model, SK Telecom and its partners are aiming to create a more resilient and self-sustaining AI ecosystem.

The effort also reflects a shift in how AI is being positioned for everyday use. SK Telecom plans to connect A.X K1 to services that already reach millions of users, including its AI assistant platform A., which operates across phone calls, messaging, web services and mobile applications. The broader goal is to make advanced AI feel less like a distant research asset and more like an embedded digital infrastructure that supports daily interactions.

This approach extends beyond consumer-facing services. Members of the SKT consortium are testing how the hyperscale AI model can support industrial and enterprise applications, including manufacturing systems, game development, robotics and autonomous technologies. The underlying logic is that national competitiveness in artificial intelligence now depends not only on model performance, but on whether those models can be deployed, adapted and validated in real-world environments.

There is also a hardware dimension to the project. Operating an AI model at the 500-billion-parameter scale places heavy demands on computing infrastructure, particularly memory performance and communication between processors. A.X K1 is being used to test and validate Korea’s semiconductor and AI chip capabilities under real workloads, linking large-scale AI software development directly to domestic semiconductor innovation.

The initiative brings together technology companies, universities and research institutions, including Krafton, KAIST and Seoul National University. Each contributes specialized expertise ranging from data validation and multimodal AI research to system scalability. More than 20 institutions have already expressed interest in testing and deploying the model, reinforcing the idea that A.X K1 is being treated as shared national AI infrastructure rather than a closed commercial product.

Looking ahead, SK Telecom plans to release A.X K1 as open-source AI software, alongside APIs and portions of the training data. If fully implemented, the move could lower barriers for developers, startups and researchers across Korea’s AI ecosystem, enabling them to build on top of a large-scale foundation model without incurring the cost and complexity of developing one independently.

Keep Reading

Artificial Intelligence

Neuron7’s Neuro Brings a New Kind of Intelligence — One That Refuses to Guess

Examining the shift from fast answers to verified intelligence in enterprise AI.

Updated

January 8, 2026 6:33 PM

Startup employee reviewing business metrics on an AI-powered dashboard. PHOTO: FREEPIK

Neuron7.ai, a company that builds AI systems to help service teams resolve technical issues faster, has launched Neuro, a new kind of AI agent built for environments where accuracy matters more than speed. From manufacturing floors to hospital equipment rooms, Neuro is designed for situations where a wrong answer can halt operations.

What sets Neuro apart is its focus on reliability. Instead of relying solely on large language models that often produce confident but inaccurate responses, Neuro combines deterministic AI — which draws on verified, trusted data — with autonomous reasoning for more complex cases. This hybrid design helps the system provide context-aware resolutions without inventing answers or "hallucinating," a common issue that has made many enterprises cautious about adopting agentic AI.

"Enterprise adoption of agentic AI has stalled despite massive vendor investment. Gartner predicts 40% of projects will be canceled by 2027 due to reliability concerns," said Niken Patel, CEO and Co-Founder of Neuron7. "The root cause is hallucinations. In service operations, outcomes are binary. An issue is either resolved or it is not. Probabilistic AI that is right only 70% of the time fails 30% of your customers, and that failure rate is unacceptable for mission-critical service."

That concern shaped how Neuro was built. "We use deterministic guided fixes for known issues. No guessing, no hallucinations — and reserve autonomous AI reasoning for complex scenarios. What sets Neuro apart is knowing which mode to use. While competitors race to make agents more autonomous, we're focused on making service resolution more accurate and trusted," Patel explained.
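The mode-routing pattern described here can be sketched in a few lines: answer from a verified knowledge base when a known fix exists, and fall back to open-ended model reasoning only for unrecognized issues, with the fallback clearly flagged as unverified. This is a hypothetical illustration of the general pattern; the names, data, and structure are invented for this sketch and do not reflect Neuron7's actual API or internals.

```python
# Verified fixes for known issue codes (deterministic path).
KNOWN_FIXES = {
    "ERR-MRI-042": "Recalibrate the gradient coil, then rerun the self-test.",
}

def llm_reasoning(issue: str) -> str:
    # Placeholder for an autonomous-reasoning call (e.g. an LLM agent
    # searching internal and external data for a likely fix).
    return f"[unverified] Proposed diagnostic plan for: {issue}"

def resolve(issue_code: str) -> dict:
    if issue_code in KNOWN_FIXES:
        # Known issue: return the verified fix directly. No generation,
        # so no opportunity to hallucinate.
        return {"mode": "deterministic", "answer": KNOWN_FIXES[issue_code]}
    # Unknown issue: fall back to autonomous reasoning, labeled as
    # unverified so a human or downstream check can review it.
    return {"mode": "autonomous", "answer": llm_reasoning(issue_code)}

print(resolve("ERR-MRI-042")["mode"])  # deterministic
print(resolve("ERR-XYZ-999")["mode"])  # autonomous
```

The design choice the quote emphasizes is exactly this branch: accuracy comes less from making the autonomous path smarter than from knowing when not to use it.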

At the heart of Neuro is the Smart Resolution Hub, Neuron7’s central intelligence layer that consolidates service data, knowledge bases and troubleshooting workflows into one conversational experience. This means a technician can describe a problem — say, a diagnostic error in an MRI scanner — and Neuro can instantly generate a verified, step-by-step solution. If the problem hasn’t been encountered before, it can autonomously scan through thousands of internal and external data points to identify the most likely fix, all while maintaining traceability and compliance.

Neuro’s architecture also makes it practical for real-world use. It integrates seamlessly with enterprise systems such as Salesforce, Microsoft, ServiceNow and SAP, allowing companies to embed it within their existing support operations. Early users of Neuron7’s platform have reported measurable improvements — faster resolutions, higher customer satisfaction and reduced downtime — thanks to guided intelligence that scales expert-level problem solving across teams.

The timing of Neuro’s debut feels deliberate. As organizations look to move past the hype of generative AI, trust and accountability have become the new benchmarks. AI systems that can explain their reasoning and stay within verifiable boundaries are emerging as the next phase of enterprise adoption.

"The market has figured out how to build autonomous agents," Patel said. "The unsolved problem is building accurate agents for contexts where errors have consequences. Neuro fills that gap."

Neuron7 is building a system that knows its limits — one that reasons carefully, acts responsibly and earns trust where it matters most. In a space dominated by speculation, that discipline may well redefine what “intelligent” really means in enterprise AI.