The hyperscale model is intended to support semiconductors, infrastructure, and services, underpinning Korea’s ambition to compete with the U.S. and China in AI.
SK Telecom has unveiled A.X K1, a hyperscale artificial intelligence model with 519 billion parameters, marking the first time a Korean-developed AI system has crossed the 500B-parameter threshold. The announcement places South Korea among a small group of countries — alongside the United States and China — that can build and operate AI models at this scale.
According to SK Telecom, the model is not designed as a single consumer-facing product. Instead, it is intended to serve as a foundational layer for a broader national AI ecosystem, spanning semiconductors, data centers, models, and services.
Consortium-led development model
A.X K1 was developed through a multi-party consortium that includes Krafton, 42dot, Rebellions, Liner, SelectStar, Seoul National University, and KAIST.
Each participant contributed a specific capability across the AI value chain, ranging from data construction and validation to on-device AI, large-scale model training, and domestically developed neural processing units (NPUs). SK Telecom said this structure was intended to reduce dependence on foreign AI platforms and hardware.
Why scale matters at 500B parameters
Industry analysts note that models above the 500B-parameter range tend to show more stable performance in areas such as:
- complex mathematical and logical reasoning
- multilingual understanding
- multi-step agent-based execution
These traits make such models suitable for use as national or industrial AI infrastructure, rather than for narrow or task-specific deployments.
SK Telecom said A.X K1 is designed to support workloads that require consistent performance across languages and domains, rather than being optimized for a single benchmark or application.
Positioned as a “teacher model”
Unlike many large language models that are primarily designed for direct interaction, A.X K1 is positioned as a teacher model. The system is intended to transfer knowledge to smaller and more specialized models, particularly those below the 70B-parameter range.
The consortium plans to expand research into knowledge distillation, allowing domestic developers and companies to build lighter, task-focused models without needing to train hyperscale systems themselves. SK Telecom described this approach as a way to lower entry barriers while maintaining access to advanced AI capabilities.
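To illustrate what knowledge distillation involves in practice, the sketch below shows a conventional distillation loss in a PyTorch style: the student is trained against the teacher’s softened output distribution as well as the usual ground-truth labels. This is a minimal, generic example; the function name, temperature, and weighting are assumptions for illustration and do not describe the consortium’s actual training pipeline.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft teacher targets with the ordinary hard-label loss.

    student_logits, teacher_logits: (batch, num_classes) model outputs.
    labels: ground-truth class indices for the hard-label term.
    temperature: softens both distributions so the student learns from
                 the teacher's full output distribution, not just argmax.
    alpha: weight between the distillation term and the hard-label term.
    """
    # Soft targets from the (frozen) teacher, softened by temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between student and teacher distributions,
    # with the standard T^2 scaling to keep gradient magnitudes comparable.
    kd_term = F.kl_div(log_student, soft_teacher, reduction="batchmean")
    kd_term = kd_term * (temperature ** 2)

    # Ordinary cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1 - alpha) * ce_term
```

In this setup, the large model acts purely as a source of training signal; the smaller student can then be deployed on its own, which is the pattern the consortium is pointing to for sub-70B models.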
Public access and early deployment plans
SK Telecom said accessibility will be a key test of the model’s value. A.X K1 is expected to be integrated into existing services such as A. (A-DoT), which already has more than 10 million users nationwide. The company said this would allow the public to access advanced AI through familiar channels such as calls, messaging, web interfaces, and mobile apps.
The model is also expected to support industrial use cases under SK Telecom’s AIX strategy, including manufacturing support tools, real-time character interaction in games, and future applications such as humanoid robotics.
Strategic implications for semiconductors and infrastructure
Beyond software, A.X K1 is being used as a validation platform for Korea’s semiconductor industry. Models operating at the 500B scale generate non-standard workloads that stress memory bandwidth and inter-GPU communication — two major bottlenecks in high-performance AI systems.
SK Telecom said testing at this scale is necessary to evaluate whether domestically developed AI chips and infrastructure can compete with global alternatives in real-world conditions.
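For a rough sense of why models at this scale stress memory and interconnects, the back-of-the-envelope arithmetic below shows that the weights of a 519B-parameter model alone exceed the memory of any single accelerator, forcing the model to be sharded across many devices. The 2-bytes-per-parameter precision and 80 GB device memory are illustrative assumptions, not SK Telecom's figures.

```python
# Illustrative arithmetic: weight storage for a 519B-parameter model.
params = 519e9           # reported parameter count of A.X K1
bytes_per_param = 2      # BF16/FP16 weight storage (assumption)
weight_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weight_gb:.0f} GB")          # ~1038 GB

gpu_memory_gb = 80       # a typical high-end accelerator (assumption)
min_devices = weight_gb / gpu_memory_gb
print(f"Devices needed just to hold weights: ~{min_devices:.0f}")  # ~13
```

Because the weights (and activations, optimizer state, and KV caches during serving) must be split across devices, every forward pass depends on memory bandwidth within each chip and on communication between chips, which is why such workloads are useful for validating domestic NPUs and interconnect hardware.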
Why it matters
- National AI sovereignty
Models at the 500B-parameter scale are increasingly viewed as strategic infrastructure. By developing A.X K1 domestically, Korea reduces reliance on U.S. and Chinese foundation models for sensitive, large-scale AI workloads.
- Lower barriers for domestic developers
Positioning A.X K1 as a teacher model allows smaller Korean firms and research groups to build competitive AI systems without bearing the cost of hyperscale training.
- Semiconductor and infrastructure validation
Running AI workloads at this scale provides a real-world test for Korea’s AI chips, memory, and interconnect technologies, an area where performance gaps can determine global competitiveness.
- Shift from products to platforms
The move signals a transition from isolated AI services toward AI as national infrastructure, similar to how telecom networks or cloud platforms are treated.
Risks and limitations
Compute and operating costs
Training and operating a 500B-scale model requires sustained access to massive compute resources, high-bandwidth memory, and energy. Even with domestic infrastructure, long-term cost efficiency remains uncertain, particularly as global competitors continue to scale beyond current model sizes.
Governance and control challenges
As A.X K1 is positioned as shared digital infrastructure and released in open-source form, questions remain around:
- model governance and accountability
- data disclosure boundaries
- misuse prevention and security oversight
Clear governance frameworks will be necessary if the model is widely adopted across public and private sectors.
Global competitive pressure
While A.X K1 places Korea in the hyperscale AI category, global leaders are moving rapidly toward even larger and more specialized models. Maintaining relevance will depend not only on model size, but on ecosystem adoption, continuous training, and integration into real-world services.
Open-source plans and next steps
More than 20 institutions, including SK Group affiliates and research foundations, have submitted letters of intent to participate in real-world testing and validation of A.X K1. The consortium plans to release the model as open source and provide APIs to developers, alongside partial disclosure of training data and an integrated support framework for model development.
“This marks a new inflection point in Korea’s effort to become one of the world’s top AI nations,” said Kim Tae-yoon, head of SK Telecom’s Foundation Model Office. “Our focus is not just on performance, but on making advanced AI usable across society.”