New foundation model highlights Korea’s push for sovereign, cost-efficient AI systems
LG AI Research on Tuesday unveiled K-EXAONE, a large-scale artificial intelligence model with 236 billion parameters, positioning it as a core asset in South Korea’s effort to build sovereign foundation models. The announcement underscores a broader policy and industry push to reduce reliance on foreign AI platforms while remaining competitive with leading systems from the United States and China.
The model was introduced at the country’s first sovereign AI foundation model briefing hosted by the Ministry of Science and ICT at COEX in southern Seoul. LG said the event marked a shift from discussing AI potential to presenting measurable performance and deployment readiness.
Benchmark results and global comparison
LG AI Research said internal benchmarking showed K-EXAONE outperforming similarly sized open-weight models from global players. The company reported an average score of 72.03, compared with 69.37 for Alibaba’s Qwen3 235B and 69.79 for OpenAI’s GPT-OSS 120B.
According to Artificial Analysis’ Intelligence Index, both rival models rank among the top open-weight systems worldwide. LG said surpassing them places K-EXAONE near the top tier of global foundation models, at least within its parameter range.
Efficiency as a design priority
Beyond raw scores, LG emphasized efficiency gains as a defining feature. Compared with EXAONE 4.0, released in July, K-EXAONE reduces memory usage and computational load by about 70%, while improving inference speed. These improvements address one of the main challenges facing large AI models: high operating costs.
LG said the focus on efficiency reflects practical constraints in enterprise and public-sector deployments, where compute availability and budgets remain limited despite growing demand for advanced AI.
Architectural choices behind the gains
The performance and cost improvements stem from changes to the model’s internal structure. LG said K-EXAONE adopts:
- A mixture-of-experts architecture, activating only a small subset of expert subnetworks for each input rather than the full model
- Hybrid-attention mechanisms to reduce unnecessary computation
- Optimized inference pathways to limit memory overhead
Together, these design choices allow the model to process requests faster while using fewer resources, according to the company.
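To illustrate the general idea behind mixture-of-experts routing (a generic sketch of the technique, not LG's specific implementation, with all shapes and names chosen for illustration), a minimal top-k gating step looks like this:

```python
import numpy as np

def topk_moe(x, expert_weights, gate_weights, k=2):
    """Route input x to the top-k experts chosen by a softmax gate.

    Only the k selected experts are evaluated, so per-token compute
    scales with k rather than with the total number of experts.
    """
    logits = x @ gate_weights                  # one score per expert
    probs = np.exp(logits - logits.max())      # stable softmax
    probs /= probs.sum()
    top = np.argsort(probs)[-k:]               # indices of the k best experts
    # Weighted sum over only the selected experts' outputs.
    out = sum(probs[i] * (x @ expert_weights[i]) for i in top)
    return out / probs[top].sum()              # renormalize the gate weights

# Toy setup: 4 experts over an 8-dimensional input.
rng = np.random.default_rng(0)
d, num_experts = 8, 4
x = rng.normal(size=d)
experts = rng.normal(size=(num_experts, d, d))
gate = rng.normal(size=(d, num_experts))

y = topk_moe(x, experts, gate, k=2)  # only 2 of the 4 experts run
```

With k=2 of 4 experts active, roughly half the expert compute is skipped for this token; production systems add load balancing and batching on top of this basic routing step.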
Lowering infrastructure barriers
A notable aspect of K-EXAONE is its ability to run on older GPU environments, including NVIDIA’s A100 chips, rather than requiring the latest high-end hardware. LG said this significantly lowers deployment and operating costs.
This approach could make advanced foundation models more accessible to startups and small and medium-sized enterprises, which often lack the capital to invest in cutting-edge AI infrastructure.
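A back-of-envelope calculation shows why precision and efficiency matter for deployment on 80 GB A100-class hardware. The figures below are generic arithmetic on the published 236-billion-parameter count at common weight precisions, not LG's disclosed deployment configuration:

```python
import math

A100_MEM_GIB = 80  # memory of a single NVIDIA A100 80GB card

def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """Rough GiB needed just to hold the model weights
    (ignores activations, KV cache, and framework overhead)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

def min_gpus(params_billion: float, bytes_per_param: float) -> int:
    """Lower bound on A100 80GB cards needed to fit the weights alone."""
    return math.ceil(weight_memory_gib(params_billion, bytes_per_param) / A100_MEM_GIB)

# 236B parameters at fp16, fp8, and 4-bit precision (illustrative):
for bytes_per_param in (2, 1, 0.5):
    gib = weight_memory_gib(236, bytes_per_param)
    print(f"{bytes_per_param} B/param: ~{gib:.0f} GiB, >= {min_gpus(236, bytes_per_param)} GPUs")
```

At 16-bit precision the weights alone approach 440 GiB, a minimum of six A100s, while 4-bit quantization brings that near 110 GiB; reductions in memory footprint translate directly into fewer or older GPUs per deployment.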
Development speed and future direction
LG AI Research said it completed development of K-EXAONE in about five months, highlighting a rapid development cycle amid intensifying global competition. The company said its next goal is to build models with trillions of parameters, moving closer to the scale of the most advanced global systems.
An LG official said the project was guided by a clear benchmark target from the outset. “K-EXAONE achieved the goal of delivering performance exceeding the latest global AI models,” the official said, adding that LG plans to continue advancing its proprietary technologies.
Outlook: performance, cost, and sovereignty
K-EXAONE’s unveiling illustrates how South Korea’s AI strategy is evolving. Rather than focusing only on size, LG’s approach emphasizes a balance between performance, efficiency, and deployability. Whether this combination can translate into broad adoption will depend on real-world use cases and sustained investment.
Still, the model’s debut signals that Korean companies are aiming to compete not just in applications, but at the foundation-model level—a critical layer in shaping long-term AI competitiveness and technological independence.