New infrastructure aims to cut AI training time sharply as Korea races to scale domestic AI capabilities
Naver said Thursday that it had completed construction of South Korea’s largest artificial intelligence computing cluster, built around 4,000 of Nvidia’s next-generation B200 Blackwell graphics processing units. The company said the system significantly expands its AI computing capacity and brings its infrastructure closer to the scale used by leading global technology firms.
According to Naver, the “B200 4K Cluster” delivers computing power comparable to supercomputers ranked among the world’s top 500. The cluster will be used to accelerate development of the company’s proprietary foundation models and support broader AI deployment across its services.
Why the cluster matters
Naver said internal simulations show the new infrastructure could speed up AI model development by around twelve times. Its research team reported that training a 72-billion-parameter model, which previously required about 18 months on a 2,048-GPU Nvidia A100-based system, can now be completed in roughly six weeks.
While actual training times may vary depending on workloads and configurations, the company said the improvement allows more frequent experimentation and faster iteration cycles, which are increasingly critical in large-scale AI development.
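As a rough consistency check on the reported figures (the numbers are Naver's; the month-to-week conversion below is an approximation, not part of the company's statement), converting 18 months to weeks and dividing by the new six-week run gives a speedup of about 13×, broadly in line with the "around twelve times" claim:

```python
# Sanity-check the reported speedup. Inputs are the figures Naver cited;
# the 52/12 weeks-per-month conversion is our own approximation.
months_before = 18          # reported A100-based training time
weeks_after = 6             # reported B200-based training time
weeks_before = months_before * 52 / 12   # 18 months ≈ 78 weeks
speedup = weeks_before / weeks_after     # ≈ 13x, close to the stated ~12x
print(f"Implied speedup: {speedup:.1f}x")
```

The small gap between ~13× and the stated ~12× is unsurprising, since the company's figure likely reflects averaged or simulated workloads rather than this single back-of-the-envelope conversion.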
Focus on foundation and multimodal models
The cluster is expected to play a central role in advancing Naver’s in-house foundation models, including its omni model that can process text, images, video, and audio within a single system. Naver said it plans to expand large-scale training of these multimodal models with the aim of achieving performance levels comparable to global peers, before rolling them out across its platforms and partner industries.
Industry observers note that such infrastructure is becoming a baseline requirement for companies seeking to compete in foundation model development, where access to massive, stable computing resources often determines development speed and model quality.
Engineering choices behind the performance gains
Naver attributed the performance gains to large-scale parallel processing combined with high-speed networking, as well as improvements in cooling and power management. The company said the cluster design draws on its experience operating high-performance GPU systems, including its early commercial deployment of Nvidia’s SuperPod infrastructure in 2019.
By directly designing and operating its own large clusters, Naver said it has been able to optimize system efficiency beyond standard off-the-shelf configurations.
Strategic framing around AI sovereignty
Choi Soo-yeon, CEO of Naver, framed the investment as part of a broader national and strategic effort rather than a standalone technology upgrade.
“This infrastructure secures a core asset that supports national AI competitiveness and self-reliance,” she said. “With an environment that enables rapid learning and repeated experimentation, we can apply AI technologies more flexibly across services and industrial fields.”
Her comments reflect a growing emphasis among South Korean technology firms on building domestic AI capabilities amid intensifying global competition and rising dependence on advanced computing resources.
Data center expansion and future capacity
Naver has not disclosed the specific data center hosting the new cluster, saying only that it is housed at a leased facility in Seoul to allow faster scaling. Separately, the company has announced plans to expand its data center footprint, including a major project in Sejong aimed at reaching 270 megawatts of capacity.
The expansion suggests that the B200 cluster is likely part of a longer-term infrastructure roadmap rather than a one-off deployment.
Part of a broader AI infrastructure push
The latest build adds to Naver’s wider push to scale AI computing across South Korea. In October 2025, the company said it would deploy around 60,000 Nvidia GPUs through partnerships with LG AI Research, SK Telecom, NC AI, and Upstage.
Taken together, these moves point to an intensifying race among Korean technology firms to secure large-scale computing resources, as access to advanced AI infrastructure becomes a defining factor in long-term competitiveness in foundation models and applied AI services.