
LG’s K-Exaone breaks into the world’s top 10 AI models

The homegrown artificial intelligence (AI) foundational model, K-Exaone, developed by LG AI Research, has entered the global top 10, ranking seventh. The feat means the model is the only Korean presence in a ranking dominated by models developed by companies in the United States and China.

In a statement, LG said that its latest AI model delivered the strongest performance among the five teams in a government-led AI foundational model competition, topping 10 of 13 benchmark tests with an average score of 72. Internationally, the model ranked seventh on the Intelligence Index compiled by Artificial Analysis, making it the only Korean model in the top 10. China led with six models, while the US had three. Z.AI’s GLM-4.7 took first place.

LG’s foundational model ranks seventh in global rankings

LG released its foundational model as an open-weight model on Hugging Face, where it climbed to second place on the platform’s global model trend chart, suggesting strong international interest. LG said it will offer free API access to K-Exaone through January 28, allowing developers and firms to use the model at no cost during the initial rollout period.

Epoch AI, a US-based nonprofit, also recognized the model, adding it to its list of notable AI models. LG AI Research now has five models on the list, the most of any Korean company. “We established the development plan according to the time and infrastructure we were given, and we developed the first-phase K-Exaone using about half the data we have,” said Lee Jin-sik, head of Exaone Lab at LG AI Research.

According to LG, the model is the product of five years of in-house research and signals Korea’s entry into the global race for frontier-class AI systems. The LG division said that instead of relying on scale alone, it redesigned the architecture to boost performance while reducing training and operating costs. K-Exaone uses a mixture-of-experts (MoE) architecture with 236 billion parameters, of which about 23 billion are activated per inference.
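The total-versus-active parameter split of an MoE layer can be sketched as follows. This is a minimal toy illustration of the general technique, not K-Exaone’s actual configuration: the expert count, dimensions, and top-k value here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: many experts exist, but a router activates only top_k per token.
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    # Router scores -> pick the top_k experts for this token.
    scores = x @ router
    chosen = np.argsort(scores)[-top_k:]
    weights = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    # Only the chosen experts run, so compute scales with top_k, not n_experts.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_forward(rng.standard_normal(d_model))
total_params = n_experts * d_model * d_model   # parameters stored
active_params = top_k * d_model * d_model      # parameters used per token
print(total_params, active_params)  # 2048 512
```

This is why an MoE model can hold 236 billion parameters of capacity while each inference only pays the compute cost of roughly 23 billion.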

The K-Exaone model beats other models in several parameters

The model’s core technology, hybrid attention, enhances its ability to focus on important information during data processing while cutting memory requirements and computational load by 70% compared with previous models. The tokenizer was also upgraded, with its training vocabulary expanded to 150,000 words. In addition, it encodes frequently used word combinations as single tokens, improving document-processing speed by a factor of 1.3.
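The tokenizer effect described above can be sketched with a toy example: adding a frequent word combination to the vocabulary as a single unit reduces the token count for the same text. The vocabulary and greedy matching here are illustrative only, not K-Exaone’s actual tokenizer.

```python
# Toy tokenizer: a frequent word pair added to the vocabulary becomes one token,
# so the same document yields fewer tokens and processes faster.
base_vocab = {"artificial", "intelligence", "model", "the"}
merged_vocab = base_vocab | {"artificial intelligence"}  # frequent pair as one unit

def tokenize(text, vocab):
    words, tokens, i = text.split(), [], 0
    while i < len(words):
        # Greedily try the two-word unit first, then fall back to single words.
        pair = " ".join(words[i:i + 2])
        if i + 1 < len(words) and pair in vocab:
            tokens.append(pair)
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

text = "the artificial intelligence model"
print(len(tokenize(text, base_vocab)), len(tokenize(text, merged_vocab)))  # 4 3
```

Fewer tokens per document means fewer forward passes over the same text, which is the mechanism behind the reported 1.3x improvement.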

In addition, the adoption of multi-token prediction boosted inference speed by 150%, improving overall efficiency. “K-Exaone is designed to maximize efficiency while reducing costs, allowing it to run on A100-class GPUs rather than requiring the most expensive infrastructure,” an LG AI Research official said. “This makes frontier-level AI more accessible to companies with limited computing resources and helps broaden Korea’s AI ecosystem.”
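The mechanism behind multi-token prediction can be sketched as a decode loop that emits several tokens per forward pass instead of one. The numbers below are illustrative; the 150% figure LG reports would depend on acceptance rates and implementation details not described in the article.

```python
# Toy decode loop: single-token vs multi-token prediction.
# Each forward pass through the model is the expensive step, so emitting
# k tokens per pass cuts the number of passes by a factor of k.

def decode(seq_len, tokens_per_step):
    steps = 0
    generated = 0
    while generated < seq_len:
        generated += tokens_per_step  # one forward pass emits k tokens
        steps += 1
    return steps

baseline = decode(120, 1)  # one token per pass: 120 passes
mtp = decode(120, 2)       # two tokens per pass: 60 passes
print(baseline / mtp)      # 2.0
```

In practice, multi-token prediction is usually paired with a verification step so quality is preserved, which is why realized speedups are smaller than the raw per-step token count would suggest.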

Rather than memorization, K-Exaone’s training focuses on improving its reasoning and problem-solving capabilities. LG explained that during pre-training, the model was exposed to thinking-trajectory data that shows how problems are solved, not just the final answers. Safety and compliance were also priorities. LG said it carried out data compliance reviews across its training datasets, removing material with potential copyright issues.

The company runs an internal AI ethics committee that carries out risk assessments across four categories: social safety, Korea-specific considerations, future risks, and universal human values. Under KGC-Safety, a benchmark developed by LG AI Research for safety in the Korean context, K-Exaone scored an average of 97.38 across the four categories, outperforming OpenAI’s GPT-OSS-120B and Alibaba’s Qwen-3-235B.

