Meituan Quietly Launches LongCat-2.0-Preview AI Model With Trillion Parameters, No Official Announcement

Gate News, April 28 — Meituan has quietly rolled out a new AI model, LongCat-2.0-Preview, on its LongCat API platform, with an update log dated April 20, but has issued no official announcement or technical report. Unlike previous LongCat-series models (Flash-Chat, Flash-Thinking, Flash-Lite, Flash-Omni, Next), which shipped with official blog posts, technical reports, and open-source releases on Hugging Face and GitHub, the 2.0-Preview version offers no open-source links and is available exclusively via API.

The model’s update log highlights three core capabilities: agent development with native support for tool calling, multi-step reasoning, and long-context tasks; proficiency in code generation, workflow automation, and complex instruction execution; and deep integration with Claude Code, OpenClaw, OpenCode, and Kilo Code. According to reports from multiple media outlets citing sources on April 24, the model has more than one trillion total parameters, uses a Mixture of Experts (MoE) architecture, and supports a 1 million-token context window, comparable in scale to DeepSeek V4, which was released the same day.
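As a rough illustration of what API-only access with tool calling might look like, the sketch below assumes a hypothetical OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and tool schema are illustrative assumptions, not documented details of the LongCat platform.

```python
# Hypothetical sketch: calling an OpenAI-compatible chat-completions endpoint
# with a single tool definition. The base URL, model name, and tool schema
# below are illustrative assumptions, not documented LongCat API details.
import os
import requests

API_BASE = "https://api.longcat.example/v1"   # placeholder URL (assumption)
API_KEY = os.environ.get("LONGCAT_API_KEY", "")

payload = {
    "model": "LongCat-2.0-Preview",           # model name as reported in the update log
    "messages": [
        {"role": "user", "content": "Summarize the open pull requests in repo X."}
    ],
    # One example tool, following the common OpenAI-style function schema.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "list_pull_requests",
                "description": "List open pull requests for a repository.",
                "parameters": {
                    "type": "object",
                    "properties": {"repo": {"type": "string"}},
                    "required": ["repo"],
                },
            },
        }
    ],
}

resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# If the model chooses to invoke the tool, the reply carries a structured
# tool_calls entry instead of plain text (mirroring the OpenAI-style format).
print(resp.json()["choices"][0]["message"])
```

Coding agents such as Claude Code or OpenCode would typically sit on top of exactly this kind of endpoint, which is presumably what the "deep integration" claim in the update log refers to.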

Insiders revealed that LongCat-2.0-Preview was trained entirely on domestic computing clusters using between 50,000 and 60,000 Chinese-made accelerator cards, making it the largest-scale training run completed on domestic AI infrastructure to date. During the testing phase, the platform offers a free daily allowance of 10 million tokens.
