Qwen AI, developed by Alibaba Cloud, is a powerful suite of artificial intelligence models. It was first launched in beta in April 2023 under the name Tongyi Qianwen, with early versions reportedly based on Meta's LLaMA model. The platform can perform a wide range of tasks, including natural language processing (NLP), coding, and solving complex mathematical problems, and it can work across multiple modalities, such as text, images, audio, and video.
Qwen AI is available in various sizes, with models ranging from 0.5 billion to 72 billion parameters, allowing it to scale to tasks of different complexity. Since its launch, over 90,000 enterprise users have deployed Qwen AI through Model Studio, and its open-source models have seen significant engagement, with more than 7 million downloads. Additionally, the January 2025 release of Qwen2.5-Max, the most powerful model in the lineup, has enhanced Qwen's capabilities, reportedly outperforming models like GPT-4o, DeepSeek-V3, and Llama-3.1-405B on several benchmarks.
In this blog, we’ll take a closer look at the latest updates, models, and key statistics about Qwen AI, and explore why it’s creating such a significant impact in the world of artificial intelligence.
Qwen AI Statistics 2025 (Editor’s Picks)
- Over 2.2 million corporate users access Qwen’s AI services through DingTalk, Alibaba’s platform for work collaboration.
- The Qwen AI model is utilized by over 90,000 enterprises as of 2025.
- Qwen 2.5 supports more than 29 languages.
- The open-source Qwen series features models ranging from 0.5 billion to 110 billion parameters.
- Qwen2.5 Math, with 72 billion parameters, solves math problems with 84% accuracy.
- The latest Qwen 2.5 models are pre-trained on a new large-scale dataset that includes up to 18 trillion tokens.
- Qwen2.5-Coder (an open-source coding model) has been trained on 5.5 trillion tokens.
- Qwen2.5-Coder supports 92 programming languages.
Brief Overview of Qwen AI
| Attribute | Details |
| --- | --- |
| Developed In | July 2024 |
| Beta Release | April 2023 |
| Stable Release | 2.5-Max / January 2025 |
| Headquarters | Hangzhou, China |
| Founder | Alibaba Group |
| CEO | Eddie Wu (Alibaba Group) |
| Type | Chatbot |
| Country | China |
| Industry | Artificial Intelligence |
The Evolution of Qwen AI
| Year | Qwen AI Version Launches |
| --- | --- |
| April 2023 | Beta version of Qwen launched as Tongyi Qianwen, based on Meta AI's LLaMA with modifications. |
| August 2023 | Open-sourced the Qwen 7B model. |
| September 2023 | Public release of Qwen after Chinese government approval. |
| December 2023 | Released 72B and 1.8B models as open source. |
| June 2024 | Launch of Qwen 2, employing a mixture-of-experts architecture. |
| September 2024 | Open-sourced some Qwen 2 models while keeping advanced models proprietary. |
| November 2024 | Released QwQ-32B-Preview under the Apache 2.0 license, focused on reasoning with a 32,000-token context length. |
| January 2025 | Launched Qwen2.5-Max, the most powerful model to date. |
The Number of Qwen AI Users
Although the exact number of users for Qwen AI remains unspecified, the platform has gained significant traction among businesses worldwide. Over 90,000 enterprises have adopted Qwen AI through Model Studio, an AI platform developed by Alibaba Cloud.
In addition, more than 2.2 million corporate users engage with Qwen AI services via DingTalk, Alibaba’s platform for collaboration and application development.
Source: Alibaba Cloud
Qwen AI Total Number of Downloads
The Qwen AI models have been downloaded over 7 million times on popular platforms like Hugging Face and GitHub.
Source: Alibaba Cloud
Qwen AI Users by Country
27.52% of Qwen AI’s traffic originates from Iraq, making it the leading country driving engagement with the platform. Brazil comes in second with 19.08% of traffic, followed by Turkey at 12.10%.
In contrast, the United States contributes only 6.15% of the traffic, a relatively small figure compared to the top three countries. Russia, with a 10.60% share, also ranks lower on the list.
India does not make the top five. One likely reason for this lower adoption is limited marketing, especially compared to the heavy promotion DeepSeek has received. Additionally, India has already seen widespread adoption of other popular AI models, such as ChatGPT and Copilot.
The top 5 countries with the highest number of Qwen AI users
| Country | Qwen AI Traffic Share |
| --- | --- |
| Iraq | 27.52% |
| Brazil | 19.08% |
| Turkey | 12.10% |
| Russia | 10.60% |
| United States | 6.15% |
Source: Similarweb
Qwen AI Total Monthly Visits
As of February 2025, the Qwen AI website received 25,245 monthly visits, a 1,114.64% increase over the previous month. This surge suggests that more people are starting to use Qwen AI, likely driven by new features and stronger marketing efforts.
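As a quick sanity check on that growth figure, the traffic implied for the previous month can be back-calculated from the reported 25,245 visits and 1,114.64% growth:

```python
# Back-calculate the implied previous month's visits from the reported
# month-over-month growth (figures from Similarweb as quoted above).
current_visits = 25_245
growth_pct = 1_114.64

previous_visits = current_visits / (1 + growth_pct / 100)
print(round(previous_visits))  # roughly 2,078 visits the month before
```

In other words, the site went from about two thousand visits to over twenty-five thousand in a single month.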
Source: Similarweb
65.95% of Qwen AI users access the platform through mobile
Roughly two-thirds of Qwen AI users (65.95%) access the platform via mobile web, while 34.05% use desktop devices. This indicates that most users prefer using Qwen AI on their mobile devices, likely because of the convenience and accessibility mobile platforms offer, especially for users who need AI services on the go. This trend highlights the need for platforms to optimize the mobile experience for the majority of users.
| Device | % of Qwen AI Users |
| --- | --- |
| Desktop | 34.05% |
| Mobile Web | 65.95% |
Source: Similarweb
Different Qwen AI Models and Their Capabilities (What Qwen AI Can Do)
- Qwen 2.5 Models: The Qwen 2.5 models are trained with high-quality data from various domains and languages, supporting up to 128K tokens. These models are great at coding, math, and understanding structured information.
- Qwen-VL: This is a large vision language model that can work with images, text, and even bounding boxes. It’s good at reading text in both Chinese and English, comparing images, and then creating stories, solving math problems, or answering questions based on what it sees.
- Qwen-Audio: It listens to both text and different types of audio, like human speech, music, or natural sounds. It then gives text-based answers. It works well across many audio-related tasks without needing specific fine-tuning.
- Qwen2.5-Coder: It’s an open-source coding model that supports 92 programming languages. It can handle up to 128K tokens and has shown impressive results in code generation, multi-programming, code completion, and debugging, making it a powerful tool for developers.
- Qwen2.5-Math: It’s a specialized model for mathematical tasks. Trained with synthesized data, it supports both English and Chinese queries and excels in reasoning tasks. It outperforms many other models in solving complex mathematical problems, making it a top choice for math-related challenges.
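The open-weight models above can be run locally, while hosted variants are served through Alibaba Cloud's Model Studio behind an OpenAI-compatible API. As a rough sketch (the base URL and the `qwen-max` model id below are assumptions to verify against Alibaba Cloud's documentation), a chat request body might be built like this:

```python
import json

# Sketch of a chat-completion request to a hosted Qwen model through an
# OpenAI-compatible endpoint. The base URL and model id are assumptions;
# confirm both against your Alibaba Cloud Model Studio account before use.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions"

payload = {
    "model": "qwen-max",  # hypothetical id for the hosted Qwen2.5-Max tier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Qwen2.5-Coder is good at."},
    ],
    "max_tokens": 256,
}

# Serialize the body exactly as it would be POSTed (with an API key header).
body = json.dumps(payload)
print(len(json.loads(body)["messages"]))  # two chat messages in the request
```

Because the interface follows the OpenAI chat-completions shape, existing OpenAI-compatible client libraries can typically be pointed at such an endpoint by changing only the base URL and model name.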
Source: Qwen-ai
Performance Analysis of Qwen2.5-Max vs. Other AI Models
The bar chart referenced here compares the performance of Qwen2.5-Max against other state-of-the-art AI models, including DeepSeek-V3, Llama-3.1-405B-Inst, GPT-4o-0806, and Claude-3.5-Sonnet-1022. The benchmarks assess each model's ability in areas such as knowledge retention, coding proficiency, general AI performance, and human preference alignment.
Key Benchmarks
- Arena-Hard: Assesses human preference alignment by approximating how well a model responds in a human-like manner.
- MMLU-Pro: Tests knowledge and reasoning using college-level problems.
- GPQA-Diamond: Tests accuracy on difficult, graduate-level question answering.
- LiveCodeBench: Measures the model’s coding capabilities.
- LiveBench: Evaluates overall AI performance across multiple tasks.
- Arena-Hard: Qwen2.5-Max achieves the highest score (89.4) in the Arena-Hard benchmark, outperforming DeepSeek-V3 (85.5) and Claude-3.5-Sonnet (85.2). This suggests that Qwen2.5-Max provides responses that align well with human preferences.
- MMLU-Pro: In the MMLU-Pro benchmark, which tests knowledge and logical reasoning, Qwen2.5-Max scores 76.1, closely competing with DeepSeek-V3 (75.9) and GPT-4o (78.0). This indicates that Qwen2.5-Max is highly capable of handling complex knowledge-based queries.
- GPQA-Diamond: Qwen2.5-Max scores 60.1 in the GPQA-Diamond benchmark, slightly outperforming DeepSeek-V3 (59.1) but falling behind GPT-4o (65.0). This suggests that while Qwen2.5-Max is strong at producing accurate answers, it still trails the best models on difficult question answering.
- LiveCodeBench: In the LiveCodeBench benchmark, which evaluates coding proficiency, Qwen2.5-Max scores 38.7, slightly ahead of DeepSeek-V3 (37.6) and just behind Claude-3.5-Sonnet (38.9). Overall, Qwen2.5-Max demonstrates competitive coding capabilities.
- LiveBench: Qwen2.5-Max leads the LiveBench benchmark with a score of 62.2, surpassing DeepSeek-V3 (60.5) and Claude-3.5-Sonnet (60.3). This highlights Qwen2.5-Max as a well-balanced AI model capable of performing strongly across various tasks, from reasoning to coding.
Source: Qwenlm
Benchmark Analysis and Comparison of Qwen2.5-Max and Leading Base Models
This analysis compares four leading base models—Qwen2.5-Max, Qwen2.5-72B, DeepSeek-V3, and LLaMA3.1-405B—across a series of critical benchmarks assessing general reasoning, coding proficiency, and mathematical problem-solving. By examining how each model handles these diverse tasks, the analysis aims to provide a deeper understanding of their strengths and weaknesses.
| Benchmark | Proficiency | Qwen2.5-Max | Qwen2.5-72B | DeepSeek-V3 | LLaMA3.1-405B |
| --- | --- | --- | --- | --- | --- |
| MMLU | General Knowledge & Language Understanding | 87.9 | 86.1 | 87.1 | 85.2 |
| MMLU-Pro | | 69.0 | 58.1 | 64.4 | 61.6 |
| BBH | | 89.3 | 86.3 | 87.5 | 85.9 |
| C-Eval | | 92.2 | 90.7 | 90.1 | 72.5 |
| CMMLU | | 91.9 | 89.9 | 88.8 | 73.7 |
| HumanEval | Coding and Problem Solving | 73.2 | 64.6 | 65.2 | 61.0 |
| MBPP | | 80.6 | 72.6 | 75.4 | 73.0 |
| CRUX-I | | 70.1 | 60.6 | 67.3 | 58.5 |
| CRUX-O | | 79.1 | 66.6 | 69.8 | 59.9 |
| GSM8K | Mathematical Problem Solving | 94.5 | 91.5 | 89.3 | 89.0 |
| MATH | | 68.5 | 62.1 | 61.6 | 53.8 |
1. General Knowledge and Language Understanding
Qwen2.5-Max leads in understanding complex language and general knowledge tasks, scoring 87.9 on MMLU and 92.2 on C-Eval. It outperforms DeepSeek V3 and LLaMA 3.1-405B, making it the most capable of handling diverse linguistic challenges.
2. Coding and Problem-Solving
Qwen2.5-Max performs best in coding tasks, scoring 73.2 on HumanEval and 80.6 on MBPP. It slightly edges out DeepSeek V3 and is far ahead of LLaMA 3.1-405B. This makes it ideal for generating and debugging code.
3. Mathematical Problem Solving
Qwen2.5-Max excels in math problem-solving, achieving 94.5 on GSM8K, significantly better than DeepSeek-V3 and LLaMA3.1-405B. On the more challenging MATH benchmark, it scores 68.5, still leading but with room to improve.
Qwen2.5-Max stands as the most powerful base model across all major benchmarks, demonstrating superior performance in general reasoning, coding tasks, and mathematical problem-solving. DeepSeek V3 provides a strong challenge but falls short in advanced reasoning and coding tasks.
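To make the category-level comparison concrete, the benchmark scores above can be averaged per proficiency area. The snippet below reproduces the table's numbers and computes a simple unweighted mean per category; note that these averages are this post's own summary, not an official benchmark metric:

```python
# Category-level averages for the base-model comparison table above.
# Scores are reproduced from the table; the unweighted means are this
# post's own summary, not an official benchmark metric.
scores = {
    "Qwen2.5-Max": {
        "general": [87.9, 69.0, 89.3, 92.2, 91.9],  # MMLU .. CMMLU
        "coding":  [73.2, 80.6, 70.1, 79.1],        # HumanEval .. CRUX-O
        "math":    [94.5, 68.5],                    # GSM8K, MATH
    },
    "Qwen2.5-72B": {
        "general": [86.1, 58.1, 86.3, 90.7, 89.9],
        "coding":  [64.6, 72.6, 60.6, 66.6],
        "math":    [91.5, 62.1],
    },
    "DeepSeek-V3": {
        "general": [87.1, 64.4, 87.5, 90.1, 88.8],
        "coding":  [65.2, 75.4, 67.3, 69.8],
        "math":    [89.3, 61.6],
    },
    "LLaMA3.1-405B": {
        "general": [85.2, 61.6, 85.9, 72.5, 73.7],
        "coding":  [61.0, 73.0, 58.5, 59.9],
        "math":    [89.0, 53.8],
    },
}

summary = {
    model: {cat: round(sum(vals) / len(vals), 1) for cat, vals in cats.items()}
    for model, cats in scores.items()
}

for model, avgs in summary.items():
    print(model, avgs)
```

On these simple averages, Qwen2.5-Max leads every category, which matches the section's conclusion.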
Source: Qwen-ai
Frequently Asked Questions
1) What is Qwen AI?
Qwen AI is a series of advanced models created by Alibaba Cloud. These models can handle tasks like language processing, coding, and math. They are designed to work with text, audio, video, and images, making them versatile for many uses.
2) What is Qwen 2.5 Max?
Qwen2.5-Max is the most advanced model in the Qwen 2.5 series. Unlike many other Qwen models, it is proprietary rather than open-source, and it can reportedly handle up to 128,000 tokens of context, making it well suited to processing large amounts of data. The broader Qwen 2.5 series includes models ranging from 0.5 billion to 72 billion parameters and supports over 29 languages, making it versatile for tasks including natural language processing, coding, and solving mathematical problems.
3) Are Qwen models open-source?
Many Qwen models are open-source, allowing developers to freely use and customize them for various tasks, although some advanced models, such as Qwen2.5-Max, remain proprietary. This open access encourages innovation and makes Qwen a useful option for anyone looking to incorporate AI into their projects.
4) Is Qwen AI free?
Qwen's chat interface and its open-source models are free to use, making them accessible to developers and organizations looking to leverage AI for different tasks. Hosted API access through Alibaba Cloud's Model Studio, however, is typically billed by usage.
5) Who owns Qwen?
Qwen AI is owned by Alibaba Cloud, a division of Alibaba Group. Alibaba is a leading global technology company based in China, known for its advancements in artificial intelligence and cloud computing.
6) Is Qwen better than DeepSeek?
Benchmark results suggest Qwen2.5-Max is superior in certain aspects, especially in natural language processing and handling multimodal data. It offers improved performance compared to DeepSeek-V3, particularly on tasks involving large datasets and diverse applications.