
Commit 8907010

NSDie, Potabk, herizhen, and yiz-liu authored
[Doc] Add tutorial for Qwen3-Coder-30B-A3B (#4391)
### What this PR does / why we need it?

Add tutorial for Qwen3-Coder-30B-A3B

- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: nsdie <yeyifan@huawei.com>
Signed-off-by: herizhen <you@example.com>
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Signed-off-by: weijinqian0 <1184188277@qq.com>

Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: herizhen <59841270+herizhen@users.noreply.github.com>
Co-authored-by: herizhen <you@example.com>
Co-authored-by: Yizhou <136800916+yiz-liu@users.noreply.github.com>
Co-authored-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: XiaoxinWang <963372609@qq.com>
Co-authored-by: wangxiaoxin-sherie <wangxiaoxin7@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
1 parent cb33b09 commit 8907010

File tree

3 files changed, +107 −1 lines changed

docs/source/tutorials/Qwen3-Coder-30B-A3B.md
Lines changed: 105 additions & 0 deletions
@@ -0,0 +1,105 @@
# Qwen3-Coder-30B-A3B

## Introduction

The newly released Qwen3-Coder-30B-A3B employs a sparse Mixture-of-Experts (MoE) architecture for efficient training and inference. It offers strong agentic coding capabilities, extended context support of up to 1M tokens, and versatile function calling.

This document walks through the main verification steps for the model: supported features, feature configuration, environment preparation, single-node deployment, and accuracy and performance evaluation.

## Supported Features

Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's supported feature matrix.

Refer to [feature guide](../user_guide/feature_guide/index.md) for each feature's configuration.
## Environment Preparation

### Model Weight

`Qwen3-Coder-30B-A3B-Instruct` (BF16 version) requires 1 Atlas 800 A3 node (with 16x 64G NPUs) or 1 Atlas 800 A2 node (with 8x 64G/32G NPUs). [Download model weight](https://modelers.cn/models/Modelers_Park/Qwen3-Coder-30B-A3B-Instruct)

It is recommended to download the model weight to a directory shared across nodes, such as `/root/.cache/`.
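
If you prefer to pre-download the weight rather than letting vLLM fetch it at startup, here is a minimal sketch using the ModelScope CLI; the `Qwen/Qwen3-Coder-30B-A3B-Instruct` repository id is taken from the serve command later in this tutorial, and the target directory is illustrative:

```shell
pip install modelscope
# Download the BF16 weight into the shared cache directory (path is an example)
modelscope download --model Qwen/Qwen3-Coder-30B-A3B-Instruct --local_dir /root/.cache/Qwen3-Coder-30B-A3B-Instruct
```
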
### Installation

`Qwen3-Coder` was first supported in `vllm-ascend:v0.10.0rc1`; please use that version or a later one to run this model.

You can use our official Docker image to run `Qwen3-Coder-30B-A3B-Instruct` directly.
```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:v0.11.0rc1
docker run --rm \
  --name vllm-ascend \
  --shm-size=1g \
  --device /dev/davinci0 \
  --device /dev/davinci1 \
  --device /dev/davinci2 \
  --device /dev/davinci3 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -v /root/.cache:/root/.cache \
  -p 8000:8000 \
  -it $IMAGE bash
```
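
Once inside the container, you can optionally confirm that the mounted NPUs are visible before proceeding; `npu-smi` is mapped into the container by the `-v /usr/local/bin/npu-smi` mount above:

```shell
# List the NPUs visible inside the container
npu-smi info
```
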
Alternatively, if you prefer not to use the Docker image above, you can build everything from source (see the sketch after this list):

- Install `vllm-ascend` from source; refer to [installation](../installation.md).
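
For reference, a minimal sketch of the source build; the linked installation guide is authoritative, and a CANN toolkit plus a matching vLLM are assumed to be installed already:

```shell
# Assumes CANN and a compatible vLLM are already installed (see the installation guide)
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```
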
## Deployment

### Single-node Deployment

Run the following script to start online inference.

For an Atlas A2 with 64 GB of NPU card memory, `--tensor-parallel-size` should be at least 2; for 32 GB of memory, it should be at least 4.
```shell
#!/bin/sh
export VLLM_USE_MODELSCOPE=true

vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --served-model-name qwen3-coder --tensor-parallel-size 4 --enable-expert-parallel
```
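
Loading the weights can take a while. Before sending traffic, you can poll the OpenAI-compatible model list endpoint until the served model appears; a minimal sketch using the `/v1/models` endpoint and the `qwen3-coder` name set above:

```shell
# Wait until the server reports the served model
until curl -sf http://localhost:8000/v1/models | grep -q "qwen3-coder"; do
  sleep 5
done
echo "Server is ready."
```
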
## Functional Verification

Once your server is started, you can query the model with input prompts:
```shell
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "qwen3-coder",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.6,
  "top_p": 0.95,
  "top_k": 20,
  "max_tokens": 4096
}'
```
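
Because the model targets coding workloads, it is also worth exercising it with a code prompt; the sketch below additionally enables token streaming via the standard OpenAI-compatible `stream` field:

```shell
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "qwen3-coder",
  "messages": [
    {"role": "user", "content": "Write a Python function that reverses a string."}
  ],
  "stream": true,
  "max_tokens": 256
}'
```
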
## Accuracy Evaluation

### Using AISBench

1. Refer to [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) for details.

2. After execution, you can collect the results. The following result of `Qwen3-Coder-30B-A3B-Instruct` on `vllm-ascend:0.11.0rc0` is for reference only.
| dataset | version | metric | mode | vllm-api-general-chat |
| ----- | ----- | ----- | ----- | ----- |
| openai_humaneval | f4a973 | humaneval_pass@1 | gen | 94.51 |
## Performance

### Using AISBench

Refer to [Using AISBench for performance evaluation](../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.

docs/source/tutorials/index.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ multi_npu_qwen3_moe
 multi_npu_quantization
 single_node_300i
 DeepSeek-V3.2-Exp.md
+Qwen3-Coder-30B-A3B
 multi_node
 multi_node_kimi
 multi_node_qwen3vl

docs/source/user_guide/support_matrix/supported_models.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | DeepSeek Distill (Qwen/Llama) || |||||||||||||||||||
 | Qwen3 || |||||||||||||||||||
 | Qwen3-based || |||||||||||||||||||
-| Qwen3-Coder || |||||||||||||||||||
+| Qwen3-Coder || | A2/A3 |||||||||||||||||[Qwen3-Coder-30B-A3B tutorial](../../tutorials/Qwen3-Coder-30B-A3B.md)|
 | Qwen3-Moe || |||||||||||||||||||
 | Qwen3-Next || |||||||||||||||||||
 | Qwen2.5 || |||||||||||||||||||
