[AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5#963
Conversation
….5 FP8 Signed-off-by: seungrokj <seungrok.jung@amd.com>
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook. If they are not, please create a PR there first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you
```sh
    --model $MODEL \
    --server-port $PORT \
    -tp $TP \
    --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP \
    --trust-remote-code \
    > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
```
🔴 All three new non-MTP ATOM benchmark scripts (glm5_fp8_mi355x_atom.sh, kimik2.5_fp4_mi355x_atom.sh, minimaxm2.5_fp8_mi355x_atom.sh) are missing the BLOCK_SIZE=${BLOCK_SIZE:-16} variable and --block-size $BLOCK_SIZE flag present in all other non-MTP ATOM scripts. Without this, the ATOM server uses its framework default block size, which may differ from the tuned value of 16 and could cause performance degradation or OOM at high concurrency.
Extended reasoning...
What the bug is and how it manifests
All three new non-MTP ATOM benchmark scripts introduced in this PR are missing the --block-size parameter when launching the ATOM inference server. The existing non-MTP ATOM scripts (dsr1_fp8_mi355x_atom.sh, dsr1_fp4_mi355x_atom.sh, gptoss_fp4_mi355x_atom.sh) all set BLOCK_SIZE=${BLOCK_SIZE:-16} and pass --block-size $BLOCK_SIZE to the server. The three new scripts omit this entirely, so the ATOM framework will use whatever block size it defaults to internally.
The specific code path that triggers it
In all three new scripts (e.g. glm5_fp8_mi355x_atom.sh lines 46–55), the server launch invocation is:
```sh
python3 -m atom.entrypoints.openai_server --model $MODEL --server-port $PORT -tp $TP --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP --trust-remote-code > $SERVER_LOG 2>&1 &
```
Compare with `dsr1_fp8_mi355x_atom.sh` (lines 45 and 51):
```sh
BLOCK_SIZE=${BLOCK_SIZE:-16}
...
--block-size $BLOCK_SIZE > $SERVER_LOG 2>&1 &
```
Why existing code doesn't prevent it
There is no default --block-size guard in the shared benchmark_lib.sh — each individual script is responsible for passing this flag. The three new scripts simply don't include it.
Addressing the refutation
One verifier noted that the MTP ATOM scripts (dsr1_fp4_mi355x_atom_mtp.sh and dsr1_fp8_mi355x_atom_mtp.sh) added in PR #673 also omit --block-size. However, those scripts use --method mtp for speculative decoding, which is architecturally distinct from standard inference and may have different (or conflicting) block-size requirements. The new scripts in this PR are standard (non-MTP) inference scripts and should logically follow the non-MTP pattern, not the MTP pattern. The omission in MTP scripts appears intentional for MTP-specific reasons, not a general deprecation of the flag.
Impact
ATOM's block size directly controls KV cache page allocation. If the framework default differs from 16, it can cause increased memory fragmentation, reduced KV cache utilization at high concurrency (conc-end: 128 or 256 in these configs), or OOM errors during sweep benchmarks. Given that the concurrency ranges here (up to 256 for minimaxm2.5) exceed those in the original ATOM scripts, the risk is higher.
Step-by-step proof
- User runs the `glm5-fp8-mi355x-atom` benchmark with ISL=1024, OSL=8192, TP=8, CONC=128.
- Harness calls `glm5_fp8_mi355x_atom.sh`; no `BLOCK_SIZE` variable is set.
- Server launches without `--block-size`; ATOM uses its internal default (e.g. 32 or 64).
- At CONC=128 with OSL=8192, KV cache demand is high; a larger block size leads to more wasted space per page and potentially OOM, while the tuned value of 16 was established specifically for MI355X ATOM.
- For comparison, `dsr1_fp8_mi355x_atom.sh` on the same hardware would launch with `--block-size 16` and succeed.
How to fix
Add the following two lines to each new script, matching the pattern in the existing non-MTP ATOM scripts:
```sh
BLOCK_SIZE=${BLOCK_SIZE:-16}
```
And add `--block-size $BLOCK_SIZE` to the `python3 -m atom.entrypoints.openai_server` invocation in all three new scripts.
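Concretely, a minimal sketch of the patched launch section could look like the following. The `MODEL`/`PORT`/`TP` defaults here are illustrative placeholders (not the real benchmark config), and the command is echoed rather than executed so the sketch runs without ATOM installed:

```shell
#!/usr/bin/env bash
# Sketch of the fix: launch command with the tuned block size added.
# MODEL/PORT/TP defaults below are placeholders for illustration only.
MODEL=${MODEL:-some/model}
PORT=${PORT:-8000}
TP=${TP:-8}

# Tuned KV-cache page size for MI355X ATOM, overridable from the environment.
BLOCK_SIZE=${BLOCK_SIZE:-16}

# Build the server command as an array; --block-size is the added flag.
CMD=(python3 -m atom.entrypoints.openai_server
     --model "$MODEL"
     --server-port "$PORT"
     -tp "$TP"
     --kv_cache_dtype fp8
     --trust-remote-code
     --block-size "$BLOCK_SIZE")

# Print instead of exec'ing, so the sketch is runnable anywhere.
echo "${CMD[@]}"
```

In the real scripts the array would be executed in the background with `"${CMD[@]}" > $SERVER_LOG 2>&1 &` and `SERVER_PID=$!` captured, matching the existing non-MTP pattern.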
```yaml
description:
  - "New model support on ATOM framework"
  - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/963
```
🟡 The perf-changelog.yaml entry added in this PR references the wrong PR number: pr-link points to pull/954, but this is PR #963. PR #954 is a separate merged PR ('kimik2.5 int4 changes from mi355x'). The link should be updated to https://github.com/SemiAnalysisAI/InferenceX/pull/963 to correctly attribute the three ATOM configs (kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, minimaxm2.5-fp8-mi355x-atom) to this PR.
Extended reasoning...
Bug description: The last entry added to perf-changelog.yaml in this PR uses an incorrect pr-link value. The link points to https://github.com/SemiAnalysisAI/InferenceX/pull/954, which is a different, already-merged PR ('mi325x: port kimik2.5 int4 changes from mi355x PR #950', commit cec542a). The current PR being submitted is #963.
Code path: In perf-changelog.yaml at the final entry (around line 1119), the newly added block reads:
```yaml
- config-keys:
    - kimik2.5-fp4-mi355x-atom
    - glm5-fp8-mi355x-atom
    - minimaxm2.5-fp8-mi355x-atom
  description:
    - "New model support on ATOM framework"
    - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954
```
Why existing code doesn't prevent it: The changelog is a static YAML file with no automated validation of PR numbers against the current PR context. There is no CI check that cross-references the pr-link value with the actual PR number being submitted, making copy-paste errors like this easy to introduce.
Impact: While this is a documentation-only error with no functional impact, the changelog serves as an important audit trail for which PR introduced which configuration. With the wrong link, a future developer investigating kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, or minimaxm2.5-fp8-mi355x-atom would be led to PR #954, which is about an entirely different set of changes (MI325X INT4 configs), creating false traceability.
Step-by-step proof:
- This PR is "[AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5" (#963), as confirmed by the PR metadata (`<pr number="963">`).
- The git log shows commit `cec542a` with the message 'mi325x: port kimik2.5 int4 changes from mi355x PR #950'; this is PR #954 ('mi325: update kimi int4').
- The last `perf-changelog.yaml` entry uses `pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954`.
- The correct value should be `https://github.com/SemiAnalysisAI/InferenceX/pull/963`.
Fix: Change the pr-link in the final changelog entry from .../pull/954 to .../pull/963.
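In diff form, the one-line change to the final `perf-changelog.yaml` entry would be:

```diff
-  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954
+  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/963
```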
functionstackx left a comment
disable prefix caching for consistency plz
…com/SemiAnalysisAI/InferenceX into srok/atom_kimik2.5_minimaxm2.5_glm5
@functionstackx in atom, prefix caching is OFF by default https://github.com/ROCm/ATOM/blob/0079204e23922ddca3b28a438610d29a82ea20bb/atom/model_engine/arg_utils.py#L81-L84
e2e Test - glm5-fp8-mi355x-atom
@functionstackx @cquil11 can you please help merge this PR?
@seungrokj the validation runs u linked r failing
oops. it was my problem. just fixed. and will respin..
functionstackx left a comment
plz send links to passing validation runs and i can approve
.github/configs/amd-master.yaml
Outdated
```yaml
- isl: 1024
  osl: 8192
  search-space:
    - { tp: 8, conc-start: 4, conc-end: 128 }
```
also, as agreed upon by andy and us, we don't do 1k/8k anymore as it takes too long
we already removed 1k/8k
and will soon add agentic coding & multi-turn chat, which @cquil11 is working on with andy
@functionstackx just resolved.
Also, regarding conc-end: 128, do you prefer we now limit it to 64? I see some data points are still using 128/256, but I remember we are supposed to run only up to conc-end: 64 in the 1k1k/8k1k settings.
```yaml
- isl: 1024
  osl: 1024
  search-space:
    - { tp: 8, conc-start: 4, conc-end: 128 }
```
@functionstackx do you prefer to limit conc-end to 64? I see some existing data still runs up to 128 and 256.
Hi @functionstackx @cquil11
This is a PR to add ATOM framework support for the following models.
Docker image will be ready shortly.
cc. @ChuanLi1101 @andyluo7 @chunfangamd
Regards,
Seungrok