
[AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5#963

Open
seungrokj wants to merge 8 commits into main from srok/atom_kimik2.5_minimaxm2.5_glm5

Conversation


@seungrokj (Collaborator) commented Mar 27, 2026

Hi @functionstackx @cquil11

This PR adds ATOM framework support for the following models:

- kimik2.5-fp4-mi355x-atom
- glm5-fp8-mi355x-atom
- minimaxm2.5-fp8-mi355x-atom

Docker image will be ready shortly.

cc. @ChuanLi1101 @andyluo7 @chunfangamd

Regards,
Seungrok

….5 FP8

Signed-off-by: seungrokj <seungrok.jung@amd.com>
@github-actions (Contributor)

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook.

If they are not, please create a PR there first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work. Thank you!

2 similar comments

….5 FP8

Signed-off-by: seungrokj <seungrok.jung@amd.com>
Comment on lines +46 to +55
--model $MODEL \
--server-port $PORT \
-tp $TP \
--kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP \
--trust-remote-code \
> $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready

🔴 All three new non-MTP ATOM benchmark scripts (glm5_fp8_mi355x_atom.sh, kimik2.5_fp4_mi355x_atom.sh, minimaxm2.5_fp8_mi355x_atom.sh) are missing the BLOCK_SIZE=${BLOCK_SIZE:-16} variable and --block-size $BLOCK_SIZE flag present in all other non-MTP ATOM scripts. Without this, the ATOM server uses its framework default block size, which may differ from the tuned value of 16 and could cause performance degradation or OOM at high concurrency.

Extended reasoning...

What the bug is and how it manifests

All three new non-MTP ATOM benchmark scripts introduced in this PR are missing the --block-size parameter when launching the ATOM inference server. The existing non-MTP ATOM scripts (dsr1_fp8_mi355x_atom.sh, dsr1_fp4_mi355x_atom.sh, gptoss_fp4_mi355x_atom.sh) all set BLOCK_SIZE=${BLOCK_SIZE:-16} and pass --block-size $BLOCK_SIZE to the server. The three new scripts omit this entirely, so the ATOM framework will use whatever block size it defaults to internally.

The specific code path that triggers it

In all three new scripts (e.g. glm5_fp8_mi355x_atom.sh lines 46–55), the server launch invocation is:

python3 -m atom.entrypoints.openai_server \
    --model $MODEL \
    --server-port $PORT \
    -tp $TP \
    --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP \
    --trust-remote-code \
    > $SERVER_LOG 2>&1 &

Compare with dsr1_fp8_mi355x_atom.sh (line 45, 51):

BLOCK_SIZE=${BLOCK_SIZE:-16}
...
    --block-size $BLOCK_SIZE > $SERVER_LOG 2>&1 &

Why existing code doesn't prevent it

There is no default --block-size guard in the shared benchmark_lib.sh — each individual script is responsible for passing this flag. The three new scripts simply don't include it.

Addressing the refutation

One verifier noted that the MTP ATOM scripts (dsr1_fp4_mi355x_atom_mtp.sh and dsr1_fp8_mi355x_atom_mtp.sh) added in PR #673 also omit --block-size. However, those scripts use --method mtp for speculative decoding, which is architecturally distinct from standard inference and may have different (or conflicting) block-size requirements. The new scripts in this PR are standard (non-MTP) inference scripts and should logically follow the non-MTP pattern, not the MTP pattern. The omission in MTP scripts appears intentional for MTP-specific reasons, not a general deprecation of the flag.

Impact

ATOM's block size directly controls KV cache page allocation. If the framework default differs from 16, it can cause increased memory fragmentation, reduced KV cache utilization at high concurrency (conc-end: 128 or 256 in these configs), or OOM errors during sweep benchmarks. Given that the concurrency ranges here (up to 256 for minimaxm2.5) exceed those in the original ATOM scripts, the risk is higher.

Step-by-step proof

  1. User runs glm5-fp8-mi355x-atom benchmark with ISL=1024, OSL=8192, TP=8, CONC=128.
  2. Harness calls glm5_fp8_mi355x_atom.sh — no BLOCK_SIZE variable is set.
  3. Server launches without --block-size; ATOM uses its internal default (e.g. 32 or 64).
  4. At CONC=128 with OSL=8192, KV cache demand is high; a larger block size leads to more wasted space per page and potentially OOM, while the tuned value of 16 was established specifically for MI355X ATOM.
  5. For comparison, dsr1_fp8_mi355x_atom.sh with the same hardware would launch with --block-size 16 and succeed.
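To make step 4 concrete, here is a quick back-of-the-envelope check of last-page waste per sequence (illustrative numbers only; 64 is a hypothetical framework default, not a confirmed ATOM value):

```shell
# Wasted KV-cache slots in the final page of one sequence:
# a sequence of N tokens occupies ceil(N / block_size) pages, and the
# last page leaves block_size - (N mod block_size) slots unused
# whenever the remainder is nonzero.
waste() {
  local tokens=$1 bs=$2
  local rem=$(( tokens % bs ))
  echo $(( rem == 0 ? 0 : bs - rem ))
}

waste 9000 16   # tuned value: prints 8
waste 9000 64   # hypothetical larger default: prints 24
```

At conc-end: 128 the per-sequence difference is multiplied across every active sequence, which is why a larger default block size erodes KV-cache headroom at high concurrency.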

How to fix

Add the following default near the top of each new script, matching the pattern in the existing non-MTP ATOM scripts:

BLOCK_SIZE=${BLOCK_SIZE:-16}

And add --block-size $BLOCK_SIZE to the python3 -m atom.entrypoints.openai_server invocation in all three new scripts.
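Putting the two pieces together, a sketch of the corrected launch (flag names and helper variables reproduced from the snippets above; the server line is shown commented out since it requires an MI355X ATOM environment):

```shell
# Tuned KV-cache page size for MI355X ATOM; callers may still override it
# via the environment thanks to the ${VAR:-default} expansion.
BLOCK_SIZE=${BLOCK_SIZE:-16}

# python3 -m atom.entrypoints.openai_server \
#     --model $MODEL \
#     --server-port $PORT \
#     -tp $TP \
#     --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP \
#     --trust-remote-code \
#     --block-size $BLOCK_SIZE > $SERVER_LOG 2>&1 &

echo "launching with block size $BLOCK_SIZE"
```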

description:
- "New model support on ATOM framework"
- "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/963

🟡 The perf-changelog.yaml entry added in this PR references the wrong PR number: pr-link points to pull/954, but this is PR #963. PR #954 is a separate merged PR ('kimik2.5 int4 changes from mi355x'). The link should be updated to https://github.com/SemiAnalysisAI/InferenceX/pull/963 to correctly attribute the three ATOM configs (kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, minimaxm2.5-fp8-mi355x-atom) to this PR.

Extended reasoning...

Bug description: The last entry added to perf-changelog.yaml in this PR uses an incorrect pr-link value. The link points to https://github.com/SemiAnalysisAI/InferenceX/pull/954, which is a different, already-merged PR ('mi325x: port kimik2.5 int4 changes from mi355x PR #950', commit cec542a). The current PR being submitted is #963.

Code path: In perf-changelog.yaml at the final entry (around line 1119), the newly added block reads:

- config-keys:
    - kimik2.5-fp4-mi355x-atom
    - glm5-fp8-mi355x-atom
    - minimaxm2.5-fp8-mi355x-atom
  description:
    - "New model support on ATOM framework"
    - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954

Why existing code doesn't prevent it: The changelog is a static YAML file with no automated validation of PR numbers against the current PR context. There is no CI check that cross-references the pr-link value with the actual PR number being submitted, making copy-paste errors like this easy to introduce.

Impact: While this is a documentation-only error with no functional impact, the changelog serves as an important audit trail for which PR introduced which configuration. With the wrong link, a future developer investigating kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, or minimaxm2.5-fp8-mi355x-atom would be led to PR #954, which is about an entirely different set of changes (MI325X INT4 configs), creating false traceability.

Step-by-step proof:

  1. This PR is #963 ([AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5), as confirmed by the PR metadata.
  2. The git log shows commit cec542a with message 'mi325x: port kimik2.5 int4 changes from mi355x PR #950'; this is PR #954 (mi325: update kimi int4).
  3. The last perf-changelog.yaml entry uses pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954.
  4. The correct value should be https://github.com/SemiAnalysisAI/InferenceX/pull/963.

Fix: Change the pr-link in the final changelog entry from .../pull/954 to .../pull/963.
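For reference, the corrected final changelog entry (config-keys and description exactly as quoted above; only the pr-link changes):

```yaml
- config-keys:
    - kimik2.5-fp4-mi355x-atom
    - glm5-fp8-mi355x-atom
    - minimaxm2.5-fp8-mi355x-atom
  description:
    - "New model support on ATOM framework"
    - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/963
```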

@functionstackx (Contributor) left a comment


Please disable prefix caching for consistency.

@SemiAnalysisAI deleted a comment from github-actions bot Mar 30, 2026
@seungrokj (Collaborator, Author)

Please disable prefix caching for consistency.

@functionstackx in ATOM, prefix caching is OFF by default: https://github.com/ROCm/ATOM/blob/0079204e23922ddca3b28a438610d29a82ea20bb/atom/model_engine/arg_utils.py#L81-L84

@SemiAnalysisAI deleted a comment from github-actions bot Mar 30, 2026

@functionstackx requested review from Oseltamivir, cquil11 and functionstackx, and removed the request for functionstackx, March 30, 2026 06:53
@seungrokj (Collaborator, Author)

@functionstackx @cquil11 can you please help merge this PR?

@functionstackx (Contributor)

@seungrokj the validation runs you linked are failing

….5 FP8

Signed-off-by: seungrokj <seungrok.jung@amd.com>
@seungrokj (Collaborator, Author)

@seungrokj the validation runs you linked are failing

Oops, that was my problem. Just fixed it; will respin.

@chunfangamd (Collaborator) left a comment


LGTM

@chunfangamd enabled auto-merge (squash) March 31, 2026 05:39
@chunfangamd disabled auto-merge March 31, 2026 05:46
@functionstackx (Contributor) left a comment


Please send links to passing validation runs and I can approve.

Comment on lines +255 to +258
- isl: 1024
osl: 8192
search-space:
- { tp: 8, conc-start: 4, conc-end: 128 }
@functionstackx (Contributor)

Also, as agreed upon by Andy and us, we don't do 1k/8k anymore as it takes too long; we already removed 1k/8k.

We will soon add agentic coding & multi-turn chat, which @cquil11 is working on with Andy.

@seungrokj (Collaborator, Author)


@functionstackx just resolved.
Also, regarding the conc-end: 128 setting, do you prefer we now limit it to 64? I see some data points still using 128/256, but I recall we are supposed to run only up to conc-end: 64 in the 1k1k/8k1k setting.

….5 FP8

Signed-off-by: seungrokj <seungrok.jung@amd.com>
- isl: 1024
osl: 1024
search-space:
- { tp: 8, conc-start: 4, conc-end: 128 }
@seungrokj (Collaborator, Author)


@functionstackx do you prefer to limit conc-end to 64? I see some existing data still runs up to 128 and 256.
