Conversation
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook. If it is not, please create a recipe PR first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you
@claude solve the merge conflicts
Claude finished @functionstackx's task in 2m 0s (View job)

Merge Conflicts Resolved — pushed commit:

Resolved merge conflicts in 3 files:
- .github/configs/nvidia-master.yaml: kept the expanded search space from the PR
- benchmarks/single_node/minimaxm2.5_fp8_b200.sh: kept --kv-cache-dtype fp8 and --stream-interval 20 with --no-enable-prefix-caching from main
- perf-changelog.yaml: kept both main's new entries and the PR's entry

Co-authored-by: functionstackx <functionstackx@users.noreply.github.com>
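As a sketch, the three flags kept from main would sit in the benchmark script roughly as below. This is illustrative only: the array name and the echo are assumptions for demonstration, and only the three flags themselves come from the commit message above.

```shell
# Sketch of the serve flags kept from main in
# benchmarks/single_node/minimaxm2.5_fp8_b200.sh (illustrative; the actual
# script's structure and server entrypoint are not shown here).
SERVE_ARGS=(
  --kv-cache-dtype fp8            # FP8 KV cache, kept from main
  --stream-interval 20            # stream tokens back every 20 decode steps
  --no-enable-prefix-caching      # prefix caching disabled, per main
)
echo "${SERVE_ARGS[@]}"
```

In a real script these would be appended to the serve command line of the engine being benchmarked.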
functionstackx left a comment
hi @faradawn @kedarpotdar-nv, can you update & merge your existing recipe PR first before we merge this InferenceX PR? vllm-project/recipes#272
Usually we allow the InferenceX PR to merge first, since the recipe PR typically lands as a fast follow, but this time the recipes PR needs to merge first because the previous PR's recipe update still hasn't merged.
@claude remove isl: 1024 osl: 8192
Claude finished @functionstackx's task in 57s (View job)

Remove ISL 1024 / OSL 8192 Configuration

Removed the ISL 1024 / OSL 8192 seq-len config (with its tp: 2 and tp: 4 search-space entries) and pushed a commit.
Co-authored-by: functionstackx <functionstackx@users.noreply.github.com>
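For context, the removed seq-len entries in a search-space config of this shape would look roughly like the sketch below. The field names (isl, osl, tp) follow the task description above, but the exact schema of .github/configs/nvidia-master.yaml is an assumption, not confirmed.

```yaml
# Hypothetical sketch — the actual nvidia-master.yaml layout may differ.
search_space:
  - isl: 1024      # input sequence length (entry removed)
    osl: 8192      # output sequence length
    tp: 2          # tensor-parallel degree, first removed entry
  - isl: 1024
    osl: 8192
    tp: 4          # second removed entry
```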
Validated run here: https://github.com/SemiAnalysisAI/InferenceX/actions/runs/23694341288
@kedarpotdar-nv can we avoid using a nightly image here? The MiniMax M2 architecture has been around for multiple months now, and from looking at the master.yaml files, the MiniMax configs all use the release image.
