Signed-off-by: Andrew Xia <axia@fb.com>
Force-pushed from 5dfe9a5 to 2e30e4d.
@chaunceyjiang @esmeetu @yeqcharlotte @sfeng33 please take a look, this is ready now :)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6fdb94ca11
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1")
```
Pass an API key when creating OpenAI client
This example initializes OpenAI without api_key (and without showing OPENAI_API_KEY setup), so the snippet fails in a fresh environment before any request is sent. Readers following this "Getting Started" flow will hit a client initialization error unless they already have the env var set; the sample should include a placeholder key (for example api_key="null") or an explicit env export step.
## Evals

With vLLM's ResponsesAPI implementation, we were able to replicate Kimi K2's HLE score of 23.9. We used the open-source HLE test harness with OpenAI's o3-mini as a judge. We also ran GPT-OSS against the vLLM ResponsesAPI with MCP tools (including browser, python, and container). With high reasoning on GPT-OSS 120B, we achieved a score of 0.97 on AIME 2025, which matches OpenAI's GPT-OSS model card.
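For reference, a single eval question could be sent through the Responses API along these lines. This is a minimal sketch, not the harness the post used; the model name, endpoint, and `build_eval_request` helper are all illustrative:

```python
def build_eval_request(model: str, question: str, effort: str = "high") -> dict:
    """Assemble Responses API parameters for one eval question.

    Hypothetical helper: the actual HLE/AIME harnesses drive the
    endpoint their own way.
    """
    return {
        "model": model,
        "input": question,
        "reasoning": {"effort": effort},  # "high" reasoning, as in the eval
    }


if __name__ == "__main__":
    # Requires a local vLLM server serving a Responses-capable model.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    resp = client.responses.create(
        **build_eval_request("openai/gpt-oss-120b", "Solve: 2 + 2 = ?")
    )
    print(resp.output_text)
```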
Can you show an example of how the eval was run? Does it work with models other than gpt-oss?
```yaml
layout: post
title: "Enabling ResponsesAPI and MCP on vLLM"
author: "Meta"
image: /assets/logos/vllm-logo-text-light.png
```
Can we add an architecture or overview image of the ResponsesAPI here?
Also, add a frontend tag for this blog.