A hands-on guide: bridging a chat/messages endpoint to /v1/responses

Goal of this guide: your upstream speaks only the messages-style /v1/chat/completions API, and you want clients (such as Codex or the OpenAI SDK) to reach it through a standard /v1/responses endpoint, with LiteLLM still calling the upstream over /v1/chat/completions. Once deployment is complete, the request chain looks like this:
Client / Codex / OpenAI SDK
|
v
http://127.0.0.1:4000/v1/responses
|
v
LiteLLM Proxy
|
v
Your upstream endpoint
http://127.0.0.1:8000/v1/chat/completions

Before you start, have the following ready:
A working upstream endpoint, for example:

http://127.0.0.1:8000/v1/chat/completions

that accepts standard chat/completions requests like:

{
"model": "gpt-4o-mini",
"messages": [
{
"role": "user",
"content": "你好"
}
]
}

The upstream API key, for example: sk-upstream

Install Python

Download Windows Python from the official python.org website.
During installation, make sure to check:
Add Python to PATH

After installation, open PowerShell and run:
python --version
pip --version

If both print version numbers, Python is installed correctly.
Create a working directory

Open PowerShell and run:
mkdir C:\litellm-proxy
cd C:\litellm-proxy

Set up a virtual environment and install LiteLLM

Run the following commands in PowerShell:
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
pip install "litellm[proxy]"先执行:
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned

Then re-activate the environment:
.\.venv\Scripts\Activate.ps1

Verify that the upstream chat/completions endpoint works

Before wiring up LiteLLM, you must first confirm that your upstream endpoint works on its own.
Assume your upstream endpoint is:
http://127.0.0.1:8000/v1/chat/completions

Run the following PowerShell commands:
$body = @{
    model = "gpt-4o-mini"
    messages = @(
        @{
            role = "user"
            content = "Reply with only: ok"
        }
    )
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:8000/v1/chat/completions" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-upstream"
        "Content-Type" = "application/json"
    } `
    -Body $body

Replace the following two values with your own:
model = "gpt-4o-mini"改成你的上游真实模型名。
"Authorization" = "Bearer sk-upstream"改成你的上游真实 key。
If the upstream does not validate keys, any placeholder value will do for now.
The response should contain fields like:
{
    "choices": [
        {
            "message": {
                "content": "ok"
            }
        }
    ]
}

If this step fails, none of the bridging below will work.
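If you prefer to check the reply in code rather than eyeballing the JSON, you can capture the response object and read the message text directly. A minimal sketch, reusing the $body from above and assuming the standard chat/completions response shape:

$resp = Invoke-RestMethod `
    -Uri "http://127.0.0.1:8000/v1/chat/completions" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-upstream"
        "Content-Type" = "application/json"
    } `
    -Body $body

# Standard chat/completions shape: the reply text lives in choices[0].message.content
$resp.choices[0].message.content    # should print: ok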
Create config.yaml

In the directory:
C:\litellm-proxy

create a new file named:
config.yaml

and put the following in it:
model_list:
  - model_name: my-bridge-model
    litellm_params:
      model: openai/gpt-4o-mini
      api_base: http://127.0.0.1:8000
      api_key: sk-upstream

general_settings:
  master_key: sk-litellm-local

What each item in config.yaml means

model_name

model_name: my-bridge-model

This is the model name clients use when they call LiteLLM.
For example, a request to LiteLLM will contain:
{
    "model": "my-bridge-model",
    "input": "Hello"
}

litellm_params.model

model: openai/gpt-4o-mini

This is the model identifier LiteLLM actually forwards to the upstream.
Note that the format must be:
openai/<real model name>

For example:
model: openai/gpt-4o-mini

If your upstream's real model name is:
my-model

then write:
model: openai/my-model

api_base

api_base: http://127.0.0.1:8000

Put only the upstream's base address here; never include the full endpoint path.
Correct:
api_base: http://127.0.0.1:8000

Wrong:
api_base: http://127.0.0.1:8000/v1
api_base: http://127.0.0.1:8000/v1/chat/completions

api_key

api_key: sk-upstream

This is the key LiteLLM uses when it calls the upstream.
If your upstream requires a key, put the real value here.
If the upstream does not validate keys, any string will do for now.
master_key

master_key: sk-litellm-local

This is the Bearer token you must present when calling the LiteLLM Proxy itself.
Every request you send to LiteLLM later must carry the header:
Authorization: Bearer sk-litellm-local

Start the LiteLLM Proxy

Run in PowerShell:
cd C:\litellm-proxy
.\.venv\Scripts\Activate.ps1
litellm --config .\config.yaml --detailed_debug

Once it starts successfully, LiteLLM listens on:

http://127.0.0.1:4000
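Before sending model traffic, you can optionally probe the proxy process itself. LiteLLM exposes health endpoints for this; the exact routes can vary by version, so treat the path below as an assumption and fall back to the startup log if it returns 404:

# Liveness probe; current LiteLLM builds answer with a short "alive" message
Invoke-RestMethod -Uri "http://127.0.0.1:4000/health/liveliness"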
Test /v1/chat/completions through LiteLLM

This step confirms that your config.yaml was loaded and that model_name is configured correctly. Run:
$body = @{
    model = "my-bridge-model"
    messages = @(
        @{
            role = "user"
            content = "Reply with only: bridge chat ok"
        }
    )
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/chat/completions" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body

The response should contain a normal text reply, for example:
bridge chat ok

This confirms that plain chat forwarding works.
Test /v1/responses

This step is the real goal: the client calls /v1/responses, and LiteLLM automatically bridges the request to the upstream chat/completions.
Run:
$body = @{
    model = "my-bridge-model"
    input = "Reply with only: bridge responses ok"
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/responses" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body

The response should contain text along the lines of:
bridge responses ok

This confirms that requests to /v1/responses are now successfully bridged to the upstream /v1/chat/completions.
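To pull the text out of the Responses payload in code, the sketch below assumes the standard Responses shape (an output array of message items whose content items carry the text). If your LiteLLM version returns a slightly different structure, dump the raw JSON first and adjust the property path:

$resp = Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/responses" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body (@{ model = "my-bridge-model"; input = "Reply with only: ok" } | ConvertTo-Json)

# Assumed shape: output[] -> message item -> content[] -> text
$resp.output[0].content[0].text

# If that path does not match your version, inspect the raw structure:
$resp | ConvertTo-Json -Depth 10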
Complete worked example

Assume your real upstream environment is:

- Base address: http://127.0.0.1:8000
- Endpoint: POST /v1/chat/completions
- Model name: gpt-4o-mini
- API key: sk-upstream

Your final config.yaml is then:
model_list:
  - model_name: my-bridge-model
    litellm_params:
      model: openai/gpt-4o-mini
      api_base: http://127.0.0.1:8000
      api_key: sk-upstream

general_settings:
  master_key: sk-litellm-local

Test the upstream directly:

$body = @{
model = "gpt-4o-mini"
messages = @(
@{
role = "user"
content = "只回复:ok"
}
)
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:8000/v1/chat/completions" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-upstream"
        "Content-Type" = "application/json"
    } `
    -Body $body

Start the proxy:

cd C:\litellm-proxy
.\.venv\Scripts\Activate.ps1
litellm --config .\config.yaml --detailed_debug

Test chat through LiteLLM:

$body = @{
model = "my-bridge-model"
messages = @(
@{
role = "user"
content = "只回复:bridge chat ok"
}
)
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/chat/completions" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body

Test /v1/responses through LiteLLM:

$body = @{
model = "my-bridge-model"
input = "只回复:bridge responses ok"
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/responses" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body

Troubleshooting

Error: "model not found"

The usual cause is that the model in the request:
{
    "model": "my-bridge-model"
}

does not match the:
model_name: my-bridge-model

in config.yaml.
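To see exactly which model names the proxy has registered, you can query its OpenAI-compatible model list. A small sketch, assuming the default /v1/models route and the master key from this guide:

# Lists the model_name entries the proxy currently serves
$models = Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/models" `
    -Headers @{ "Authorization" = "Bearer sk-litellm-local" }
$models.data.id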
Calls to the upstream hit the wrong path (e.g. 404)

This is usually a misconfigured api_base.
Correct:
api_base: http://127.0.0.1:8000

Wrong:
api_base: http://127.0.0.1:8000/v1
api_base: http://127.0.0.1:8000/v1/chat/completions

The upstream rejects LiteLLM's credentials

This means LiteLLM failed to authenticate against the upstream.
Check config.yaml:
api_key: sk-upstream

Change it to your real upstream key.
LiteLLM rejects your request

This means the Bearer token you are sending to LiteLLM is wrong.
Check the request header:
Authorization: Bearer sk-litellm-local

It must match the:
general_settings:
  master_key: sk-litellm-local

in config.yaml.
/chat/completions works, but /responses does not behave as expected

This is an inherent limitation of the bridging approach.
The reason:
the upstream natively speaks only /v1/chat/completions; /v1/responses is bridged on top of it. For plain text Q&A this is usually fine.
If you rely on more advanced native Responses semantics, behavior may differ.
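One way to probe where the bridge's limits are is to send a Responses-specific field, such as instructions, and check whether the upstream honors it. Whether a given field survives the translation depends on your LiteLLM version, so treat this as an experiment, not a guarantee:

$body = @{
    model = "my-bridge-model"
    # "instructions" is a native Responses field; the bridge may or may not
    # map it onto a system message for the upstream chat/completions call
    instructions = "Answer with exactly one word."
    input = "Reply with only: ok"
} | ConvertTo-Json -Depth 10

Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/responses" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body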
If you want Codex to connect to this local LiteLLM, the following config.toml can serve as a reference:
model = "my-bridge-model"
model_provider = "litellm_local"
[model_providers.litellm_local]
name = "LiteLLM Local"
base_url = "http://127.0.0.1:4000/v1"
env_key = "LITELLM_PROXY_API_KEY"
wire_api = "responses"然后在 PowerShell 中设置环境变量:
$env:LITELLM_PROXY_API_KEY = "sk-litellm-local"
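Note that $env: only lasts for the current PowerShell session. To persist the variable for future shells, you can use setx; it does not affect the session you run it in, so open a new terminal afterwards:

# Persists the variable for new shells (user scope)
setx LITELLM_PROXY_API_KEY "sk-litellm-local"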
Key takeaways

- First verify that the upstream /v1/chat/completions endpoint works on its own.
- litellm_params.model in config.yaml must be written as openai/<real model name>.
- api_base must be the upstream base address only; never append /v1 or /v1/chat/completions.

Quick reference

config.yaml:

model_list:
  - model_name: my-bridge-model
    litellm_params:
      model: openai/gpt-4o-mini
      api_base: http://127.0.0.1:8000
      api_key: sk-upstream

general_settings:
  master_key: sk-litellm-local

Start the proxy:

cd C:\litellm-proxy
.\.venv\Scripts\Activate.ps1
litellm --config .\config.yaml --detailed_debug

Smoke-test /v1/responses:

$body = @{
model = "my-bridge-model"
input = "只回复:hello from responses bridge"
} | ConvertTo-Json -Depth 10
Invoke-RestMethod `
    -Uri "http://127.0.0.1:4000/v1/responses" `
    -Method Post `
    -Headers @{
        "Authorization" = "Bearer sk-litellm-local"
        "Content-Type" = "application/json"
    } `
    -Body $body