feat(provider): add OpenVINO Model Server provider (#2193)
@@ -803,6 +803,7 @@ Config file: `~/.nanobot/config.json`
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `ollama` | LLM (local, Ollama) | — |
| `ovms` | LLM (local, OpenVINO Model Server) | [docs.openvino.ai](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
@@ -938,6 +939,81 @@ ollama run llama3.2
</details>

<details>
<summary><b>OpenVINO Model Server (local / OpenAI-compatible)</b></summary>

Run LLMs locally on Intel GPUs using [OpenVINO Model Server](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html). OVMS exposes an OpenAI-compatible API at `/v3`.

> Requires Docker and an Intel GPU with driver access (`/dev/dri`).
**1. Pull the model** (example):

```bash
mkdir -p ov/models && cd ov

docker run -d \
  --rm \
  --user $(id -u):$(id -g) \
  -v $(pwd)/models:/models \
  openvino/model_server:latest-gpu \
  --pull \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

> This downloads the model weights. Wait for the container to finish before proceeding.

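The pull container above runs detached and exits once the download finishes. One way to watch its progress, assuming it is the most recently started container on this machine:

```shell
# Follow the logs of the most recently created container (-l = latest, -q = ID only).
docker logs -f "$(docker ps -lq)"
```

Alternatively, give the pull container an explicit `--name` and follow that name directly.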
**2. Start the server** (example):

```bash
docker run -d \
  --rm \
  --name ovms \
  --user $(id -u):$(id -g) \
  -p 8000:8000 \
  -v $(pwd)/models:/models \
  --device /dev/dri \
  --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
  openvino/model_server:latest-gpu \
  --rest_port 8000 \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

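Before wiring up any config, you can smoke-test the endpoint directly. A minimal sketch, assuming the container from step 2 is listening on port 8000 (the request shape is the standard OpenAI chat-completions payload, which OVMS serves under `/v3`):

```shell
# Minimal OpenAI-style chat request against the local OVMS endpoint.
curl -s http://localhost:8000/v3/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-20b",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

A JSON response with a `choices` array indicates the server is ready.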
**3. Add to config** (partial — merge into `~/.nanobot/config.json`):

```json
{
  "providers": {
    "ovms": {
      "apiBase": "http://localhost:8000/v3"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ovms",
      "model": "openai/gpt-oss-20b"
    }
  }
}
```

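The fragment above is partial, so it must be merged into an existing config rather than overwrite it. A sketch of one way to do that merge in Python; the pre-existing `ollama` entry is a hypothetical example of a config you might already have:

```python
import json

def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge `patch` into `base`; patch values win on conflicts."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# The fragment from step 3: the OVMS provider plus default agent settings.
ovms_fragment = {
    "providers": {"ovms": {"apiBase": "http://localhost:8000/v3"}},
    "agents": {"defaults": {"provider": "ovms", "model": "openai/gpt-oss-20b"}},
}

# Hypothetical existing config with another provider already set up.
existing = {"providers": {"ollama": {"apiBase": "http://localhost:11434/v1"}}}

merged = deep_merge(existing, ovms_fragment)
print(json.dumps(merged, indent=2))
```

The merge keeps the existing provider entries and adds `ovms` alongside them; in practice you would read and write `~/.nanobot/config.json` instead of the in-memory dicts.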
> OVMS is a local server — no API key required. Supports tool calling (`--tool_parser gptoss`), reasoning (`--reasoning_parser gptoss`), and streaming.
> See the [official OVMS docs](https://docs.openvino.ai/2026/model-server/ovms_docs_llm_quickstart.html) for more details.

</details>

<details>
<summary><b>vLLM (local / OpenAI-compatible)</b></summary>
