- Add is_oauth and oauth_provider fields to ProviderSpec
- Update _make_provider() to use registry for OAuth provider detection
- Update get_provider() to support OAuth providers (no API key required)
- Mark OpenAI Codex as OAuth-based provider in registry
This extends the provider registry architecture to support OAuth-based
authentication flows, making it extensible for future OAuth providers
(see the sketch after the benefits list below).
Benefits:
- OAuth providers are now registry-driven (not hardcoded)
- Extensible design: new OAuth providers only need registry entry
- Backward compatible: existing API key providers unaffected
- Clean separation: OAuth logic centralized in registry
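A minimal sketch of the new registry shape, assuming ProviderSpec is a
dataclass; only the is_oauth/oauth_provider fields and the get_provider()
behavior come from the changes above, everything else is illustrative:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class ProviderSpec:
    name: str
    default_api_base: Optional[str] = None
    # New fields: OAuth providers are flagged in the registry entry
    # rather than hardcoded in _make_provider().
    is_oauth: bool = False
    oauth_provider: Optional[str] = None  # names the OAuth token flow to use

REGISTRY: Dict[str, ProviderSpec] = {
    "openai": ProviderSpec(name="openai"),  # ordinary API-key provider
}

def get_provider(name: str, api_key: Optional[str] = None) -> ProviderSpec:
    spec = REGISTRY[name]
    # OAuth providers authenticate via a token flow, so no API key is needed.
    if not spec.is_oauth and not api_key:
        raise ValueError(f"API key required for provider {name!r}")
    return spec
```

Because the OAuth check reads the spec, adding a new OAuth provider is a
one-line registry entry rather than a code change in the factory.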
- Add OpenAI Codex ProviderSpec to registry.py
- Add openai_codex config field to ProvidersConfig in schema.py
- Mark Codex as OAuth-based (no API key required)
- Set appropriate default_api_base for Codex API
This integrates the Codex OAuth provider with the refactored
provider registry system introduced in upstream commit 299d8b3.
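A minimal sketch of the Codex wiring, reusing ProviderSpec from the sketch
above and assuming a pydantic-based schema.py; CodexProviderConfig and the
URL are illustrative placeholders, not the real identifiers or endpoint:

```python
from typing import Optional
from pydantic import BaseModel

class CodexProviderConfig(BaseModel):
    # No api_key field: Codex authenticates via OAuth (see registry flags).
    api_base: Optional[str] = None  # overrides the registry default

class ProvidersConfig(BaseModel):
    openai_codex: Optional[CodexProviderConfig] = None

# Registry entry; the URL is a stand-in, not the real Codex endpoint.
CODEX_SPEC = ProviderSpec(
    name="openai_codex",
    default_api_base="https://codex.example/v1",  # placeholder
    is_oauth=True,
    oauth_provider="openai",
)
```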
- Add moonshot to ProvidersConfig schema
- Add MOONSHOT_API_BASE environment variable for custom endpoint
- Handle kimi-k2.5 model temperature restriction (must be 1.0)
- Fix is_vllm detection to exclude the moonshot provider (see the sketch below)
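A minimal sketch of the Moonshot handling; the helper names and the default
endpoint are assumptions, only the behaviors match the bullets above:

```python
import os
from typing import Optional

# Assumed default; Moonshot's actual endpoint may differ per region.
MOONSHOT_DEFAULT_API_BASE = "https://api.moonshot.ai/v1"

def moonshot_api_base() -> str:
    # MOONSHOT_API_BASE lets users point at a custom endpoint.
    return os.environ.get("MOONSHOT_API_BASE", MOONSHOT_DEFAULT_API_BASE)

def clamp_temperature(model: str, temperature: float) -> float:
    # kimi-k2.5 rejects any temperature other than 1.0.
    if model.startswith("kimi-k2.5"):
        return 1.0
    return temperature

def is_vllm(provider: str, api_base: Optional[str]) -> bool:
    # A custom api_base previously implied a local vLLM server; moonshot
    # also sets api_base, so it is now excluded explicitly.
    return api_base is not None and provider != "moonshot"
```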
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
LiteLLM expects the 'zai/' provider prefix for Zhipu AI (Z.ai) models,
not 'zhipu/'. This was causing 'LLM Provider NOT provided' errors when
users configured models like 'glm-4.7' without an explicit prefix.
According to LiteLLM docs, the correct format is:
- model='zai/glm-4.7' (correct)
- NOT model='zhipu/glm-4.7' (incorrect)
This fix ensures auto-prefixed models use the correct 'zai/' format.
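A minimal sketch of the auto-prefixing logic; PREFIXES and maybe_prefix()
are illustrative names, not the actual identifiers in the codebase:

```python
# Maps bare model-name stems to the LiteLLM provider prefix.
PREFIXES = {
    "glm-": "zai/",  # Zhipu AI (Z.ai): LiteLLM expects 'zai/', not 'zhipu/'
}

def maybe_prefix(model: str) -> str:
    if "/" in model:
        return model  # explicit provider prefix; leave untouched
    for stem, prefix in PREFIXES.items():
        if model.startswith(stem):
            return prefix + model
    return model

assert maybe_prefix("glm-4.7") == "zai/glm-4.7"
assert maybe_prefix("zai/glm-4.7") == "zai/glm-4.7"
```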
Fixes: Error when using Zhipu AI models with shorthand names like 'glm-4.7'
- Update configuration schema to include Gemini provider
- Modify API key retrieval priority to include Gemini
- Enhance CLI status command to display Gemini API status
- Update LiteLLMProvider to support Gemini integration (see the sketch below)
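A minimal sketch of the key resolution and status display with Gemini
included; KEY_PRIORITY, the exact ordering, and the non-Gemini entries are
assumptions:

```python
import os
from typing import List, Optional, Tuple

# Ordered (provider, env var) pairs checked when resolving an API key.
KEY_PRIORITY = [
    ("gemini", "GEMINI_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
]

def first_available_key() -> Optional[Tuple[str, str]]:
    # Returns the first configured (provider, key) pair, or None.
    for provider, env_var in KEY_PRIORITY:
        key = os.environ.get(env_var)
        if key:
            return provider, key
    return None

def status_lines() -> List[str]:
    # Backs the CLI status command: one line per provider, Gemini included.
    return [
        f"{provider}: {'configured' if os.environ.get(env) else 'not set'}"
        for provider, env in KEY_PRIORITY
    ]
```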