
LiteLLM - Getting Started
LiteLLM will show you the normalized, provider-agnostic version of your request. This is useful for debugging, learning, and understanding how LiteLLM handles different providers and options.
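One way to see this in practice is LiteLLM's verbose debug logging; a minimal sketch, assuming litellm._turn_on_debug() is the switch that prints the normalized request for each call:

    import litellm

    # Enable debug logging; LiteLLM prints the normalized request it
    # builds before dispatching the call to the provider.
    litellm._turn_on_debug()

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )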
LiteLLM maps exceptions across all supported providers to the OpenAI exceptions. All our exceptions inherit from OpenAI's exception types, so any error handling you already have for those should work out of the box.
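Because the mapped exceptions inherit from OpenAI's types, you can catch them with the openai package's exception classes regardless of which provider you call; a minimal sketch (the model name is just an example):

    import openai
    import litellm

    try:
        response = litellm.completion(
            model="anthropic/claude-3-haiku-20240307",  # example model
            messages=[{"role": "user", "content": "Hello"}],
        )
    except openai.AuthenticationError as e:
        # e.g. a bad Anthropic key surfaces as OpenAI's AuthenticationError
        print(f"auth error: {e}")
    except openai.RateLimitError as e:
        print(f"rate limited: {e}")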
Setting API Keys, Base, Version | liteLLM
LiteLLM allows you to specify the following: API Key, API Base, API Version, API Type, Project, Location, and Token. Useful helper functions: check_valid_key() …
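As a sketch, keys can come from environment variables or be passed per call; the Azure deployment name, endpoint, and key values below are placeholders:

    import os
    from litellm import completion, check_valid_key

    # Option 1: environment variable
    os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder

    # Option 2: pass key/base/version directly on the call (Azure-style example)
    response = completion(
        model="azure/my-deployment",                      # hypothetical deployment
        messages=[{"role": "user", "content": "Hi"}],
        api_key="...",                                    # placeholder
        api_base="https://my-endpoint.openai.azure.com",  # placeholder
        api_version="2024-02-01",
    )

    # check_valid_key() makes a test request to verify the key can complete calls
    print(check_valid_key(model="gpt-3.5-turbo", api_key=os.environ["OPENAI_API_KEY"]))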
Providers - LiteLLM
LiteLLM supports all AI models from CometAPI. CometAPI provides access to 500+ AI models through a unified API interface, including cutting-edge models like GPT-5, Claude Opus 4.1, and various other models.
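A call through LiteLLM might look like the sketch below; the "cometapi/" model prefix and the COMETAPI_API_KEY variable name are assumptions, so check the provider page for the exact strings:

    import os
    import litellm

    os.environ["COMETAPI_API_KEY"] = "..."  # placeholder; variable name is an assumption

    response = litellm.completion(
        model="cometapi/gpt-5",  # hypothetical model string
        messages=[{"role": "user", "content": "Hello from LiteLLM"}],
    )
    print(response.choices[0].message.content)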
AI/ML API - liteLLM
Getting started with the AI/ML API is simple. Follow these steps to set up your integration:
1. Get Your API Key. To begin, you need an API key. You can obtain yours here: 🔑 Get Your API Key
2. Explore …
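Once you have a key, a chat call routed to the AI/ML API might look like this sketch; the AIML_API_KEY name, the model string, and the base URL are all assumptions to verify against the provider page:

    import os
    import litellm

    os.environ["AIML_API_KEY"] = "..."  # placeholder; variable name is an assumption

    response = litellm.completion(
        model="openai/gpt-4o",                  # hypothetical model string
        api_base="https://api.aimlapi.com/v1",  # hypothetical base URL
        api_key=os.environ["AIML_API_KEY"],
        messages=[{"role": "user", "content": "Hello"}],
    )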
OpenAI - liteLLM
We recommend using litellm.responses() (the Responses API) for the latest OpenAI models (GPT-5, gpt-5-codex, o3-mini, etc.).
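A minimal litellm.responses() call, using one of the models named above:

    import litellm

    response = litellm.responses(
        model="openai/o3-mini",
        input="Explain the Responses API in one sentence.",
    )
    print(response)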
/responses | liteLLM
LiteLLM allows you to call non-Responses-API models via a bridge to LiteLLM's /chat/completions endpoint. This is useful for calling Anthropic, Gemini, and even non-Responses-API OpenAI models.
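For example, the same litellm.responses() call shape works with an Anthropic model, with LiteLLM bridging it to its /chat/completions logic internally (the model name is illustrative):

    import litellm

    # A non-Responses-API model called through the Responses interface
    response = litellm.responses(
        model="anthropic/claude-3-5-sonnet-20240620",  # example model
        input="Write a haiku about the sea.",
    )
    print(response)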
MCP Overview | liteLLM
LiteLLM can automatically convert OpenAPI specifications into MCP servers, allowing you to expose any REST API as MCP tools. This is useful when you have existing APIs with OpenAPI/Swagger specifications.
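A proxy config for this might look like the YAML sketch below; every field name here is an assumption about the schema rather than the documented one, so consult the MCP docs for the real keys:

    # Hypothetical config sketch: expose a REST API (described by an
    # OpenAPI spec) as MCP tools. All field names are assumptions.
    mcp_servers:
      my_rest_api:                      # arbitrary alias for the generated server
        spec_path: "./openapi.json"     # hypothetical: path to the OpenAPI spec
        url: "https://api.example.com"  # hypothetical: base URL for the REST calls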
OpenAI - Response API | liteLLM
    import litellm

    # First, create a response
    response = litellm.responses(
        model="openai/o1-pro",
        input="Tell me a three sentence bedtime story about a unicorn.",
        max_output_tokens=100,
    )

    # Get the response ID
    print(response.id)
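The ID can then be used to fetch the response again later; this assumes litellm exposes get_responses() mirroring OpenAI's GET /responses/{id} endpoint:

    # Assumption: get_responses() retrieves a previously created response by ID
    retrieved = litellm.get_responses(response_id=response.id)
    print(retrieved)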
CLI - Quick Start | liteLLM
Run the following command to start the litellm proxy.
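A minimal invocation, assuming litellm is installed with the proxy extras (the model name is illustrative):

    $ pip install 'litellm[proxy]'
    $ litellm --model gpt-4o

    # The proxy listens on http://0.0.0.0:4000 by default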