LiteLLM - Getting Started
LiteLLM provides:

- Translation of inputs to the provider's endpoints (/chat/completions, /responses, /embeddings, /images, /audio, /batches, and more)
- Consistent output: the same response format regardless of which provider you use (see the sketch below)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Spend tracking & per-project budgets - LiteLLM Proxy Server
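A minimal sketch of the unified interface, assuming litellm is installed (pip install litellm) and the relevant provider API keys (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY) are set in the environment; the model names below are illustrative.

```python
from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# Same call shape for two different providers; model names are examples.
openai_response = completion(model="gpt-4o", messages=messages)
anthropic_response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# Both responses use the same OpenAI-style schema, so the same
# accessor works regardless of which provider handled the request.
print(openai_response.choices[0].message.content)
print(anthropic_response.choices[0].message.content)
```

Because every response mirrors the OpenAI schema, downstream code reads results with one accessor and never branches per provider.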