Examples

The openresponses-python repository includes several proxy implementations to adapt popular LLM providers to the Open Responses standard.

All examples are located in the examples/ directory.

OpenRouter

Proxies requests to OpenRouter, which exposes many hosted models, such as DeepSeek R1 and Claude 3.5, behind a single API key.

Run:

make run-openrouter

Port: 8001
Env: OPENROUTER_API_KEY
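
Once the proxy is running, you can call it like any Responses-compatible server. The snippet below is a minimal sketch, assuming the proxy exposes the standard /v1/responses endpoint and that the model slug ("deepseek/deepseek-r1") matches OpenRouter's naming; check the example's source for the exact routes it serves.

import requests

# Hypothetical request against the OpenRouter proxy on port 8001.
# The /v1/responses path and the model slug are assumptions, not
# guarantees from this repo.
resp = requests.post(
    "http://localhost:8001/v1/responses",
    json={
        "model": "deepseek/deepseek-r1",
        "input": "Explain what a proxy server does in one sentence.",
    },
)
resp.raise_for_status()
print(resp.json())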

OpenAI

Proxies requests to OpenAI.

Run:

make run-openai

Port: 8002
Env: OPENAI_API_KEY
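
Because the proxy speaks the Responses API, the official openai Python SDK can also be pointed at it through base_url. A sketch, assuming the proxy serves the API under /v1 and ignores the client-side key (it reads OPENAI_API_KEY itself):

from openai import OpenAI

# Point the SDK at the local proxy instead of api.openai.com.
# The placeholder api_key is assumed to be ignored by the proxy.
client = OpenAI(base_url="http://localhost:8002/v1", api_key="unused")

response = client.responses.create(
    model="gpt-4o-mini",
    input="Write a haiku about proxies.",
)
print(response.output_text)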

Ollama (Local)

Proxies to a local Ollama instance running on localhost:11434.

Run:

make run-ollama

Port: 8003
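
A sketch of a request against the Ollama proxy, assuming a model such as llama3.2 has already been pulled locally (ollama pull llama3.2) and that the proxy exposes /v1/responses like the other examples:

import requests

# No API key is needed; the proxy talks to the local Ollama daemon.
# The model name must match one that Ollama has pulled.
resp = requests.post(
    "http://localhost:8003/v1/responses",
    json={"model": "llama3.2", "input": "Say hello from a local model."},
)
print(resp.json())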

LM Studio (Local)

Proxies to a local LM Studio instance running on localhost:1234.

Run:

make run-lmstudio

Port: 8004
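
The same pattern works here; the sketch below additionally requests streaming, assuming the proxy forwards the Responses API's stream flag as server-sent events. The model name is a placeholder for whatever model is loaded in LM Studio:

import requests

# Hypothetical streaming request; each SSE line is printed as it arrives.
with requests.post(
    "http://localhost:8004/v1/responses",
    json={
        "model": "local-model",  # placeholder: use the model loaded in LM Studio
        "input": "Stream a two-line poem.",
        "stream": True,
    },
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode())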

HuggingFace (Inference API / TGI)

Proxies to the HuggingFace Inference API, Inference Endpoints, or a self-hosted TGI server.

Run:

make run-huggingface

Port: 8005
Env: HF_API_KEY (optional), HF_BASE_URL (defaults to the public Inference API)
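
A sketch against the HuggingFace proxy. With HF_BASE_URL unset it targets the public Inference API, so the model field is assumed to be a Hub repo id; setting HF_BASE_URL to a TGI server's URL before launching would route requests there instead:

import requests

# The repo id below is an assumption for illustration; any model served
# by the configured backend should work.
resp = requests.post(
    "http://localhost:8005/v1/responses",
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.3",
        "input": "Summarize what Text Generation Inference (TGI) is.",
    },
)
print(resp.json())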