# UniGateway LLM + Agent Guide

UniGateway is an OpenAI-compatible unified AI gateway: a single endpoint in front of multiple providers and models.

Base API URL: https://api.unigateway.ai/v1
Docs Home: https://unigateway.ai/docs
Models: https://unigateway.ai/models
Pricing: https://unigateway.ai/pricing
LLM Full Context: https://unigateway.ai/llms-full.txt

## Published docs index

- [Overview](https://unigateway.ai/docs/overview)
- [Quickstart](https://unigateway.ai/docs/quickstart)
- [Authentication and Request Conventions](https://unigateway.ai/docs/authentication)
- [Chat Completions](https://unigateway.ai/docs/chat-completions)
- [Streaming](https://unigateway.ai/docs/streaming)
- [Models](https://unigateway.ai/docs/models)
- [ByteDance (Seedance)](https://unigateway.ai/docs/seedance-overview)
- [Seedance / Create Task](https://unigateway.ai/docs/seedance-create-task)
- [Seedance / Query Task](https://unigateway.ai/docs/seedance-task-query)
- [Seedance / Asset Libraries](https://unigateway.ai/docs/seedance-asset-libraries)
- [Endpoint Compatibility Matrix](https://unigateway.ai/docs/endpoint-compatibility)
- [Model Selection and Fallback](https://unigateway.ai/docs/model-selection-and-fallback)
- [Error Handling and Retries](https://unigateway.ai/docs/error-handling-and-retries)
- [OpenAI SDK Integration](https://unigateway.ai/docs/openai-sdk)
- [Dify Integration](https://unigateway.ai/docs/dify)
- [OpenWebUI Integration](https://unigateway.ai/docs/openwebui)
- [Coding Tools and Agents](https://unigateway.ai/docs/coding-tools-and-agents)
- [LobeChat Integration](https://unigateway.ai/docs/lobechat)
- [n8n Integration](https://unigateway.ai/docs/n8n)
- [LangChain Integration](https://unigateway.ai/docs/langchain)
- [Cherry Studio Integration](https://unigateway.ai/docs/cherry-studio)
- [Flowise Integration](https://unigateway.ai/docs/flowise)
- [Continue Integration](https://unigateway.ai/docs/continue)
- [Cline Integration](https://unigateway.ai/docs/cline)

## LLM-friendly markdown endpoints

- Canonical docs URL: https://unigateway.ai/docs/{slug}
- Markdown mirror URL: https://unigateway.ai/docs/{slug}.md

## Agent maintenance protocol (JWT)

Use an Admin JWT for create/update/delete operations.

### 1) Login and get a JWT

POST /api/auth/login
Content-Type: application/json

{ "email": "", "password": "" }

Read the token from the response field `token`, then send it on subsequent requests:

Authorization: Bearer <token>

### 2) Read documentation

- GET /api/docs
- GET /api/docs/{slug}

### 3) Manage categories (Admin JWT required)

- GET /api/admin/doc-categories
- POST /api/admin/doc-categories
- PUT /api/admin/doc-categories/{id}
- DELETE /api/admin/doc-categories/{id}

### 4) Manage docs (Admin JWT required)

- POST /api/docs
- PUT /api/docs/{slug}
- DELETE /api/docs/{slug}

### 5) Update this llms.txt through the API (Admin JWT required)

- GET /api/admin/llms-txt
- PUT /api/admin/llms-txt

PUT payload:

{ "content": "# UniGateway LLM + Agent Guide ..." }

## Authoring conventions for agents

1) Slugs are lowercase with hyphens only (`^[a-z0-9-]+$`).
2) Keep EN and ZH content synchronized.
3) Keep code examples runnable and OpenAI-compatible.
4) Prefer additive updates; avoid deleting existing docs unless explicitly requested.
5) Keep the first 160 characters concise for metadata extraction.

---

# Full Documentation Content

## Getting Started

# Overview

> Category: Getting Started | Last updated: 2026-04-08

What UniGateway is, who it is for, and the first endpoints customers should know.

# UniGateway Overview

UniGateway is a unified AI gateway and aggregation platform with API root `https://api.unigateway.ai`. You can reach mainstream model families through compatible paths such as `/v1`, `/v1beta`, and other vendor-aligned routes behind one consistent integration layer.
Current model families:

- OpenAI (GPT)
- Anthropic (Claude)
- Google (Gemini)

This docs pack follows a practical structure for mainstream AI API integrations:

- Start with quick integration
- Keep endpoint contracts explicit
- Use `/v1/models` as the source of truth for model IDs

## Base URLs

- API root: `https://api.unigateway.ai`
- Common versioned paths: `/v1`, `/v1beta`
- The exact path depends on the vendor-compatible interface you are calling

## Core Endpoints

- `GET /v1/models`
- `POST /v1/chat/completions`

Code-confirmed additional endpoint families:

- Responses: `/v1/responses`
- Embeddings: `/v1/embeddings`
- Images: `/v1/images/*`
- Audio: `/v1/audio/*`
- Rerank: `/v1/rerank`
- Claude-format: `/v1/messages`
- Gemini-format: `/v1beta/models/*`

## Model ID Rule

Always use an exact model ID returned by `GET /v1/models`. Do not hardcode guessed names in production.

Example IDs (illustrative):

- `gpt-5.2`
- `claude-sonnet-4-6`
- `gemini-3-pro-preview`

## Request Compatibility

UniGateway supports mainstream vendor-compatible API patterns across model families, including unified routes and official-format routes where needed. Supported parameters still vary by model family and provider backend.

Recommended integration flow:

1. Query `/v1/models`.
2. Select a model by capability and cost.
3. Send requests to `/v1/chat/completions`.
4. Add fallback logic between model families when needed.
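The fallback step of this flow can be sketched as a small helper. This is a hedged example, not a UniGateway SDK feature: `send` stands in for whatever call you make per model (for example `client.chat.completions.create` with the OpenAI SDK), and in real code the bare `except Exception` should be narrowed to retryable failures.

```python
def complete_with_fallback(send, chain):
    """Try each model ID in order; return (model_id, response) from the
    first model that succeeds, or raise after the chain is exhausted."""
    last_error = None
    for model_id in chain:
        try:
            return model_id, send(model_id)
        except Exception as exc:  # narrow this to retryable errors in production
            last_error = exc
    raise RuntimeError("all models in the fallback chain failed") from last_error
```

Usage with the OpenAI SDK would look like `complete_with_fallback(lambda m: client.chat.completions.create(model=m, messages=msgs), ["gpt-5.2", "claude-sonnet-4-6", "gemini-3-pro-preview"])`, with the chain built from IDs confirmed via `GET /v1/models`.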
---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "user", "content": "Say hello from UniGateway."}
    ]
  }'
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
resp = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Say hello from UniGateway."}],
)
print(resp.choices[0].message.content)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const resp = await client.chat.completions.create({
  model: "gpt-5.2",
  messages: [{ role: "user", content: "Say hello from UniGateway." }],
});
console.log(resp.choices[0]?.message?.content);
```

# Quickstart

> Category: Getting Started | Last updated: 2026-04-08

Create an API key, point your SDK to UniGateway, and make the first successful request.

# Quickstart

This guide takes you from API key to first successful response.

## Prerequisites

1. Have an active UniGateway account.
2. Create an API key from the dashboard.
3. Ensure account balance/quota is available.

## 1. Prepare Your API Key

Set your key locally:

```bash
export UNIGATEWAY_API_KEY="<your-api-key>"
```

## 2. Confirm Available Models

```bash
curl https://api.unigateway.ai/v1/models \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

Pick a model ID from the response.

## 3. Send Your First Chat Request

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "user", "content": "Write a one-line launch note for UniGateway."}
    ]
  }'
```

## 4. Try Different Model Families

Use the same endpoint with different model IDs:

- OpenAI: `gpt-5.2`
- Claude: `claude-sonnet-4-6`
- Gemini: `gemini-3-pro-preview`

Then define your fallback chain for live traffic. See `Model Selection and Fallback`.

## Python Example

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
resp = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Summarize UniGateway in 3 bullets."}],
)
print(resp.choices[0].message.content)
```

## TypeScript Example

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const resp = await client.chat.completions.create({
  model: "gemini-3-pro-preview",
  messages: [{ role: "user", content: "Give me a short API onboarding checklist." }],
});
console.log(resp.choices[0]?.message?.content);
```

---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Write a 1-line product tagline for UniGateway."}
    ],
    "temperature": 0.3
  }'
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
resp = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Write a 1-line product tagline for UniGateway."},
    ],
    temperature=0.3,
)
print(resp.choices[0].message.content)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const resp = await client.chat.completions.create({
  model: "gpt-5.2",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Write a 1-line product tagline for UniGateway." }
  ],
  temperature: 0.3,
});
console.log(resp.choices[0]?.message?.content);
```

# Authentication

> Category: Getting Started | Last updated: 2026-04-08

How to send API keys, set headers, and avoid common authentication mistakes.

# Authentication

UniGateway uses bearer-token authentication for API requests.

## Required Header

```http
Authorization: Bearer <your-api-key>
```

## Common Request Shape

```bash
curl https://api.unigateway.ai/v1/models \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

For JSON POST requests, also include:

```http
Content-Type: application/json
```

## Header Compatibility Note

For most UniGateway calls, `Authorization: Bearer ...` is sufficient. Some compatibility behavior in `/v1/models` can inspect additional compatibility headers (Anthropic/Gemini), but those are optional for standard UniGateway usage.

## Security Practices

- Never expose API keys in client-side code.
- Keep production and staging keys separate.
- Rotate keys when leakage is suspected.
- Add server-side rate limiting and retry backoff.

## Common Errors

- `401 Unauthorized`: key missing, invalid, or expired.
- `403 Forbidden`: key exists but lacks the required permission.
- `429 Too Many Requests`: request rate or quota exceeded.

## Troubleshooting Checklist

1. Verify the `Authorization` header format is exactly `Bearer <your-api-key>`.
2. Verify the base URL is `https://api.unigateway.ai/v1`.
3. Verify the model ID exists in `GET /v1/models`.
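The error list above translates into a simple retry policy: back off on `429` and transient `5xx`, and never retry `401`/`403` (fix the key or permission instead). A minimal sketch, assuming `do_request` is your own wrapper returning `(status_code, body)`; the status set and backoff constants are reasonable defaults, not documented UniGateway limits:

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}  # 401/403 are not retryable: fix the key instead

def backoff_delay(attempt, base=0.5, cap=8.0):
    """Exponential backoff with full jitter: uniform in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

def request_with_retry(do_request, max_attempts=4):
    """Repeat do_request() on retryable statuses, sleeping between attempts."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in RETRYABLE:
            return status, body
        time.sleep(backoff_delay(attempt))
    return status, body
```

Full jitter keeps many clients from retrying in lockstep after a shared outage; cap the total attempts so a persistent failure surfaces quickly.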
---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/models \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
```

## Core API

# Chat Completions

> Category: Core API | Last updated: 2026-04-08

Send text prompts through UniGateway's unified chat completions API across mainstream model families.

# Chat Completions

Use this endpoint for text generation and multi-turn conversation.

- Method: `POST`
- Path: `/v1/chat/completions`
- Base URL: `https://api.unigateway.ai/v1`

## Minimal Request

```json
{
  "model": "gpt-5.2",
  "messages": [
    { "role": "user", "content": "Explain unified AI gateway benefits in 3 bullets." }
  ]
}
```

## cURL Example

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Write a migration note from single-provider to UniGateway."}
    ],
    "temperature": 0.3
  }'
```

## Key Parameters

| Field | Type | Notes |
|---|---|---|
| `model` | string | Use the exact value from `/v1/models`. |
| `messages` | array | Standard chat message array used by mainstream compatible SDKs. |
| `temperature` | number | Creativity control; the supported range depends on the model. |
| `max_tokens` | number | Upper bound on generated tokens. |
| `stream` | boolean | Enable SSE streaming when `true`. |

Some advanced parameters may be model-family specific. Validate by testing with your chosen model ID.
## Basic Response Shape

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "model": "gemini-3-pro-preview",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "..." },
      "finish_reason": "stop"
    }
  ]
}
```

---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {"role": "user", "content": "Summarize the benefits of a unified AI gateway in 3 bullets."}
    ],
    "temperature": 0.2
  }'
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
resp = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Summarize the benefits of a unified AI gateway in 3 bullets."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const resp = await client.chat.completions.create({
  model: "gpt-5.2",
  messages: [
    { role: "user", content: "Summarize the benefits of a unified AI gateway in 3 bullets." }
  ],
  temperature: 0.2,
});
console.log(resp.choices[0]?.message?.content);
```

# Streaming

> Category: Core API | Last updated: 2026-04-08

Enable stream mode for lower-latency chat responses and incremental UI rendering.

# Streaming

Streaming returns incremental output over Server-Sent Events (SSE).
- Method: `POST`
- Path: `/v1/chat/completions`
- Set `"stream": true`

## cURL Example

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "gemini-3-pro-preview",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Give me a step-by-step launch checklist."}
    ]
  }'
```

## Event Format

Streaming responses follow standard SSE framing:

- `data: { ...chunk... }`
- `data: [DONE]`

## Python Example

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    stream=True,
    messages=[{"role": "user", "content": "Explain SSE in one paragraph."}],
)
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    if delta:
        print(delta, end="", flush=True)
```

## TypeScript Example

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const stream = await client.chat.completions.create({
  model: "gpt-5.2",
  stream: true,
  messages: [{ role: "user", content: "Describe streaming UX best practices." }],
});
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  if (delta) process.stdout.write(delta);
}
```

---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "gpt-5.2",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Explain SSE in one paragraph."}
    ]
  }'
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
stream = client.chat.completions.create(
    model="gpt-5.2",
    stream=True,
    messages=[{"role": "user", "content": "Explain SSE in one paragraph."}],
)
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    if delta:
        print(delta, end="", flush=True)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const stream = await client.chat.completions.create({
  model: "gpt-5.2",
  stream: true,
  messages: [{ role: "user", content: "Explain SSE in one paragraph." }],
});
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  if (delta) process.stdout.write(delta);
}
```

# Models

> Category: Core API | Last updated: 2026-04-08

List available models and choose stable model IDs before integrating.

# Models

Use this endpoint to discover currently available models.

- Method: `GET`
- Path: `/v1/models`
- URL: `https://api.unigateway.ai/v1/models`

## cURL

```bash
curl https://api.unigateway.ai/v1/models \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

## Typical Response Shape

```json
{
  "object": "list",
  "data": [
    { "id": "gpt-5.2", "object": "model" },
    { "id": "claude-sonnet-4-6", "object": "model" },
    { "id": "gemini-3-pro-preview", "object": "model" }
  ]
}
```

## Format Note

`GET /v1/models` defaults to the standard list response used by common SDKs. Specific compatibility headers can trigger Anthropic/Gemini-style output when needed.
Use the default list response unless you explicitly need vendor-specific format compatibility.

## What To Read From This Response

- `id`: the exact model ID you can pass into requests.
- Account-specific availability: treat the live response as the source of truth instead of external screenshots or stale examples.
- Capability hints: some environments expose richer metadata through the UI or adjacent APIs, but the safest integration key remains the exact `id`.

## Scenario-Based Selection

Use the live list to choose an exact model ID for each workload.

| Scenario | Preferred model family | Why | Operational note |
|---|---|---|---|
| General chat and agents | GPT / Claude class models | Best balance of instruction following and tool use | Start from a stable non-preview ID when possible |
| Balanced production traffic | GPT / Claude / Gemini mid-tier models | Easier fallback across families | Keep the request shape conservative |
| Low-latency user interactions | Faster variants in your account | Better perceived responsiveness | Validate quality before routing full traffic |
| Multilingual generation | GPT / Claude / Gemini general-purpose models | Broader language coverage | Re-test prompts after switching families |
| Embeddings or rerank | Endpoint-specific models from the live list | Different endpoint contract than chat | Confirm endpoint support before rollout |
| Video generation | Separate video surfaces | Uses provider-specific task routes | Do not assume `/v1/chat/completions` models apply |

## Selection Workflow

1. Read `GET /v1/models`.
2. Group candidate IDs by workload: chat, embeddings, multimodal, or provider-specific tasks.
3. Canary one real request per candidate.
4. Pick a primary model plus one or two fallbacks.
5. Re-check model availability on a short cache interval or at each deployment boundary.

## Model Selection Guidance

- Fetch the model list at startup or on a short cache interval.
- Pin model IDs by business use case.
- Configure provider-family fallback for reliability.

Recommended fallback chain example:

1. `gpt-5.2`
2. `claude-sonnet-4-6`
3. `gemini-3-pro-preview`

## Integration Tips

- Treat model availability as dynamic.
- Do not infer support from the model name alone; verify with a real request.
- Avoid assumptions about unsupported parameters across providers.
- Re-validate important flows when switching model families.

---

## Code examples

### curl

```bash
curl https://api.unigateway.ai/v1/models \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

### python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UNIGATEWAY_API_KEY"],
    base_url="https://api.unigateway.ai/v1",
)
models = client.models.list()
for item in models.data:
    print(item.id)
```

### typescript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.UNIGATEWAY_API_KEY,
  baseURL: "https://api.unigateway.ai/v1",
});
const models = await client.models.list();
for (const item of models.data) {
  console.log(item.id);
}
```

## Video Generation

# ByteDance (Seedance)

> Category: Video Generation | Last updated: 2026-04-08

Overview of the ByteDance (Seedance) workflow, interface surfaces, and child endpoint docs.

# ByteDance (Seedance)

ByteDance (Seedance) is organized as its own provider subtree under `Video Generation`. Use this guide to understand the Seedance-specific workflow before integrating the individual endpoints.

## Interface Surfaces

| Surface | Base URL | Used For |
|---|---|---|
| Task APIs | `https://video.unigateway.ai` | Creating and querying asynchronous video generation tasks |
| Asset APIs | `https://api.unigateway.ai` | Managing asset groups and reusable assets |

## Recommended Flow

1. Create an asynchronous generation task with `POST /api/v3/contents/generations/tasks`.
2. Poll status and fetch results with `GET /api/v3/contents/generations/tasks/{id}`.
3. Use the asset-library routes when your workflow depends on reusable references, brand assets, or supplier-managed media.

## Interface Layout

| Workflow stage | Main endpoint | Purpose |
|---|---|---|
| Create Task | `POST /api/v3/contents/generations/tasks` | Start an asynchronous generation job |
| Query Task | `GET /api/v3/contents/generations/tasks/{id}` | Poll task status and retrieve the final output |
| Asset Libraries | `POST /api/v3/contents/generations/asset-groups`, `POST /api/v3/contents/generations/assets`, `POST /api/v3/contents/generations/assets/list`, `DELETE /api/v3/contents/generations/assets/{id}` | Manage reusable assets and asset groups |

## Task Lifecycle

| Status | Meaning | Recommended action |
|---|---|---|
| `queued` | Accepted but not yet running | Keep polling with a short backoff |
| `running` | Generation in progress | Continue polling; do not blindly create duplicate tasks |
| `succeeded` | Final output ready | Read the output URL and usage, then persist your own job record |
| `failed` | Upstream or validation failure | Inspect the error payload and decide whether to retry |
| `expired` | Result or task window expired | Re-submit a new task if the job is still needed |
| `cancelled` | Task terminated before success | Treat as terminal and decide whether to create a new task |

## Rollout Checklist

1. Confirm whether you are using the video surface or the core API surface.
2. Start with a minimal create request and validate one successful query cycle.
3. Add ratio, duration, and audio options only after the basic flow succeeds.
4. Introduce retries and queue backoff before sending production traffic.

## Notes

- Seedance uses a task-based asynchronous model rather than a single synchronous generation call.
- Task APIs and asset APIs live on different UniGateway surfaces, so check the base URL for each interface before integrating.
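The create-then-poll flow above can be sketched end to end with the standard library. This is a hedged sketch: `generate_video` and `next_delay` are illustrative helper names, the backoff constants are assumptions rather than documented limits, and production code should add an overall timeout and real error handling.

```python
import json
import time
import urllib.request

VIDEO_BASE = "https://video.unigateway.ai"
TERMINAL_STATES = {"succeeded", "failed", "expired", "cancelled"}

def next_delay(delay, factor=1.5, cap=30.0):
    """Grow the polling interval gradually instead of using a tight fixed loop."""
    return min(delay * factor, cap)

def _call(method, path, api_key, payload=None):
    req = urllib.request.Request(
        VIDEO_BASE + path,
        method=method,
        data=json.dumps(payload).encode() if payload is not None else None,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def generate_video(api_key, model, prompt):
    """Create a task, then poll until it reaches a terminal state."""
    task = _call("POST", "/api/v3/contents/generations/tasks", api_key,
                 {"model": model, "content": [{"type": "text", "text": prompt}]})
    delay = 3.0  # initial 2-3s wait before the first query, per the guidance below
    while True:
        time.sleep(delay)
        state = _call("GET", f"/api/v3/contents/generations/tasks/{task['id']}",
                      api_key)
        if state["status"] in TERMINAL_STATES:
            return state
        delay = next_delay(delay)
```

The caller still has to inspect `status` on the returned record: `succeeded` carries the output URL, while `failed`, `expired`, and `cancelled` are terminal without one.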
# Seedance / Create Task

> Category: Video Generation | Last updated: 2026-04-08

Create asynchronous video generation tasks with the ByteDance (Seedance) interface.

# Seedance / Create Task

Use this page to create a ByteDance (Seedance) video generation task.

## Request Surface

| Item | Value |
|---|---|
| Base URL | `https://video.unigateway.ai` |
| Auth | `Authorization: Bearer $UNIGATEWAY_API_KEY` |

## Endpoint

- Method: `POST`
- Path: `/api/v3/contents/generations/tasks`
- URL: `https://video.unigateway.ai/api/v3/contents/generations/tasks`

### Minimal Request

```json
{
  "model": "doubao-seedance-2-0-260128",
  "content": [
    { "type": "text", "text": "A cinematic slow-motion shiba dog running on a wet street after rain." }
  ]
}
```

### Common Optional Fields

| Field | Type | Notes |
|---|---|---|
| `ratio` | string | Aspect ratio, for example `16:9` or `9:16`. |
| `duration` | number | Target output duration in seconds. |
| `resolution` | string | Resolution tier (model-dependent). |
| `generate_audio` | boolean | Whether to generate audio with the video. |
| `service_tier` | string | Service tier for routing/billing policies. |

### Validation Notes

| Field | Rule |
|---|---|
| `model` | Use an exact Seedance-capable model ID from your account. |
| `content` | Must include at least one supported input item. |
| `ratio` / `resolution` | Treat as model-dependent; validate with one canary request first. |
| `generate_audio` | Do not assume all model variants support it. |

### cURL Example

```bash
curl https://video.unigateway.ai/api/v3/contents/generations/tasks \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "doubao-seedance-2-0-fast-260128",
    "content": [
      {"type": "text", "text": "Cyberpunk downtown at night, fast forward tracking shot."}
    ],
    "ratio": "16:9",
    "duration": 5,
    "generate_audio": false
  }'
```

### Response Example

```json
{ "id": "media_task_8f3d0a" }
```

## Notes

1. A successful create call returns a task `id`; the final output is fetched later.
2. Query task state with `GET /api/v3/contents/generations/tasks/{id}` using the returned task `id`.
3. This request shape is specific to the Seedance task model. Do not assume it is interchangeable with future video providers.

## Recommended Handoff To Polling

- Wait `2-3s` before the first query in normal traffic.
- Increase the interval gradually if the task remains `queued` or `running`.
- Keep your own request trace so duplicate submits can be detected.

## Common Failure Cases

| Status | Likely cause | What to do |
|---|---|---|
| `400` | Invalid field shape or unsupported option | Remove optional fields and retry with the minimal payload |
| `401` / `403` | Invalid key or missing permission | Fix credentials or account access before retrying |
| `429` | Rate or review limit | Back off and queue requests instead of hammering create |
| `5xx` | Upstream or gateway issue | Retry with capped backoff, then decide whether to re-submit |

---

## Code examples

### curl

```bash
curl https://video.unigateway.ai/api/v3/contents/generations/tasks \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "doubao-seedance-2-0-fast-260128",
    "content": [
      {"type": "text", "text": "Cyberpunk downtown at night, fast tracking shot."}
    ],
    "ratio": "16:9",
    "duration": 5,
    "generate_audio": false
  }'
```

### python

```python
import os

import requests

api_key = os.environ["UNIGATEWAY_API_KEY"]
base_url = "https://video.unigateway.ai"
headers = {
    "Authorization": "Bearer " + api_key,
    "Content-Type": "application/json",
}
resp = requests.post(
    base_url + "/api/v3/contents/generations/tasks",
    headers=headers,
    json={
        "model": "doubao-seedance-2-0-260128",
        "content": [{"type": "text", "text": "A cinematic city night drive."}],
        "ratio": "16:9",
        "duration": 5,
    },
)
resp.raise_for_status()
print(resp.json())
```

### typescript

```typescript
const baseURL = "https://video.unigateway.ai";
const resp = await fetch(`${baseURL}/api/v3/contents/generations/tasks`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.UNIGATEWAY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "doubao-seedance-2-0-fast-260128",
    content: [{ type: "text", text: "A cinematic city night drive." }],
    ratio: "16:9",
    duration: 5,
    generate_audio: false,
  }),
});
console.log(await resp.json());
```

# Seedance / Query Task

> Category: Video Generation | Last updated: 2026-04-08

Query asynchronous video generation task results with the ByteDance (Seedance) interface.

# Seedance / Query Task

Use this page to query a previously created ByteDance (Seedance) video generation task.

## Request Surface

| Item | Value |
|---|---|
| Base URL | `https://video.unigateway.ai` |
| Auth | `Authorization: Bearer $UNIGATEWAY_API_KEY` |
| Primary endpoint | `GET /api/v3/contents/generations/tasks/{id}` |

## Query Video Task

- Method: `GET`
- Path: `/api/v3/contents/generations/tasks/{id}`

### cURL Example

```bash
curl "https://video.unigateway.ai/api/v3/contents/generations/tasks/media_task_8f3d0a" \
  -H "Authorization: Bearer $UNIGATEWAY_API_KEY"
```

### Status Values

- `queued`
- `running`
- `succeeded`
- `failed`
- `expired`
- `cancelled`

### Response Example (Success)

```json
{
  "id": "media_task_8f3d0a",
  "model": "doubao-seedance-2-0-260128",
  "status": "succeeded",
  "content": {
    "video_url": "https://cdn.example.com/media/output/media_task_8f3d0a.mp4"
  },
  "usage": { "completion_tokens": 108900, "total_tokens": 108900 },
  "created_at": 1775344800,
  "updated_at": 1775344815,
  "ratio": "16:9",
  "duration": 5,
  "generate_audio": false,
  "error": null
}
```

### Response Example (Failure)

```json
{
  "id": "media_task_8f3d0a",
  "model": "doubao-seedance-2-0-fast-260128",
  "status": "failed",
  "content": {},
  "usage": null,
  "created_at": 1775344800,
  "updated_at": 1775344810,
  "error": { "code": "UPSTREAM_ERROR", "message": "Upstream task failed" }
}
```

## Polling Recommendation

- Start with a `2-3s` initial delay after task creation.
- Poll more slowly as elapsed time grows instead of using a fixed tight loop.
- Put an overall timeout in your own job runner so tasks do not poll forever.

## Terminal-State Handling

| Status | Handling |
|---|---|
| `succeeded` | Persist the result URL, usage, and task metadata. |
| `failed` | Inspect `error.code` and `error.message`, then decide whether to create a new task. |
| `expired` | Treat as terminal and re-submit only if the business operation still matters. |
| `cancelled` | Treat as terminal and investigate why the task was interrupted. |

## Notes

1. Query with the exact task `id` returned by the create call.
2. Treat `queued` and `running` as in-progress states.
3. `content.video_url` and `usage` may be empty until the task reaches a terminal state.

## Common Failure Cases

| Status | Likely cause | What to do |
|---|---|---|
| `404` | Wrong task ID or wrong surface | Verify the ID and base URL first |
| `429` | Polling too aggressively | Add backoff and reduce concurrent pollers |
| `5xx` | Gateway or upstream instability | Retry with capped backoff and keep the same task ID |

# Seedance / Asset Libraries

> Category: Video Generation | Last updated: 2026-04-08

Manage Seedance-compatible asset groups and assets for reusable video workflows.

# Seedance / Asset Libraries

UniGateway exposes Seedance-compatible asset-library endpoints on top of its media system.

## Request Surface

| Item | Value |
|---|---|
| Base URL | `https://api.unigateway.ai` |
| Auth | `Authorization: Bearer $UNIGATEWAY_API_KEY` |

## Covered Routes

- `POST /api/v3/contents/generations/asset-groups`
- `POST /api/v3/contents/generations/assets`
- `POST /api/v3/contents/generations/assets/list`
- `DELETE /api/v3/contents/generations/assets/{id}`
- Compatibility aliases: `GET /v1/asset/groups`, `POST /v1/asset/list`, `POST /v1/delete/asset`

## Operational Rules

- Keep `ProjectName` / `project_name` consistent within the same workflow.
- Treat group ownership checks as part of the public contract, not as an internal detail.
- Roll out list and delete operations only after one create-to-list round trip succeeds.

## Create Asset Group

- Method: `POST`
- Path: `/api/v3/contents/generations/asset-groups`

Typical fields:

| Field | Type | Notes |
|---|---|---|
| `Name` | string | Required group name. |
| `Description` | string | Optional description. |
| `GroupType` | string | Commonly `AIGC`. |
| `ProjectName` | string | Optional logical project namespace. |

### Request Example

```json
{
  "Name": "character-references",
  "Description": "Approved Seedance reusable references",
  "GroupType": "AIGC",
  "ProjectName": "default"
}
```

## Create Asset

- Method: `POST`
- Path: `/api/v3/contents/generations/assets`

Typical fields:

| Field | Type | Notes |
|---|---|---|
| `Name` | string | Asset display name. |
| `AssetType` | string | For example `Image` or `Video`. |
| `URL` | string | Source asset URL. |
| `GroupId` | string | Required when binding to an asset group. |

Notes:

- Aliased lowercase fields such as `name`, `asset_type`, `url`, and `group_id` are accepted.
- Polling hints like `PollInterval` and `PollTimeout` are ignored by the public contract.
### Request Example ```json { "Name": "hero-reference-01", "AssetType": "Image", "URL": "https://images.example.com/hero-reference-01.png", "GroupId": "group-20260406234335-4t6cj" } ``` ### Response Example ```json { "asset_id": "asset-upstream-1", "GroupId": "group-20260406234335-4t6cj", "AssetType": "Image", "URL": "https://images.example.com/hero-reference-01.png", "Status": "Creating" } ``` ## List Assets - Method: `POST` - Path: `/api/v3/contents/generations/assets/list` - Compatibility alias: `POST /v1/asset/list` ### Request Example ```json { "Filter": { "GroupIds": ["group-20260406234335-4t6cj"], "GroupType": "AIGC" }, "PageNumber": 1, "PageSize": 20, "SortBy": "CreateTime", "SortOrder": "Desc" } ``` ### Response Example ```json { "Items": [ { "Id": "asset-upstream-1", "Name": "hero-reference-01", "URL": "https://images.example.com/hero-reference-01.png", "GroupId": "group-20260406234335-4t6cj", "AssetType": "Image", "Status": "Processing", "ProjectName": "default", "CreateTime": "2026-04-07T00:00:00.000Z", "UpdateTime": "2026-04-07T00:00:03.000Z" } ], "TotalCount": 1, "PageNumber": 1, "PageSize": 20 } ``` ## Delete Asset - Method: `DELETE` - Path: `/api/v3/contents/generations/assets/{id}` - Compatibility alias: `POST /v1/delete/asset` Optional query/body field: - `ProjectName` or `project_name` ### cURL Example ```bash curl -X DELETE "https://api.unigateway.ai/api/v3/contents/generations/assets/asset-upstream-1?project_name=default" \ -H "Authorization: Bearer $UNIGATEWAY_API_KEY" ``` ### Success Response ```json { "success": true, "asset_id": "asset-upstream-1", "response": { "Result": { "Id": "asset-upstream-1" } } } ``` ## Access And Safety Notes 1. Asset creation checks group ownership before forwarding to the supplier. 2. Seedance per-user review limits can trigger `429` responses. 3. Asset list/delete operations reconcile local UniGateway bindings with upstream supplier state. 
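The list call above can be scripted with only the Python standard library. A minimal sketch: the endpoint path and field names come from this page, the response handling (`TotalCount`) assumes the response example shown above, and the network request only fires when `UNIGATEWAY_API_KEY` is set.

```python
import json
import os
import urllib.request

BASE = "https://api.unigateway.ai"


def build_list_payload(group_id: str, page: int = 1, size: int = 20) -> dict:
    """Request body for POST /api/v3/contents/generations/assets/list."""
    return {
        "Filter": {"GroupIds": [group_id], "GroupType": "AIGC"},
        "PageNumber": page,
        "PageSize": size,
        "SortBy": "CreateTime",
        "SortOrder": "Desc",
    }


def post_json(path: str, payload: dict, api_key: str) -> dict:
    """POST a JSON body to a UniGateway path and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


key = os.environ.get("UNIGATEWAY_API_KEY")
if key:  # only hit the network when a key is configured
    listing = post_json(
        "/api/v3/contents/generations/assets/list",
        build_list_payload("group-20260406234335-4t6cj"),
        key,
    )
    print(listing.get("TotalCount"))
```

Reuse `post_json` for the create-group and create-asset routes with the payloads shown above; keep one field-casing convention per client.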
## Common Failure Cases | Status | Likely cause | What to do | |---|---|---| | `400` | Missing group binding or invalid field casing | Re-check required fields and keep one request shape per client | | `403` | Caller does not own the asset group | Fix account or project scoping | | `404` | Asset or group not found | Verify the ID and project namespace | | `429` | Review or rate limit | Queue writes and slow down polling/list refresh | ## Integration Guides # Endpoint Compatibility Matrix > Category: Integration Guides | Last updated: 2026-04-08 Confirmed UniGateway endpoint support and current not-implemented routes. # Endpoint Compatibility Matrix This page summarizes the main UniGateway endpoint families available today. Availability can vary by account plan, region, or feature rollout. ## Confirmed Core Endpoints Base root for all routes on this page: `https://api.unigateway.ai` Common compatible routes: - `GET /v1/models` - `POST /v1/chat/completions` - `POST /v1/completions` - `POST /v1/responses` - `POST /v1/responses/compact` - `POST /v1/embeddings` - `POST /v1/moderations` Multimodal and other APIs: - `POST /v1/images/generations` - `POST /v1/images/edits` - `POST /v1/audio/transcriptions` - `POST /v1/audio/translations` - `POST /v1/audio/speech` - `POST /v1/rerank` - `GET /v1/realtime` (WebSocket) Additional compatible formats: - `POST /v1/messages` (Claude-style payload) - `GET /v1beta/models` - `POST /v1beta/models/*path` (Gemini-style path) - `POST /v1/engines/:model/embeddings` (Gemini embeddings compatibility) ## Currently Unavailable Routes The following routes are not currently available: - `POST /v1/images/variations` - `GET/POST /v1/files` - `GET/DELETE /v1/files/:id` - `GET /v1/files/:id/content` - `POST/GET /v1/fine-tunes` - `GET /v1/fine-tunes/:id` - `POST /v1/fine-tunes/:id/cancel` - `GET /v1/fine-tunes/:id/events` - `DELETE /v1/models/:model` ## Compatibility Notes - Some older routes remain unavailable. 
- UniGateway also includes additional compatible formats such as `/v1/messages` and `/v1beta/models/*path`. - `GET /v1/models` can adjust response shape when you use specific compatibility headers. The default remains the standard list format. ## Recommendation For most integrations, start from: 1. `GET /v1/models` 2. `POST /v1/chat/completions` 3. Add other endpoints after confirming they are enabled in your environment. --- ## Code examples ### curl ```curl curl https://api.unigateway.ai/v1/models \ -H "Authorization: Bearer $UNIGATEWAY_API_KEY" ``` ### python ```python from openai import OpenAI client = OpenAI(api_key="", base_url="https://api.unigateway.ai/v1") print(client.models.list()) ``` ### typescript ```typescript import OpenAI from "openai"; const client = new OpenAI({ apiKey: process.env.UNIGATEWAY_API_KEY, baseURL: "https://api.unigateway.ai/v1" }); console.log(await client.models.list()); ``` # Model Selection and Fallback > Category: Integration Guides | Last updated: 2026-04-08 How to pick model IDs and define fallback chains across OpenAI, Claude, and Gemini. # Model Selection and Fallback Use this guide to run stable production traffic across OpenAI, Claude, and Gemini model families. ## 1) Discover First, Then Pin Always read live model IDs from `GET /v1/models`, then pin IDs by use case. Do not assume model names from external docs are available in your account. ## 2) Build a Fallback Chain by Capability Example text-generation chain: 1. `gpt-5.2` 2. `claude-sonnet-4-6` 3. `gemini-3-pro-preview` Your actual chain should come from the live model list and your latency/cost targets. ## 3) Keep Request Shape Conservative Start with the common `/v1/chat/completions` request shape: ```json { "model": "gpt-5.2", "messages": [ { "role": "user", "content": "Summarize this in 3 bullets." } ], "temperature": 0.2 } ``` Avoid provider-specific optional fields unless you have endpoint-level validation for each fallback target. 
## 4) Routing Pattern Recommended policy: - Retry same model on transient failures (`429`, `5xx`) with backoff. - Switch to next model in chain after retry budget is exhausted. - Log `model`, `request_id`, latency, and token usage per attempt. ## 5) Lifecycle and Migration Hygiene UniGateway model metadata exposes lifecycle states (`AVAILABLE`, `PREVIEW`, `DEPRECATED`, `SUNSET`, `UNAVAILABLE`), and those states should drive routing decisions. Operational rules: - Do not add new live traffic to `DEPRECATED` or `SUNSET` models. - Keep replacement mappings in config, not in application code. --- ## Code examples ### curl ```curl curl https://api.unigateway.ai/v1/chat/completions \ -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \ -H "Content-Type: application/json" \ -d '{"model":"gpt-5.2","messages":[{"role":"user","content":"hello"}]}' ``` ### python ```python from openai import OpenAI client = OpenAI(api_key="", base_url="https://api.unigateway.ai/v1") resp = client.chat.completions.create(model="claude-sonnet-4-6", messages=[{"role":"user","content":"hello"}]) print(resp.choices[0].message.content) ``` ### typescript ```typescript import OpenAI from "openai"; const client = new OpenAI({ apiKey: process.env.UNIGATEWAY_API_KEY, baseURL: "https://api.unigateway.ai/v1" }); const resp = await client.chat.completions.create({ model: "gemini-3-pro-preview", messages: [{ role: "user", content: "hello" }] }); console.log(resp.choices[0]?.message?.content); ``` # Error Handling and Retries > Category: Integration Guides | Last updated: 2026-04-08 Practical retry and idempotency guidance for production integrations. # Error Handling and Retries This guide provides practical retry guidance for reliable UniGateway integrations. ## Failure-Handling Principles 1. Separate request validation errors from transient capacity errors. 2. Retry only when the failure mode is plausibly temporary. 3. Keep retries bounded per model, per task, and per user action. 4.
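The backoff numbers above can be turned into a small routing helper. A minimal sketch, assuming your client wrapper raises exceptions that carry a `status_code` attribute; that attribute, the `call` callable, and the model chain are illustrative, not part of the UniGateway API.

```python
import random
import time

# Policy values from this page: 300 ms initial delay, 2x multiplier,
# 8 s cap, 3 attempts per model before moving down the fallback chain.
INITIAL_DELAY = 0.3
MULTIPLIER = 2.0
MAX_DELAY = 8.0
MAX_ATTEMPTS = 3
RETRYABLE = {429, 500, 502, 503, 504}


def backoff_delay(attempt: int) -> float:
    """Capped exponential delay for attempt 0, 1, 2, ... (jitter added by caller)."""
    return min(INITIAL_DELAY * MULTIPLIER ** attempt, MAX_DELAY)


def call_with_fallback(call, models):
    """Try each model up to MAX_ATTEMPTS times, then switch to the next one.

    `call(model)` is assumed to return a response or raise an exception
    with a `status_code` attribute; non-retryable errors are re-raised.
    """
    last_error = None
    for model in models:
        for attempt in range(MAX_ATTEMPTS):
            try:
                return call(model)
            except Exception as exc:
                if getattr(exc, "status_code", None) not in RETRYABLE:
                    raise  # validation/auth errors: fix the request instead
                last_error = exc
                time.sleep(backoff_delay(attempt) * random.uniform(0.5, 1.5))
    raise last_error if last_error else RuntimeError("empty model chain")
```

Log each attempt (model, status, latency) inside the `except` branch so the fallback behavior stays auditable.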
Log every attempt so fallback behavior remains auditable. ## HTTP Status Handling Suggested policy: | Status | Meaning | Action | |---|---|---| | 400 | Request invalid | Fix payload, do not retry. | | 401 | Auth invalid/missing | Fix key/header, do not retry blindly. | | 402 | Insufficient balance | Trigger billing workflow, pause retries. | | 403 | Forbidden | Check account permissions or access policy. | | 404 | Resource missing | Verify endpoint/path/model ID. | | 429 | Rate limited | Retry with exponential backoff + jitter. | | 5xx | Server/upstream issue | Retry with capped backoff; then fallback model. | ## Endpoint-Family Playbooks ### Chat and Responses APIs - Retry `429` and `5xx` on the same model with capped backoff. - If retries are exhausted, switch to the next fallback model. - Treat streaming and non-stream requests differently in your trace pipeline. ### Streaming APIs - Once a partial stream has been consumed, treat that attempt as user-visible output. - If the stream breaks mid-flight, start a new trace instead of pretending it was the same response. - Alert separately on stream interruption rate; it is not the same failure mode as non-stream JSON errors. ### Model Discovery - Do not retry `404` model usage blindly; refresh `GET /v1/models` first. - Cache model lists briefly, but invalidate on deployment or catalog changes. ### Async Task APIs - Retry create calls conservatively to avoid duplicate jobs. - Poll existing task IDs before re-submitting the same business operation. - Use caller-side correlation IDs even if the public API does not expose first-class idempotency keys. ## Backoff Policy Recommended: - Initial delay: `300ms` - Multiplier: `2x` - Max delay: `8s` - Max attempts per model: `3` After max attempts, route to next fallback model. 
## Suggested Retry Budgets | Endpoint family | Retry budget | Fallback action | |---|---|---| | `chat.completions` / `responses` | Up to `3` attempts per model | Switch model after budget is exhausted | | Streaming chat | `1` reconnect attempt only if no user-visible tokens arrived | Start a new request trace | | `GET /v1/models` | Short retry burst | Surface degraded state if discovery keeps failing | | Async task create | `1-2` attempts max | Query or reconcile before creating another task | | Async task poll | Many polls, but low frequency | Stop at your own overall timeout | ## Idempotency Guidance For non-stream text generation: - Safe to retry when previous attempt failed before full response. For streaming: - Treat partial stream as consumed output. - Retry should create a new request trace and append operation metadata. For stateful or asynchronous APIs: - Use your own idempotency key strategy at the caller side when supported by your workflow. ## Escalation Rules - Escalate immediately when `401`, `403`, or `402` rates rise; retries will not fix account state. - Escalate when one provider family fails but another remains healthy; this often indicates a routing or upstream-specific issue. - Escalate when fallback usage spikes unexpectedly; the system may still appear healthy while latency and cost drift. 
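Escalation decisions like these depend on consistent per-attempt telemetry. A minimal sketch of such a record; the field names are illustrative conventions, not a UniGateway contract.

```python
from dataclasses import dataclass, asdict


@dataclass
class AttemptLog:
    """One request attempt, in a shape ready for a structured log pipeline."""
    model: str
    endpoint: str
    status: int
    retry_count: int
    latency_ms: float
    fallback_position: int = 0   # 0 = primary model in the chain
    streaming: bool = False

    def is_escalation_signal(self) -> bool:
        # 401/402/403 indicate account state that retries will not fix.
        return self.status in (401, 402, 403)


record = AttemptLog(
    model="gpt-5.2",
    endpoint="/v1/chat/completions",
    status=429,
    retry_count=1,
    latency_ms=812.0,
)
print(asdict(record))  # emit as one structured event per attempt
```

Emitting one event per attempt (rather than per request) is what makes fallback-usage spikes visible before users notice them.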
## Observability Checklist Log these fields per request attempt: - request timestamp - model ID - endpoint path - status code - retry count - latency - provider/upstream error message Recommended additions: - fallback position in chain - streaming vs non-stream flag - task ID or caller correlation ID - final user-visible outcome --- ## Code examples ### curl ```curl curl https://api.unigateway.ai/v1/chat/completions \ -H "Authorization: Bearer $UNIGATEWAY_API_KEY" \ -H "Content-Type: application/json" \ -d '{"model":"gpt-5.2","messages":[{"role":"user","content":"retry demo"}]}' ``` ### python ```python from openai import OpenAI client = OpenAI(api_key="", base_url="https://api.unigateway.ai/v1") # implement retry with exponential backoff for 429/5xx ``` ### typescript ```typescript import OpenAI from "openai"; const client = new OpenAI({ apiKey: process.env.UNIGATEWAY_API_KEY, baseURL: "https://api.unigateway.ai/v1" }); // retry 429/5xx with exponential backoff ``` # OpenAI SDK > Category: Integration Guides | Last updated: 2026-04-08 Connect OpenAI-compatible SDKs to UniGateway with a custom base URL and live model discovery. # OpenAI SDK Use UniGateway with OpenAI-compatible SDKs by overriding the base URL and API key. 
## Base Configuration - Base URL: `https://api.unigateway.ai/v1` - Auth header: `Authorization: Bearer $UNIGATEWAY_API_KEY` - Model discovery endpoint: `GET /v1/models` ## Python ```python from openai import OpenAI client = OpenAI( api_key="", base_url="https://api.unigateway.ai/v1", ) models = client.models.list() print(models.data[0].id) resp = client.chat.completions.create( model="gpt-5.2", messages=[{"role": "user", "content": "Give me a 3-step rollout checklist."}], ) print(resp.choices[0].message.content) ``` ## TypeScript ```typescript import OpenAI from "openai"; const client = new OpenAI({ apiKey: process.env.UNIGATEWAY_API_KEY, baseURL: "https://api.unigateway.ai/v1", }); const models = await client.models.list(); console.log(models.data[0]?.id); const resp = await client.chat.completions.create({ model: "claude-sonnet-4-6", messages: [{ role: "user", content: "Summarize the release risks in 3 bullets." }], }); console.log(resp.choices[0]?.message?.content); ``` ## Recommended Onboarding Flow 1. Fetch `GET /v1/models` before choosing a model ID. 2. Start with `chat.completions` or `responses`, not advanced provider-specific options. 3. Enable streaming or tools only after the basic non-stream request succeeds. 4. Add fallback routing at the application layer instead of hardcoding one model forever. 
## Common Mistakes | Problem | Fix | |---|---| | Requests still hit the vendor directly | Check that `base_url` / `baseURL` really points to UniGateway | | `404` or `401` on first request | Verify the base URL ends with `/v1` and the bearer token is valid | | One model works but another fails | Re-read `GET /v1/models`; availability is account-specific | | Streaming works differently across models | Treat streaming as a separate compatibility check | --- ## Code examples ### python ```python from openai import OpenAI client = OpenAI(api_key="", base_url="https://api.unigateway.ai/v1") print(client.models.list()) ``` ### typescript ```typescript import OpenAI from "openai"; const client = new OpenAI({ apiKey: process.env.UNIGATEWAY_API_KEY, baseURL: "https://api.unigateway.ai/v1" }); console.log(await client.models.list()); ``` # Dify > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway as an OpenAI-compatible upstream in Dify. # Dify Use UniGateway in Dify through the OpenAI-compatible model path. ## What To Configure In Dify's OpenAI-compatible model settings, use: - Base URL: `https://api.unigateway.ai/v1` - API Key: your UniGateway key - Model name: one exact ID returned by `GET /v1/models` ## Recommended Setup Order 1. Connect UniGateway as an OpenAI-compatible provider. 2. Sync or manually enter one canary model ID from `GET /v1/models`. 3. Validate a basic chat workflow first. 4. Add more models only after the first route is stable. 
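A frequent misconfiguration with OpenAI-compatible base URLs is a trailing slash or a duplicated `/v1`, which turns requests into `/v1/v1/...`. A small, purely illustrative helper for normalizing the value before pasting it into Dify:

```python
def normalize_base_url(url: str) -> str:
    """Return a base URL ending in exactly one `/v1` (illustrative helper)."""
    url = url.rstrip("/")
    while url.endswith("/v1/v1"):
        url = url[: -len("/v1")]   # drop duplicated suffixes
    if not url.endswith("/v1"):
        url += "/v1"
    return url


print(normalize_base_url("https://api.unigateway.ai"))
```

The same check applies to any tool in this guide that asks for an OpenAI-compatible base URL.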
## Capability Mapping Guidance | Workload in Dify | Recommendation | |---|---| | Chat apps | Start with standard chat-completions-compatible models | | Agent workflows | Use models you have already verified for tool use | | Embeddings | Enable only after confirming the embedding endpoint and model ID | | Image or audio features | Validate separately; do not assume every model family supports them | ## Common Mistakes | Problem | Fix | |---|---| | Provider connects but inference fails | Re-check the exact model ID from the live model list | | Requests hit `/v1/v1/...` | Remove duplicate `/v1` in the configured base URL | | Some features fail while chat works | Treat each endpoint family as a separate rollout | | Model list in Dify is stale | Refresh or re-sync after model catalog changes | # OpenWebUI > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway as an OpenAI-compatible backend in OpenWebUI. # OpenWebUI Use UniGateway as an OpenAI-compatible connection in OpenWebUI. ## Base Configuration - Endpoint base: `https://api.unigateway.ai/v1` - API key: your UniGateway bearer token - Model IDs: read from `GET /v1/models` ## Recommended Rollout 1. Create one OpenAI-compatible connection pointing at UniGateway. 2. Verify model discovery with a single stable model ID. 3. Test a short chat prompt before enabling longer chats or shared workspaces. 4. Validate streaming and image or audio features separately from text chat. 
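The discovery check in step 2 can be scripted before touching the UI. A minimal stdlib sketch: `pick_model` and the candidate IDs are illustrative, and the network request only fires when `UNIGATEWAY_API_KEY` is set.

```python
import json
import os
import urllib.request


def pick_model(live_ids, preferred):
    """Return the first preferred ID present in the live list, else None."""
    live = set(live_ids)
    for model_id in preferred:
        if model_id in live:
            return model_id
    return None


key = os.environ.get("UNIGATEWAY_API_KEY")
if key:  # only hit the network when a key is configured
    req = urllib.request.Request(
        "https://api.unigateway.ai/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        ids = [m["id"] for m in json.loads(resp.read())["data"]]
    print(pick_model(ids, ["gpt-5.2", "claude-sonnet-4-6"]))
```

If this prints `None`, fix the model choice before configuring OpenWebUI; the UI will not make an unavailable ID work.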
## Operational Notes | Area | Recommendation | |---|---| | Model refresh | Re-sync after catalog changes rather than trusting a cached list forever | | Shared deployments | Separate staging and production keys | | Advanced features | Verify tools, files, and multimodal routes one by one | | User-facing defaults | Start with one stable default model and expand later | ## Common Mistakes | Problem | Fix | |---|---| | Connection saves but no models appear | Check API key scope and confirm `/v1/models` works directly | | Chat works but streaming is inconsistent | Test streaming as its own compatibility gate | | One workspace behaves differently from another | Check for environment-specific cached config or duplicate connections | # Coding Tools and Agents > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway with coding assistants and agent tools that support OpenAI-compatible base URLs. # Coding Tools and Agents Many coding assistants, CLI tools, and internal agents can use UniGateway if they support an OpenAI-compatible base URL override. ## Minimum Requirements The tool must let you configure: - API key - Base URL - Exact model ID If a tool only supports vendor-native endpoints with no base URL override, UniGateway cannot be dropped in directly. ## Generic Configuration Pattern ```bash export OPENAI_API_KEY="$UNIGATEWAY_API_KEY" export OPENAI_BASE_URL="https://api.unigateway.ai/v1" ``` Then choose a model ID from `GET /v1/models`. ## Recommended Validation Flow 1. Start with one non-stream request. 2. Validate tool calling or agent loops only after plain completion works. 3. Check how the tool handles retries, streaming, and partial failures. 4. Keep a fallback model ready for interactive coding traffic. 
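Step 1 is easiest to run outside the tool entirely, which separates gateway problems from editor-side configuration. A stdlib sketch of one non-stream request; the model ID is an example and should come from `GET /v1/models`, and the call only fires when `UNIGATEWAY_API_KEY` is set.

```python
import json
import os
import urllib.request


def build_chat_payload(model: str, prompt: str) -> dict:
    """Minimal non-stream body for /v1/chat/completions."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


key = os.environ.get("UNIGATEWAY_API_KEY")
if key:  # only hit the network when a key is configured
    req = urllib.request.Request(
        "https://api.unigateway.ai/v1/chat/completions",
        data=json.dumps(build_chat_payload("gpt-5.2", "Reply with OK.")).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```

Only after this succeeds is it worth debugging the tool's own env vars, caching, or request shaping.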
## Good Fit vs Poor Fit | Good fit | Poor fit | |---|---| | Tools that already speak OpenAI-compatible chat completions | Tools that hardcode one vendor endpoint | | Systems with configurable env vars or per-provider settings | Systems that require vendor-native auth only | | Internal agents with explicit model routing config | Black-box desktop apps with no base URL setting | ## Operational Notes - Treat interactive coding traffic as latency-sensitive. - Re-test tool use and structured outputs when switching model families. - Keep staging and production credentials separate for developer tools as well. # LobeChat > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in LobeChat through the OpenAI-compatible provider path. # LobeChat Use UniGateway in LobeChat through the OpenAI-compatible provider path. ## What To Configure LobeChat's provider configuration supports overriding the OpenAI request base URL. - Provider: OpenAI-compatible path - API key: your UniGateway key - Base URL / proxy URL: `https://api.unigateway.ai/v1` - Model list: exact IDs returned by `GET /v1/models` ## Recommended Setup Order 1. Configure the OpenAI provider in LobeChat. 2. Override the provider base URL so requests go to UniGateway instead of the default OpenAI endpoint. 3. Start with one stable model from `GET /v1/models`. 4. Validate normal chat before enabling more models or advanced features. 
## Configuration Notes | Area | Guidance | |---|---| | Base URL suffix | Validate whether your deployment expects the `/v1` suffix in the configured proxy URL | | Model visibility | Keep the displayed model list aligned with the live UniGateway catalog | | Shared deployments | Separate staging and production keys | | Advanced features | Validate tools, files, and multimodal routes independently from plain chat | ## Common Failure Cases | Problem | Fix | |---|---| | Chat UI loads but returns empty output | Re-check the configured proxy/base URL, especially the `/v1` suffix | | Models do not appear | Verify the API key and confirm `GET /v1/models` works directly | | One model works and another does not | Treat model availability as account-specific, not globally guaranteed | | Feature parity looks inconsistent | Validate each endpoint family separately instead of assuming all OpenAI-style features are enabled | # n8n > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in n8n through OpenAI-compatible nodes or the HTTP Request node. # n8n Use UniGateway in n8n through OpenAI-compatible nodes when your n8n version exposes the needed endpoint settings, or fall back to the generic HTTP Request node. ## Integration Options | Path | When to use it | |---|---| | OpenAI / Chat OpenAI / Embeddings OpenAI nodes | Your n8n version and deployment support the OpenAI-compatible settings you need | | HTTP Request node | You need full control over URL, headers, and payload shape | ## Minimum Configuration - API key: your UniGateway bearer token - Base URL: `https://api.unigateway.ai/v1` - Model ID: one exact value from `GET /v1/models` ## Recommended Rollout 1. Start with one simple non-stream chat request. 2. Confirm the credential path or node path you picked actually targets UniGateway. 3. Add embeddings, responses, or image/audio routes only after the first flow is stable. 4. Re-sync model IDs after catalog changes instead of assuming they remain constant. 
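When you fall back to the HTTP Request node, configure method `POST`, URL `https://api.unigateway.ai/v1/chat/completions`, and an `Authorization: Bearer ...` header, then send a JSON body. A sketch of the body; the model ID is an example and should be read from `GET /v1/models`:

```json
{
  "model": "gpt-5.2",
  "messages": [
    { "role": "user", "content": "Reply with a one-line status check." }
  ],
  "temperature": 0.2
}
```

Keep the body shape identical to what the built-in OpenAI node would send, so switching between the two paths stays painless.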
## Operational Notes | Area | Guidance | |---|---| | Credentials | Keep production and staging keys separate in n8n credentials | | OpenAI node versions | Check node behavior after n8n upgrades; API support evolves over time | | HTTP Request fallback | Use it when a built-in node does not expose the endpoint behavior you need | | Error handling | Route `429` and `5xx` into workflow-level backoff rather than immediate infinite loops | ## Common Failure Cases | Problem | Fix | |---|---| | Node authenticates but requests fail | Verify the exact endpoint URL, model ID, and request family | | Built-in node lacks one capability | Use HTTP Request node for that endpoint family | | Model works in one workflow but not another | Check cached credentials, duplicated nodes, or hardcoded model IDs | | Retries create noisy duplicate runs | Put retry budgets and backoff in the workflow logic | # LangChain > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway with LangChain through the OpenAI-compatible langchain-openai package. # LangChain Use UniGateway with LangChain through the OpenAI-compatible integrations in `langchain-openai`. ## Minimum Configuration - Package: `langchain-openai` - API key: your UniGateway key - Base URL: `https://api.unigateway.ai/v1` - Model: one exact ID from `GET /v1/models` LangChain's OpenAI integrations support configuring the OpenAI API base URL. ## Python Example ```python from langchain_openai import ChatOpenAI llm = ChatOpenAI( model="gpt-5.2", api_key="", base_url="https://api.unigateway.ai/v1", temperature=0, ) print(llm.invoke("Give me a short deployment checklist.").content) ``` ## Environment-Variable Pattern ```bash export OPENAI_API_KEY="$UNIGATEWAY_API_KEY" export OPENAI_API_BASE="https://api.unigateway.ai/v1" ``` Then initialize your LangChain OpenAI-compatible model normally. ## Recommended Rollout 1. Validate plain chat completion first. 2. 
Add tool use, structured outputs, or agent loops only after the base chat path is stable. 3. Re-test prompts when switching model families. 4. Keep one fallback model ready for long-running agent tasks. ## Common Failure Cases | Problem | Fix | |---|---| | LangChain initializes but requests hit the wrong host | Re-check `base_url` / `OPENAI_API_BASE` | | One prompt works and another becomes unstable | Re-test with a simpler request shape before blaming the model | | Agent runs amplify costs unexpectedly | Add per-run budgets and fallback limits | | Structured outputs behave differently across models | Validate schema-sensitive flows model by model | # Cherry Studio > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in Cherry Studio through a custom OpenAI-compatible provider configuration. # Cherry Studio Use UniGateway in Cherry Studio through a custom provider or an OpenAI-type provider configuration. ## What To Configure Cherry Studio's custom-provider flow lets you set: - Provider type: `OpenAI` - API key: your UniGateway key - API address / Base URL: `https://api.unigateway.ai/v1` - Model IDs: exact values from `GET /v1/models` ## Recommended Setup Order 1. Open `Settings`. 2. Go to `Model Services`. 3. Add a custom provider with provider type `OpenAI`. 4. Fill in the UniGateway API key and base URL. 5. Manually add one stable model ID from `GET /v1/models`. 6. Validate one normal chat before enabling more models. 
## Configuration Notes | Area | Guidance | |---|---| | Provider type | Use the OpenAI-compatible path rather than inventing a custom request shape | | Base URL | Use the UniGateway API root with `/v1` | | Model management | Add model IDs explicitly and re-check them after catalog changes | | Key validation | Use Cherry Studio's built-in key check only as a first pass; still run one real request | ## Common Failure Cases | Problem | Fix | |---|---| | Provider saves but requests fail | Re-check the API address and confirm it includes `/v1` | | Added model does not work | Verify the exact model ID from `GET /v1/models` | | Some models appear in UI but fail at runtime | Treat the UI list as configured state, not proof of live availability | | Multi-model comparison behaves inconsistently | Re-test each model individually before enabling one-question-multi-answer workflows | # Flowise > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in Flowise through ChatOpenAI with a custom base URL. # Flowise Use UniGateway in Flowise through the ChatOpenAI path with a custom base URL. ## What To Configure Flowise documents that ChatOpenAI supports custom base URL and headers. - Credential API key: your UniGateway key - Base path / base URL: `https://api.unigateway.ai/v1` - Model name: one exact ID from `GET /v1/models` If the built-in ChatOpenAI node does not expose the model ID you need, use `ChatOpenAI Custom`. ## Recommended Setup Order 1. Drag a `ChatOpenAI` node into the flow. 2. Create a new credential with your UniGateway API key. 3. Open `Additional Parameters`. 4. Set the base path to `https://api.unigateway.ai/v1`. 5. Start with one stable model ID and one non-stream chat test. 6. Add images, tools, or custom models only after the base path is confirmed working. 
## Operational Notes | Area | Guidance | |---|---| | Base path | Validate the full path once with a direct `/v1/models` request | | ChatOpenAI Custom | Use it when the regular node lags behind your desired model ID | | Image upload | Treat it as a separate compatibility gate from plain text chat | | Flow rollout | Keep one canary chatflow before cloning the config across many flows | ## Common Failure Cases | Problem | Fix | |---|---| | Credential works but inference fails | Re-check the base path and exact model ID | | Standard ChatOpenAI node cannot select your model | Switch to `ChatOpenAI Custom` | | Text works but image upload fails | Validate multimodal support separately | | Retries multiply executions | Add retry budgets at the workflow level instead of stacking node retries blindly | # Continue > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in Continue through the OpenAI provider with a custom apiBase. # Continue Use UniGateway in Continue through the `openai` provider with a custom `apiBase`. ## Minimum Configuration Continue's official docs show that OpenAI-compatible providers can be configured by setting: - `provider: openai` - `model: ` - `apiBase: https://api.unigateway.ai/v1` - `apiKey: ` ## Example `config.yaml` ```yaml name: UniGateway version: 0.0.1 schema: v1 models: - name: UniGateway Chat provider: openai model: gpt-5.2 apiBase: https://api.unigateway.ai/v1 apiKey: ``` ## Recommended Rollout 1. Start with one chat model only. 2. Confirm the model ID through `GET /v1/models`. 3. Validate chat before enabling edit, autocomplete, or agent-heavy roles. 4. If a model misbehaves with `/responses`, switch to a model that fits your request shape better or adjust the request mode in Continue. 
## Operational Notes | Area | Guidance | |---|---| | `apiBase` | Keep the `/v1` suffix in place | | Model roles | Add extra roles only after the chat role is stable | | Legacy completions | Use only when your endpoint behavior requires it | | Request debugging | Test the same model once outside Continue before blaming editor integration | ## Common Failure Cases | Problem | Fix | |---|---| | Continue loads config but the model fails | Re-check `apiBase`, `apiKey`, and the exact model ID | | Autocomplete or edit feels worse than chat | Treat each role as a separate rollout instead of assuming one model fits all | | Endpoint compatibility differs by model | Keep request shapes conservative and validate role by role | | Streaming or reasoning behavior differs | Verify how the chosen model behaves with Continue's request mode | # Cline > Category: Integration Guides | Last updated: 2026-04-08 Use UniGateway in Cline through the OpenAI-compatible provider path. # Cline Use UniGateway in Cline through the OpenAI-compatible provider configuration. ## What To Configure Cline's OpenAI-compatible docs call out three required settings: - Base URL: `https://api.unigateway.ai/v1` - API key: your UniGateway key - Model ID: one exact value from `GET /v1/models` ## Recommended Setup Order 1. Open Cline provider settings. 2. Choose the OpenAI-compatible provider path. 3. Set the UniGateway base URL and API key. 4. Start with one stable model ID from `GET /v1/models`. 5. Validate one normal coding task before enabling broader team rollout. 
## Operational Notes | Area | Guidance | |---|---| | Base URL | It should not point at the official OpenAI endpoint; use UniGateway's `/v1` path | | Model IDs | Re-check IDs after catalog changes rather than trusting saved names forever | | Tool-heavy tasks | Validate tool calling and structured outputs separately from plain chat | | Team rollout | Keep one canary configuration before copying the provider across many users | ## Common Failure Cases | Problem | Fix | |---|---| | Provider saves but no model works | Confirm the base URL, API key, and exact model ID with a direct `/v1/models` call | | One coding task succeeds but another is unstable | Validate tool use and long-context behavior separately | | Migration to a new model breaks behavior | Re-test prompts and output assumptions model by model | | Latency spikes under agent loops | Prepare a faster fallback model for interactive coding traffic |