ChatGPT API curl Examples (2026)

curl is the fastest way to verify the OpenAI Chat Completions API — no SDK install, no language runtime, no surprises. Below are working recipes for chat, streaming, JSON mode, function calling, and vision input, all against /v1/chat/completions.

The minimum viable request is one POST with Authorization: Bearer $KEY, a model, and a messages array. Everything else is optional.

Setup: one env var

# Use the official OpenAI endpoint
export OPENAI_API_KEY="sk-..."
export BASE_URL="https://api.openai.com"

# Or use a unified proxy (works for Claude + ChatGPT + Gemini with one key)
export OPENAI_API_KEY="your_proxy_key"
export BASE_URL="https://tokenprovider.store"

1. Basic chat completion

curl -sS "$BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are concise."},
      {"role": "user",   "content": "What is the capital of Japan?"}
    ]
  }'

Response is a single JSON object with choices[0].message.content.
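To pull just the text out, pipe through jq. A sketch against a trimmed sample response (the real object also carries fields like id, model, usage, and finish_reason):

```shell
# Trimmed sample response; real responses include more fields
# (id, model, usage, finish_reason, ...).
response='{"choices":[{"message":{"role":"assistant","content":"Tokyo"}}]}'

# -r prints the raw string instead of a JSON-quoted one.
answer=$(printf '%s' "$response" | jq -r '.choices[0].message.content')
echo "$answer"
```

Piping the live curl output straight into the same jq filter works identically.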

2. Streaming (Server-Sent Events)

curl -sN "$BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "stream": true,
    "messages": [{"role":"user","content":"Count from 1 to 10 slowly."}]
  }'

Response is a stream of data: {…} lines terminated by data: [DONE]. The -N flag disables curl's output buffering so tokens appear as they arrive. Pipe through jq to see deltas (the \K pattern requires GNU grep's -P):

... | grep -oP 'data: \K\{.*\}' | jq -r '.choices[0].delta.content // empty'
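If your grep lacks -P support (BSD/macOS grep, for example), the same extraction can be sketched with sed instead; canned SSE lines stand in for live output here, and jq's -j flag joins the deltas without newlines:

```shell
# Canned SSE output standing in for a live stream.
stream='data: {"choices":[{"delta":{"content":"Hel"}}]}
data: {"choices":[{"delta":{"content":"lo"}}]}
data: [DONE]'

# Strip the "data: " prefix, drop the [DONE] sentinel,
# then let jq join the content deltas (-j omits newlines).
text=$(printf '%s\n' "$stream" \
  | sed -n 's/^data: //p' \
  | grep -v '^\[DONE\]$' \
  | jq -rj '.choices[0].delta.content // empty')
echo "$text"
```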

3. JSON mode (structured output)

curl -sS "$BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "response_format": {"type": "json_object"},
    "messages": [
      {"role":"system","content":"Reply only with JSON."},
      {"role":"user","content":"Three programming languages with their year, as JSON."}
    ]
  }'

The model is constrained to emit a single valid JSON document. Always include the word "json" somewhere in the system or user prompt — the API requires it when response_format is set.
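json_object guarantees well-formed JSON but not any particular schema, so it is worth validating the shape before use. A minimal sketch with jq (the sample content is illustrative, not a real reply):

```shell
# Illustrative JSON-mode reply content.
content='{"languages":[{"name":"Python","year":1991},{"name":"C","year":1972},{"name":"Go","year":2009}]}'

# jq -e exits non-zero on parse failure or null, so it doubles
# as a validator before downstream processing.
if printf '%s' "$content" | jq -e . > /dev/null; then
  count=$(printf '%s' "$content" | jq '.languages | length')
  echo "valid JSON with $count entries"
fi
```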

4. Function / tool calling

curl -sS "$BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role":"user","content":"Weather in Tokyo today?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["c","f"]}
          },
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "auto"
  }'

Response will include choices[0].message.tool_calls[] with the chosen function name and JSON-serialized arguments. Execute the tool yourself, then send a second request that appends an assistant message echoing the tool_calls plus a message with role "tool" (carrying the matching tool_call_id) containing the result.
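The follow-up payload can be sketched with jq. The tool_call id, arguments, and get_weather result below are hypothetical stand-ins for what the first response and your own tool execution would produce:

```shell
# Hypothetical values from the first response / local tool execution.
tool_call_id="call_abc123"
args='{"city":"Tokyo","unit":"c"}'
result='{"temp_c":18,"conditions":"cloudy"}'

# The assistant message must echo the tool_calls it made, and the
# tool message must carry the matching tool_call_id.
payload=$(jq -n --arg id "$tool_call_id" --arg args "$args" --arg result "$result" '{
  model: "gpt-4o-mini",
  messages: [
    {role: "user", content: "Weather in Tokyo today?"},
    {role: "assistant", content: null,
     tool_calls: [{id: $id, type: "function",
                   function: {name: "get_weather", arguments: $args}}]},
    {role: "tool", tool_call_id: $id, content: $result}
  ]
}')

# Second request: curl -sS "$BASE_URL/v1/chat/completions" ... -d "$payload"
```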

5. Vision (image input)

curl -sS "$BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image in one sentence."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}}
      ]
    }]
  }'

For local files: base64-encode and pass as "url": "data:image/jpeg;base64,...".
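Building that data URL in shell can be sketched as follows; the placeholder bytes stand in for a real JPEG:

```shell
# Placeholder bytes standing in for a real JPEG.
printf 'fake-image-bytes' > /tmp/cat.jpg

# GNU base64 wraps lines by default; tr -d strips the newlines
# so the data URL stays on one line.
b64=$(base64 < /tmp/cat.jpg | tr -d '\n')
data_url="data:image/jpeg;base64,$b64"

# Splice $data_url into the "url" field of the image_url part.
echo "${data_url%%,*}"
```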

6. Useful headers

Header | Why
OpenAI-Organization: org_xxx | Charge a specific org if your key has multiple
OpenAI-Beta: assistants=v2 | Opt into beta endpoints (Assistants API, etc.)
x-request-id: my-trace-id | Echoed back in response headers; useful for debugging

Common errors

Status | Meaning | Fix
401 | Bad or missing key | Check echo $OPENAI_API_KEY
429 | Rate limit / quota | Back off and retry, or top up balance
404 model_not_found | Wrong model name | Use gpt-4o-mini, gpt-4o, etc.
413 | Payload too large | Trim conversation history
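For 429s, a retry loop with exponential backoff is the standard fix. A sketch where the hypothetical do_request stub fails twice and then succeeds, standing in for a real curl -w '%{http_code}' call:

```shell
# Stub standing in for: status=$(curl -sS -o resp.json -w '%{http_code}' ...)
attempts=0
do_request() {
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then status=200; else status=429; fi
}

delay=1
for try in 1 2 3 4 5; do
  do_request
  [ "$status" = "200" ] && break
  sleep "$delay"
  delay=$((delay * 2))   # 1s, 2s, 4s, ...
done
echo "status $status after $attempts attempts"
```

In production, also honor a Retry-After response header where the server sends one instead of a fixed schedule.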

Switching to a unified proxy

TokenProvider implements the same /v1/chat/completions contract: same curl, same JSON, same headers; only BASE_URL changes. The proxy also accepts the Anthropic and Gemini protocols on the same key, so one bash export covers all three vendors. See the 5-min setup guide for details.

One key, three vendors, ~50% off

Same curl examples work; just change the base URL. $1 minimum top-up.
