Send a chat message to any supported model with optional streaming
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
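A minimal request sketch showing this header format. The endpoint URL and the use of fetch with a JSON body are assumptions for illustration; only the "Bearer <token>" format comes from the description above.

```typescript
// Minimal sketch: send the auth token in the Authorization header.
// The endpoint URL is hypothetical; only the "Bearer <token>" format
// is taken from the description above.
const res = await fetch("https://api.example.com/v1/chat", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_TOKEN}`, // your auth token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4",
    messages: [{ role: "user", content: "Hello" }],
  }),
});
console.log(await res.json());
```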
OpenRouter model identifier
"openai/gpt-4"
Array of messages for the conversation (supports text, images, audio, and files/PDFs)
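A sketch of a multimodal messages array. The content-part shapes (text, image_url, file) follow the OpenAI-style convention and are an assumption, not confirmed by this reference; check the accepted schema before relying on them.

```typescript
// Hypothetical multimodal message mixing text, an image, and a PDF.
// The content-part shapes below are assumptions (OpenAI-style parts).
const messages = [
  { role: "system", content: "You are a concise assistant." },
  {
    role: "user",
    content: [
      { type: "text", text: "Summarize the attached report and describe the chart." },
      { type: "image_url", image_url: { url: "https://example.com/chart.png" } },
      {
        type: "file",
        file: { filename: "report.pdf", file_data: "data:application/pdf;base64,..." },
      },
    ],
  },
];
```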
Enable streaming responses via Server-Sent Events
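When streaming is enabled, the response arrives as Server-Sent Events. A minimal consumption sketch, assuming OpenAI-style "data: {...}" lines terminated by "data: [DONE]"; the exact event payload shape and endpoint URL are assumptions.

```typescript
// Read the SSE stream chunk by chunk and parse each "data:" line.
const res = await fetch("https://api.example.com/v1/chat", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4",
    messages: [{ role: "user", content: "Hello" }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") continue;
    console.log(JSON.parse(payload)); // one streamed chunk (shape assumed)
  }
}
```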
Controls randomness in generation
0 <= x <= 2
Maximum number of tokens to generate
Nucleus sampling parameter
0 <= x <= 1
System prompt to guide model behavior
Enable extended reasoning capabilities (appends :thinking to the model ID if not already present). Only supported by Anthropic models that accept the :thinking suffix.
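Putting the parameters together: a sketch of a full request body. The field names (temperature, max_tokens, top_p, system, thinking) are inferred from the descriptions above and may differ from the actual schema; the endpoint URL and model ID are illustrative.

```typescript
// Hypothetical full request combining the parameters described above.
// Field names are inferred from this reference and may not match the real schema.
const body = {
  model: "anthropic/claude-3.7-sonnet",    // illustrative Anthropic model ID
  system: "You are a careful math tutor.", // system prompt
  messages: [{ role: "user", content: "Walk through this proof step by step." }],
  temperature: 0.7,  // 0 <= x <= 2
  top_p: 0.9,        // 0 <= x <= 1
  max_tokens: 1024,  // cap on generated tokens
  stream: false,
  thinking: true,    // appends :thinking to the model ID (Anthropic models only)
};

const res = await fetch("https://api.example.com/v1/chat", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
});
console.log(await res.json());
```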