The current request format targets the OpenAI-compatible /chat/completions endpoint.
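A minimal sketch of the request body shape for an OpenAI-compatible /chat/completions call; the function name and model string below are illustrative, not Sidekick's actual code.

```python
import json

def build_chat_request(model: str, messages: list) -> dict:
    # Minimal body accepted by most OpenAI-compatible providers:
    # just a model name and a list of role/content messages.
    return {
        "model": model,
        "messages": messages,
    }

body = build_chat_request(
    "gpt-4o-mini",  # illustrative model name
    [{"role": "user", "content": "Hello"}],
)
print(json.dumps(body))
```

The body is then POSTed as JSON to the provider's /chat/completions URL.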
Sidekick now supports SSE streaming and sends stream: true for providers that require streaming mode.
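Streamed responses arrive as "data:" lines in the OpenAI SSE convention, terminated by a "data: [DONE]" sentinel. A hedged sketch of accumulating assistant text from such a stream (the helper name and simulated chunks are illustrative):

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate assistant text from 'data:' SSE lines."""
    text = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # provider signals end of stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content") is not None:
            text.append(delta["content"])
    return "".join(text)

# Simulated stream, shaped as a provider might emit it:
lines = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(lines))  # -> Hello
```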
Sidekick no longer sends temperature by default to avoid strict-provider validation failures.
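The omission can be sketched as an opt-in field: temperature is added to the body only when the caller sets it explicitly, so strict providers that reject the parameter never see it. Names here are illustrative, not Sidekick's actual code.

```python
def build_body(model, messages, temperature=None):
    body = {"model": model, "messages": messages}
    if temperature is not None:
        # Opt-in only: the field is absent from the default request.
        body["temperature"] = temperature
    return body

default_body = build_body("m", [])                  # no temperature key
tuned_body = build_body("m", [], temperature=0.2)   # explicit opt-in
```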
Sidekick now exposes functions.* and multi_tool_use.parallel tools to the model, executes tool calls locally, and feeds results back into the next model turn.
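The tool loop above can be sketched as: the model returns tool_calls, each call is executed locally, and a "tool" message carrying the result is appended before the conversation is sent back for the next turn. The tool registry and the simulated assistant message below are illustrative assumptions, not Sidekick's real tools.

```python
import json

TOOLS = {
    # Hypothetical local tool for illustration.
    "get_time": lambda args: {"time": "12:00"},
}

def run_tool_calls(messages, assistant_msg):
    """Execute the assistant's tool calls locally and extend the history."""
    messages.append(assistant_msg)
    for call in assistant_msg.get("tool_calls", []):
        fn = call["function"]
        result = TOOLS[fn["name"]](json.loads(fn["arguments"]))
        # Feed the result back as a "tool" message keyed by tool_call_id.
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages

# Simulated assistant turn containing one tool call:
assistant = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_time", "arguments": "{}"},
    }],
}
history = run_tool_calls(
    [{"role": "user", "content": "What time is it?"}], assistant
)
```

After the loop, `history` is sent back to /chat/completions so the model can use the tool results in its next turn.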
This is the first-step foundation; with streaming and tool calling now in place, context injection and code actions are the next planned additions.