Introduce @dm/xllm, providing a unified request/response model, streaming via `AsyncIterable` events, and adapter-based support for OpenAI-compatible and DeepSeek backends. Add an example integration demonstrating provider/model switching via Vite env variables and direct consumption of the stream output.
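A minimal sketch of the adapter pattern described above, assuming a shape for the unified types: the real @dm/xllm API may differ, and every name here (`ChatRequest`, `StreamEvent`, `Adapter`, `StubAdapter`, `collect`) is illustrative, not taken from the package.

```typescript
// Hypothetical sketch of the adapter-based streaming design; all names
// are illustrative, not the actual @dm/xllm exports.

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

// Unified stream events, regardless of which backend produced them.
type StreamEvent =
  | { type: "delta"; text: string }
  | { type: "done" };

// Each backend (OpenAI-compatible, DeepSeek, ...) implements this.
interface Adapter {
  stream(req: ChatRequest): AsyncIterable<StreamEvent>;
}

// Stub adapter standing in for a real backend; yields a fixed reply.
class StubAdapter implements Adapter {
  async *stream(_req: ChatRequest): AsyncIterable<StreamEvent> {
    for (const text of ["Hello", ", ", "world"]) {
      yield { type: "delta", text };
    }
    yield { type: "done" };
  }
}

// Direct stream consumption, as in the example integration.
async function collect(adapter: Adapter, req: ChatRequest): Promise<string> {
  let out = "";
  for await (const ev of adapter.stream(req)) {
    if (ev.type === "delta") out += ev.text;
  }
  return out;
}
```

In a Vite app, the adapter choice would typically be driven by an env variable such as `import.meta.env.VITE_LLM_PROVIDER` (a hypothetical name), with a small factory mapping the provider string to the matching `Adapter` instance.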
Made-with: Cursor