Introduce a new demo for maintaining full conversation history in the xllm package. The demo accumulates user and assistant messages across multiple turns so that both generate and stream calls share the same history. Update the demo catalog to include the new demo.
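The accumulation pattern described above can be sketched as follows. This is a minimal illustration, not the actual @dm/xllm API: the `ConversationHistory` class and its method names are assumptions made for the example.

```typescript
// Illustrative sketch of shared conversation history; the type and
// class names are hypothetical, not @dm/xllm's real exports.
type Role = "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

// One history instance that both generate and stream calls can read
// from and append to across turns.
class ConversationHistory {
  private messages: Message[] = [];

  addUser(content: string): void {
    this.messages.push({ role: "user", content });
  }

  addAssistant(content: string): void {
    this.messages.push({ role: "assistant", content });
  }

  // Snapshot passed to the provider on each request, so a later
  // stream call sees the turns recorded by earlier generate calls.
  snapshot(): Message[] {
    return [...this.messages];
  }
}

const history = new ConversationHistory();
history.addUser("Hello");
history.addAssistant("Hi! How can I help?");
history.addUser("Tell me a joke");
console.log(history.snapshot().length); // 3
```

The point of the design is that history lives outside any single call, so generate and stream are interchangeable consumers of the same message list.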
Add new features to the xllm package: a thinking mode and a reasoning-effort setting for DeepSeek-compatible providers. Update the environment configuration and demos to showcase these features, and extend the README and documentation with usage examples.
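One way such options might be threaded into a request body is sketched below. The field names (`thinking`, `reasoning_effort`) are assumptions for illustration; the actual wire format depends on the DeepSeek-compatible provider and is not taken from the package.

```typescript
// Hypothetical request-building helper; field names are assumptions,
// not a documented provider schema.
interface RequestOptions {
  thinking?: boolean;
  reasoningEffort?: "low" | "medium" | "high";
}

function buildBody(
  model: string,
  prompt: string,
  opts: RequestOptions = {}
): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model,
    messages: [{ role: "user", content: prompt }],
  };
  // Only include the optional fields when the caller asks for them,
  // so providers that don't support them see a plain request.
  if (opts.thinking) body.thinking = { type: "enabled" };
  if (opts.reasoningEffort) body.reasoning_effort = opts.reasoningEffort;
  return body;
}

const withReasoning = buildBody("deepseek-chat", "Hi", {
  thinking: true,
  reasoningEffort: "high",
});
```

Keeping the optional fields out of the default request is what makes the same code path usable for providers with and without reasoning support.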
Add multiple demos for the @dm/xllm package covering streaming, tool invocation, and model switching. Introduce environment-variable-based configuration and a comprehensive example in the README. Update index.html for better user experience and localization.
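A hypothetical `.env` file for the environment-variable configuration mentioned above. The variable names are illustrative, not the demo's actual keys; only the `VITE_` prefix is load-bearing, since Vite exposes only prefixed variables to client code.

```shell
# Illustrative Vite env configuration; variable names are assumptions.
VITE_XLLM_PROVIDER=deepseek
VITE_XLLM_MODEL=deepseek-chat
VITE_XLLM_API_KEY=your-key-here
```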
Introduce @dm/xllm with a unified request/response model, streaming AsyncIterable events, and adapter-based support for OpenAI-compatible and DeepSeek backends. Add an example integration that demonstrates provider/model switching via Vite env variables and direct stream output consumption.
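Consuming streaming AsyncIterable events, as described above, follows the pattern sketched here. The event shape and the mock adapter are stand-ins for illustration, not the package's real types.

```typescript
// Illustrative stream-event shape; the real @dm/xllm event types
// may differ.
type StreamEvent =
  | { type: "delta"; text: string }
  | { type: "done" };

// Stand-in adapter that yields a few text deltas, mimicking an
// adapter-backed stream.
async function* mockStream(chunks: string[]): AsyncIterable<StreamEvent> {
  for (const text of chunks) {
    yield { type: "delta", text };
  }
  yield { type: "done" };
}

// Direct stream-output consumption: accumulate deltas with for-await.
async function collect(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let out = "";
  for await (const ev of stream) {
    if (ev.type === "delta") out += ev.text;
  }
  return out;
}

collect(mockStream(["Hel", "lo"])).then((s) => console.log(s)); // "Hello"
```

In the example integration, the same for-await loop would consume the real adapter's stream, with the provider and model chosen from Vite env variables rather than hard-coded.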
Made-with: Cursor
feat(core): enhance event handling with FireEvent class
refactor(core): update index to export fire event bus
chore(core): adjust TypeScript configuration for better path resolution
chore(core): add vitest configuration for core package
chore: create base TypeScript configuration for consistent settings
chore: add node-specific TypeScript configuration
chore: add web-specific TypeScript configuration
fix: include buildin directory in vitest workspace configuration