Changelog
WebSockets API
- Configuration: Added the WebSockets API, which provides a single persistent connection, enabling continuous communication.
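A minimal sketch of opening a persistent connection from Node.js with the `ws` package; the endpoint shape, `cf-aig-authorization` header, and message schema are assumptions, and the account ID, gateway name, and token are placeholders.

```ts
import WebSocket from "ws";

// Hypothetical endpoint and auth header; replace the placeholders with your values.
const ws = new WebSocket(
  "wss://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/",
  { headers: { "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN" } }
);

ws.on("open", () => {
  // Illustrative message shape: the request is sent over the already-open
  // connection instead of a fresh HTTP call per request.
  ws.send(
    JSON.stringify({
      type: "universal.create",
      request: {
        eventId: "example-1",
        provider: "workers-ai",
        endpoint: "@cf/meta/llama-3.1-8b-instruct",
        query: { prompt: "What is a WebSocket?" },
      },
    })
  );
});

ws.on("message", (data) => console.log(data.toString()));
```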
Authentication
- Configuration: Added Authentication, which adds security by requiring a valid authorization token for each request.
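A sketch of calling an authenticated gateway; the `cf-aig-authorization` header name and URL structure are assumptions, and the tokens, model, and IDs shown are placeholders.

```ts
// Sketch: the gateway token travels in its own header, while the provider
// API key is sent as usual.
const response = await fetch(
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "cf-aig-authorization": "Bearer AI_GATEWAY_TOKEN", // gateway auth token
      Authorization: "Bearer OPENAI_API_KEY",            // provider API key
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello" }],
    }),
  }
);
console.log(await response.json());
```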
Grok
- Providers: Added Grok as a new provider.
Vercel SDK
Added the Vercel AI SDK. The SDK supports many different AI providers, tools for streaming completions, and more.
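One way to combine the two is to point the SDK's OpenAI provider at a gateway endpoint; a sketch assuming the `ai` and `@ai-sdk/openai` packages, with a placeholder account ID, gateway name, and model.

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Route SDK calls through the gateway by overriding the provider's base URL.
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai",
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a haiku about gateways.",
});
console.log(text);
```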
Persistent logs
- Logs: AI Gateway now has logs that persist, giving you the flexibility to store them for your preferred duration.
Logpush
- Logs: Securely export logs to an external storage location using Logpush.
Pricing
- Pricing: Added pricing for storing logs persistently.
Evaluations
- Configuration: Use AI Gateway’s Evaluations to make informed decisions on how to optimize your AI application.
Custom costs
- Configuration: AI Gateway now allows you to set custom costs at the request level, so logged costs accurately reflect your unique pricing, overriding the default or public model costs.
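A sketch of overriding costs on a single request; the `cf-aig-custom-cost` header name and its JSON fields are assumptions, so check the gateway documentation for the exact schema.

```ts
// Per-token prices below are illustrative and override the public model
// pricing for this request only.
await fetch(
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer OPENAI_API_KEY",
      "cf-aig-custom-cost": JSON.stringify({
        per_token_in: 0.000001,  // assumed field name: cost per input token
        per_token_out: 0.000002, // assumed field name: cost per output token
      }),
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello" }],
    }),
  }
);
```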
Mistral AI
- Providers: Added Mistral AI as a new provider.
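Provider requests go through the gateway's provider-specific path; a sketch assuming the Mistral path segment is `mistral` and mirrors Mistral's own API suffix, with placeholder IDs and model.

```ts
// Same Mistral chat payload as a direct call, just sent via the gateway URL.
const res = await fetch(
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/mistral/v1/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer MISTRAL_API_KEY",
    },
    body: JSON.stringify({
      model: "mistral-small-latest",
      messages: [{ role: "user", content: "Hello" }],
    }),
  }
);
console.log(await res.json());
```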
Google AI Studio
- Providers: Added Google AI Studio as a new provider.
Custom metadata
AI Gateway now supports adding custom metadata to requests, improving tracking and analysis of incoming requests.
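A sketch of tagging a request with metadata for later filtering in logs; the `cf-aig-metadata` header name and the flat key-value format are assumptions.

```ts
// Metadata keys and values here are illustrative; they attach to the
// request's log entry for tracking and analysis.
await fetch(
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer OPENAI_API_KEY",
      "cf-aig-metadata": JSON.stringify({ user_id: "1234", team: "search" }),
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello" }],
    }),
  }
);
```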
Logs
Logs are now available for the last 24 hours.
Custom cache key headers
AI Gateway now supports custom cache key headers.
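A sketch of pinning a request to a specific cache entry; the `cf-aig-cache-key` header name is an assumption and the key value is arbitrary.

```ts
// Requests sharing this key are served from the same cache entry.
await fetch(
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/openai/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer OPENAI_API_KEY",
      "cf-aig-cache-key": "onboarding-prompt-v1",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Summarize our onboarding guide." }],
    }),
  }
);
```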
Access an AI Gateway through a Worker
Workers AI now natively supports AI Gateway.
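A sketch of a Worker routing a Workers AI call through a named gateway via the binding's options; the binding name, model, and gateway ID are placeholders, and the types assume `@cloudflare/workers-types`.

```ts
export interface Env {
  AI: Ai; // Workers AI binding configured in wrangler
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const result = await env.AI.run(
      "@cf/meta/llama-3.1-8b-instruct",
      { prompt: "What is an AI gateway?" },
      { gateway: { id: "my-gateway" } } // routes the call through AI Gateway
    );
    return Response.json(result);
  },
};
```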
- Added new endpoints to the REST API.
- Security: Fixed an LLM side channel vulnerability.
- Providers: Added Anthropic, Google Vertex, Perplexity as providers.
- Real-time Logs: Logs are now real-time, showing entries from the last hour. If you need persistent logs, please let the team know on Discord. We are building a persistent logs feature for those who want to store their logs for longer.
- Providers: Azure OpenAI is now supported as a provider!
- Docs: Added Azure OpenAI example.
- Bug Fixes: Errors with costs and tokens should be fixed.
- Logs: Logs will now be limited to the last 24h. If you have a use case that requires more logging, please reach out to the team on Discord.
- Dashboard: Logs now refresh automatically.
- Docs: Fixed the Workers AI example in the docs and dashboard.
- Caching: Embedding requests are now cacheable. Rate limits will not apply to cached requests.
- Bug Fixes: Identical requests to different providers are no longer incorrectly served from the cache. Streaming now works as expected, including for the Universal endpoint.
- Known Issues: There's currently a bug with costs that we are investigating.