Recommends the best-fit LLM for a given task based on cost, speed, and quality data across 13 models from 6 providers. Query with a task type, get back a ranked list of recommendations.
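A ranking like this can be sketched as a weighted score over cost, speed, and quality. The model stats, weights, and model names below are illustrative assumptions, not GetKin's actual dataset or scoring formula.

```python
# Illustrative cost/speed/quality ranking. All numbers are made up.
MODELS = [
    # (name, cost $/1M tokens, median latency s, quality score 0-1)
    ("claude-sonnet", 3.00, 1.2, 0.92),
    ("gpt-4o-mini", 0.15, 0.6, 0.78),
    ("deepseek-chat", 0.27, 1.8, 0.80),
]

def rank(models, w_cost=0.3, w_speed=0.2, w_quality=0.5):
    """Score each model; cheaper, faster, and higher quality is better."""
    max_cost = max(m[1] for m in models)
    max_lat = max(m[2] for m in models)
    scored = []
    for name, cost, lat, quality in models:
        score = (w_cost * (1 - cost / max_cost)      # normalized cheapness
                 + w_speed * (1 - lat / max_lat)     # normalized speed
                 + w_quality * quality)              # raw quality
        scored.append((round(score, 3), name))
    return sorted(scored, reverse=True)

print(rank(MODELS))  # best-scoring model first
```

Changing the weights shifts the recommendation; a latency-sensitive task type would raise `w_speed`, a batch task would raise `w_cost`.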
Real-time availability and latency monitoring for LLM provider APIs. Check if an endpoint is up before you call it. Tracks Anthropic, OpenAI, Google, Mistral, DeepSeek, and Meta.
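The monitoring can be pictured as a rolling window of probe results per endpoint, from which uptime and tail latency are derived. This is a local sketch of that bookkeeping, not GetKin's actual monitor internals.

```python
class EndpointMonitor:
    """Rolling-window availability and latency tracker for one endpoint.
    Sketch only; window size and metrics are assumptions."""

    def __init__(self, window=100):
        self.window = window
        self.samples = []  # list of (ok: bool, latency_ms: float)

    def record(self, ok, latency_ms):
        self.samples.append((ok, latency_ms))
        self.samples = self.samples[-self.window:]  # keep last N probes

    def uptime(self):
        """Fraction of recent probes that succeeded."""
        if not self.samples:
            return None
        return sum(ok for ok, _ in self.samples) / len(self.samples)

    def p95_latency(self):
        """95th-percentile latency over successful probes."""
        lat = sorted(ms for ok, ms in self.samples if ok)
        if not lat:
            return None
        return lat[min(len(lat) - 1, int(0.95 * len(lat)))]

mon = EndpointMonitor()
for ms in (120, 140, 95, 2000):
    mon.record(True, ms)
mon.record(False, 0.0)  # a timed-out probe counts against uptime
print(mon.uptime(), mon.p95_latency())
```

A pre-call check would then be a simple threshold: skip the provider when `uptime()` drops below, say, 0.99 or `p95_latency()` spikes.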
Route LLM calls through GetKin. OpenAI-compatible API format. We forward to the provider, return the response, and take a thin margin. Logging and cost tracking included.
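"OpenAI-compatible" means the request body and path match the OpenAI chat completions shape, with only the base URL and key swapped. The base URL and auth header below are assumptions for illustration, not GetKin's documented values.

```python
import json

BASE_URL = "https://api.getkin.example/v1"  # hypothetical proxy base URL

def build_chat_request(model, messages, api_key):
    """Return (url, headers, body) for an OpenAI-style
    /chat/completions call routed through the proxy."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed bearer-token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    "sk-test",
)
print(url)
```

Because only the base URL changes, existing OpenAI client code can usually point at the proxy without touching the request-building logic.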
Stateless memory compression for AI agents. Send a raw session dump, get back structured short-term, medium-term, and long-term memory. We never store your data. First 5 compressions free.
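The output shape can be sketched as three recency tiers carved from a session dump. The real service presumably summarizes rather than just slicing; the tier sizes and field names here are assumptions chosen to mirror the short/medium/long-term split described above.

```python
def compress_session(messages, short_n=3, medium_n=6):
    """Toy recency-based split of a session dump into memory tiers.
    Illustrates the response shape only, not the compression itself."""
    return {
        "short_term": messages[-short_n:],                     # newest turns
        "medium_term": messages[-(short_n + medium_n):-short_n],
        "long_term": messages[:-(short_n + medium_n)] or [],   # oldest turns
    }

dump = [f"turn {i}" for i in range(12)]  # a 12-turn raw session
memory = compress_session(dump)
print({k: len(v) for k, v in memory.items()})
```

Statelessness follows from the shape of the call: everything needed is in the request, so nothing has to persist server-side between compressions.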