Together AI Expands DeepSeek-R1 Deployment with Enhanced Serverless APIs and Reasoning Clusters

Felix Pinkston
Feb 13, 2025 11:11

Together AI enhances DeepSeek-R1 deployment with new serverless APIs and reasoning clusters, offering high-speed and scalable solutions for large-scale reasoning model applications.

Together AI has announced significant advancements in the deployment of its DeepSeek-R1 reasoning model, introducing enhanced serverless APIs and dedicated reasoning clusters. This move is aimed at supporting the increasing demand from companies integrating sophisticated reasoning models into their production applications.

Enhanced Serverless APIs

The new Together Serverless API for DeepSeek-R1 is, according to Together AI, twice as fast as any other API currently on the market, enabling low-latency, production-grade inference that scales seamlessly. The API is designed to give companies fast, responsive user experiences and efficient multi-step workflows, both crucial for modern applications built on reasoning models.

Key features of the serverless API include instant scalability without infrastructure management, flexible pay-as-you-go pricing, and enhanced security with hosting in Together AI’s data centers. The OpenAI-compatible APIs further facilitate easy integration into existing applications, offering high rate limits of up to 9000 requests per minute on the scale tier.
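As a rough illustration of what OpenAI-compatible integration looks like in practice, the sketch below sends a chat-completion request to DeepSeek-R1 using only the Python standard library. The endpoint URL, model identifier, and `TOGETHER_API_KEY` environment variable are assumptions based on Together AI's public conventions, not details from this article; check the official documentation before relying on them.

```python
import json
import os
import urllib.request

# Assumed endpoint and model identifier -- verify against Together AI's
# current documentation before use.
API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "deepseek-ai/DeepSeek-R1"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for DeepSeek-R1."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }


def ask(prompt: str) -> str:
    """Send the request; expects a TOGETHER_API_KEY environment variable."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request and response shapes mirror OpenAI's chat-completions format, existing OpenAI client code can typically be pointed at the Together endpoint with only the base URL and API key changed.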

Introduction of Together Reasoning Clusters

To complement the serverless solution, Together AI has launched Together Reasoning Clusters, which provide dedicated GPU infrastructure optimized for high-throughput, low-latency inference. These clusters are particularly suited for handling variable, token-heavy reasoning workloads, achieving decoding speeds of up to 110 tokens per second.
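At the quoted decode speed, the wall-clock time to emit a long reasoning trace can be estimated directly. The token count below is hypothetical; only the 110 tokens-per-second figure comes from the article.

```python
tokens = 2_000        # hypothetical length of one reasoning trace
decode_tps = 110      # decoding speed quoted for the reasoning clusters

seconds = tokens / decode_tps
print(f"~{seconds:.1f} s to decode {tokens} tokens")  # ~18.2 s
```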

The clusters leverage the proprietary Together Inference Engine, which is reported to be 2.5 times faster than open-source engines like SGLang. This efficiency allows for the same throughput with significantly fewer GPUs, reducing infrastructure costs while maintaining high performance.
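The GPU savings implied by a faster engine follow from simple arithmetic: at equal target throughput, a 2.5x-faster engine needs 2.5x fewer GPUs (before rounding up to whole devices). The workload and per-GPU numbers below are hypothetical; only the 2.5x factor is taken from the article.

```python
import math


def gpus_needed(target_tps: float, per_gpu_tps: float) -> int:
    """GPUs required to sustain a target aggregate tokens/sec throughput."""
    return math.ceil(target_tps / per_gpu_tps)


# Hypothetical workload: 10,000 tokens/sec aggregate, with a baseline
# open-source engine sustaining 400 tokens/sec per GPU.
target = 10_000.0
baseline_per_gpu = 400.0

baseline_gpus = gpus_needed(target, baseline_per_gpu)        # 25 GPUs
faster_gpus = gpus_needed(target, baseline_per_gpu * 2.5)    # 10 GPUs
```

Under these assumed numbers, the same throughput is served with 10 GPUs instead of 25, which is the mechanism behind the infrastructure-cost reduction described above.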

Scalability and Cost Efficiency

Together AI offers a range of cluster sizes to match different workload demands, with contract-based pricing models ensuring predictable costs. This setup is particularly beneficial for enterprises with high-volume workloads, providing a cost-effective alternative to token-based pricing.

Additionally, the dedicated infrastructure ensures secure, isolated environments within North American data centers, meeting privacy and compliance requirements. With enterprise support and service level agreements guaranteeing 99.9% uptime, Together AI ensures reliable performance for mission-critical applications.

For more information, visit Together AI.
