Choose the right model for your use case — from automatic smart routing to high-performance specialized models.

Tokenthon gives you access to OpenAI's latest GPT models with intelligent fallbacks and transparent pricing. Each model is optimized for different scenarios, ensuring you get the best performance and value for your specific needs.


gpt-auto
Smart Routing

Intelligent model selection that automatically chooses the best available model based on:

  • Current service availability
  • System load and demand
  • Request complexity

Fallback behavior: gpt-5 → gpt-5-mini (when needed)
gpt-5
Premium

Our most advanced model with superior reasoning capabilities and enhanced performance:

  • Advanced reasoning & analysis
  • Higher accuracy & coherence
  • Complex problem solving

Best for: complex tasks, research, and analysis
gpt-5-mini
Efficient

Optimized for speed and efficiency without compromising quality:

  • Fast response times
  • Cost-effective processing
  • High availability

Best for: quick responses and simple tasks
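With gpt-auto, the gpt-5 → gpt-5-mini fallback happens on the service side. If you pin a model manually and want similar resilience, the same chain can be sketched client-side. The `sendWithFallback` helper below and its retry-on-any-error behavior are illustrative assumptions, not part of the Tokenthon API:

```typescript
// Try each model in order, moving to the next one when a request fails.
// Mirrors gpt-auto's documented chain: gpt-5 → gpt-5-mini.
// NOTE: illustrative sketch only; not a Tokenthon SDK function.
async function sendWithFallback<T>(
  models: string[],
  send: (model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await send(model); // first model that succeeds wins
    } catch (err) {
      lastError = err; // e.g. an availability error; try the next model
    }
  }
  throw lastError; // every model in the chain failed
}
```

In practice, `send` would perform the fetch call shown in the API examples further down, and you would likely narrow the `catch` to availability-related errors rather than retrying on every failure.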

Technical Specifications

Detailed comparison of all available models and their capabilities.
Feature          | gpt-auto                 | gpt-5       | gpt-5-mini
-----------------|--------------------------|-------------|-----------
Model Selection  | Automatic                | Manual      | Manual
Response Quality | Adaptive                 | Premium     | Standard
Response Speed   | Variable                 | Standard    | Fast
Availability     | High                     | High Demand | High
Fallback Support | Yes (gpt-5 → gpt-5-mini) | No          | No

When to Use Each Model

Use gpt-auto for:

  • Production applications requiring high availability
  • Variable workloads with different complexity levels
  • Maximum reliability with automatic failover
  • Cost optimization with smart model selection

Use gpt-5 for:

  • Complex reasoning and analytical tasks
  • Research and academic applications
  • Code generation with complex logic
  • Creative writing and content creation

Use gpt-5-mini for:

  • Quick responses and real-time applications
  • Simple classification and extraction tasks
  • High-volume processing with budget constraints
  • Chatbots and basic conversational AI
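The guidance above can be condensed into a small helper. The `Task` categories and the `pickModel` name are illustrative; the mapping simply mirrors the recommendations on this page:

```typescript
// Illustrative task categories; adapt these to your own application.
type Task = "research" | "analysis" | "codegen" | "chat" | "classification" | "extraction";

// Map a task category to the model this page recommends for it.
function pickModel(task: Task): string {
  switch (task) {
    case "research":
    case "analysis":
    case "codegen":
      return "gpt-5"; // complex reasoning, analysis, code with complex logic
    case "chat":
    case "classification":
    case "extraction":
      return "gpt-5-mini"; // quick, high-volume, budget-friendly tasks
    default:
      return "gpt-auto"; // when in doubt, let the service route the request
  }
}
```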

Using Models in Your API Requests

Simple examples of how to specify models in your API calls.

TypeScript Example

typescript
const response = await fetch("https://api.tokenthon.com/api/v1/jobs/messages", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": "<YOUR_API_KEY>"
  },
  body: JSON.stringify({
    model: "gpt-auto",
    messages: [
      { role: "user", content: "Write a bedtime story about a unicorn." }
    ],
    response_format: { format: "text" }
  })
});
const data = await response.json();
console.log(data);

cURL Example

bash
curl -X POST "https://api.tokenthon.com/api/v1/jobs/messages" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{
      "model": "gpt-auto",
      "messages": [{ 
          "role": "user", 
          "content": "Write a bedtime story about a unicorn." 
      }],
      "response_format": { "format": "text" }
  }'
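Both examples send the same JSON body to the /api/v1/jobs/messages endpoint. A tiny builder (a hypothetical helper, not part of any Tokenthon SDK) keeps that shape in one place:

```typescript
type Model = "gpt-auto" | "gpt-5" | "gpt-5-mini";

// The request-body shape used in the examples above. Only the "user"
// role appears in this page's examples; other roles are not assumed here.
interface MessageRequest {
  model: Model;
  messages: { role: "user"; content: string }[];
  response_format: { format: "text" };
}

// Build the JSON body for a single-message request.
function buildMessageRequest(model: Model, content: string): MessageRequest {
  return {
    model,
    messages: [{ role: "user", content }],
    response_format: { format: "text" },
  };
}
```

Pass the result to `JSON.stringify` as the request body, exactly as in the TypeScript example above.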

Available Models

  • Automatic: "model": "gpt-auto" (smart routing with fallback)
  • Premium: "model": "gpt-5" (advanced reasoning)
  • Efficient: "model": "gpt-5-mini" (fast responses)