Model Reference
Models available in WebLLM through the OpenRouter provider.
Using Models
WebLLM automatically routes each request to the best available model based on the user's provider configuration:

```ts
import { generateText } from 'webllm';

const result = await generateText({ prompt: 'Explain quantum computing' });
```

With the OpenRouter provider, specify models directly in the extension configuration using the model IDs below.
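This page doesn't document the exact configuration shape, so as a minimal sketch — assuming `generateText` also accepts a `model` option taking an OpenRouter-style `vendor/model` ID — selecting a specific model might look like:

```ts
import { generateText } from 'webllm';

// Hypothetical: the `model` option and the exact ID string are assumptions,
// not confirmed by this page. OpenRouter IDs follow a `vendor/model` pattern.
const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5', // assumed OpenRouter model ID
  prompt: 'Summarize this changelog in three bullet points.',
});
```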
Browse Models
330 models are available. Each entry below lists the model's architecture, per-token pricing (USD), and capabilities.
Prime Intellect: INTELLECT-3
INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It offe...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000011
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
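The per-token prices are easiest to read as per-request costs. A quick sketch using INTELLECT-3's listed rates (the token counts are illustrative placeholders):

```ts
// Estimate request cost in USD from the per-token prices listed above.
const promptPrice = 0.0000002;     // $ per prompt token (INTELLECT-3)
const completionPrice = 0.0000011; // $ per completion token (INTELLECT-3)

const promptTokens = 2_000;     // illustrative placeholder
const completionTokens = 800;   // illustrative placeholder

const cost = promptTokens * promptPrice + completionTokens * completionPrice;
console.log(`~$${cost.toFixed(6)} per request`); // ~$0.001280
```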
TNG: R1T Chimera (free)
TNG-R1T-Chimera is an experimental LLM with a penchant for creative storytelling and character interaction. It is a derivative of the original TNG/DeepSeek-R1T-Chimera released in April 2025 and is availa...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
TNG: R1T Chimera
TNG-R1T-Chimera is an experimental LLM with a penchant for creative storytelling and character interaction. It is a derivative of the original TNG/DeepSeek-R1T-Chimera released in April 2025 and is availa...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Anthropic: Claude Opus 4.5
Claude Opus 4.5 is Anthropic’s frontier reasoning model optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities, competit...
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: file, image, text
- Output: text
Pricing (per token)
- Prompt: $0.000005
- Completion: $0.000025
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 64,000 tokens
- Moderated: Yes
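This entry accepts image input. As a hedged sketch of what a multimodal request looks like when calling OpenRouter's OpenAI-compatible endpoint directly (the model ID is an assumption; this is not a WebLLM-specific API):

```ts
// Sketch: text + image input to a text+image->text model via OpenRouter.
const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'anthropic/claude-opus-4.5', // assumed OpenRouter ID
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'What is in this screenshot?' },
          { type: 'image_url', image_url: { url: 'https://example.com/shot.png' } },
        ],
      },
    ],
  }),
});
console.log((await res.json()).choices[0].message.content);
```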
Bert-Nebulon Alpha
This is a cloaked model provided to the community to gather feedback. A general-purpose multimodal model (text/image in, text out) designed for reliability, long-context comprehension, and adaptive lo...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 256,000 tokens
- Moderated: No
AllenAI: Olmo 3 32B Think
Olmo 3 32B Think is a large-scale, 32-billion-parameter model purpose-built for deep reasoning, complex logic chains and advanced instruction-following scenarios. Its capacity enables strong performan...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.00000055
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 65,536 tokens
- Moderated: No
AllenAI: Olmo 3 7B Instruct
Olmo 3 7B Instruct is a supervised instruction-fine-tuned variant of the Olmo 3 7B base model, optimized for instruction-following, question-answering, and natural conversational dialogue. By leveragi...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000002
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 65,536 tokens
- Moderated: No
AllenAI: Olmo 3 7B Think
Olmo 3 7B Think is a research-oriented language model in the Olmo family designed for advanced reasoning and instruction-driven tasks. It excels at multi-step problem solving, logical inference, and m...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000012
- Completion: $0.0000002
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Google: Nano Banana Pro (Gemini 3 Pro Image Preview)
Nano Banana Pro is Google’s most advanced image-generation and editing model, built on Gemini 3 Pro. It extends the original Nano Banana with significantly improved multimodal reasoning, real-world gr...
Architecture
- Modality: text+image->text+image
- Tokenizer: Gemini
- Input: image, text
- Output: image, text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000012
- Image: $0.067
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 32,768 tokens
- Moderated: No
xAI: Grok 4.1 Fast (free)
Grok 4.1 Fast is xAI's best agentic tool-calling model, shining in real-world use cases like customer support and deep research, with a 2M-token context window. Reasoning can be enabled/disabled using the `rea...
Architecture
- Modality: text+image->text
- Tokenizer: Grok
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 2,000,000 tokens
- Max Completion: 30,000 tokens
- Moderated: No
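The description notes that reasoning can be toggled per request, though the parameter name is truncated above. A sketch against OpenRouter's REST API — the model ID, the `:free` suffix, and the `reasoning` field name are assumptions based on OpenRouter conventions:

```ts
// Sketch: disabling reasoning for lower latency on a hybrid model.
const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'x-ai/grok-4.1-fast:free', // assumed ID for the free listing
    messages: [{ role: 'user', content: 'Draft a support reply.' }],
    reasoning: { enabled: false }, // assumed field; the exact name is truncated above
  }),
});
console.log((await res.json()).choices[0].message.content);
```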
Google: Gemini 3 Pro Preview
Gemini 3 Pro is Google’s flagship frontier model for high-precision multimodal reasoning, combining strong performance across text, image, video, audio, and code with a 1M-token context window. Reason...
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000012
- Image: $0.008256
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,536 tokens
- Moderated: No
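Context windows on this page range from roughly 30,000 to 2,000,000 tokens. A small sketch for picking the cheapest entry whose window fits a job, using a few cards from above (the IDs are assumed; the ~4 characters-per-token heuristic is a rough approximation, not a real tokenizer):

```ts
interface ModelSpec {
  id: string;            // assumed OpenRouter-style IDs
  contextLength: number; // tokens, copied from the cards above
  promptPrice: number;   // $ per prompt token, copied from the cards above
}

const candidates: ModelSpec[] = [
  { id: 'x-ai/grok-4.1-fast:free', contextLength: 2_000_000, promptPrice: 0 },
  { id: 'google/gemini-3-pro-preview', contextLength: 1_048_576, promptPrice: 0.000002 },
  { id: 'anthropic/claude-opus-4.5', contextLength: 200_000, promptPrice: 0.000005 },
];

function pickModel(promptChars: number, replyTokens: number): ModelSpec | undefined {
  const promptTokens = Math.ceil(promptChars / 4); // rough heuristic
  return candidates
    .filter((m) => m.contextLength >= promptTokens + replyTokens)
    .sort((a, b) => a.promptPrice - b.promptPrice)[0];
}

// A ~1.2M-character prompt (~300k tokens) rules out the 200k window.
console.log(pickModel(1_200_000, 4_000)?.id); // 'x-ai/grok-4.1-fast:free'
```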
Deep Cogito: Cogito v2.1 671B
Cogito v2.1 671B MoE represents one of the strongest open models globally, matching performance of frontier closed and open models. This model is trained using self play with reinforcement learning to...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00000125
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
OpenAI: GPT-5.1
GPT-5.1 is the latest frontier-grade model in the GPT-5 series, offering stronger general-purpose reasoning, improved instruction adherence, and a more natural conversational style compared to GPT-5. ...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: GPT-5.1 Chat
GPT-5.1 Chat (AKA Instant) is the fast, lightweight member of the 5.1 family, optimized for low-latency chat while retaining strong general intelligence. It uses adaptive reasoning to selectively “thin...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: file, image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
OpenAI: GPT-5.1-Codex
GPT-5.1-Codex is a specialized version of GPT-5.1 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of c...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: GPT-5.1-Codex-Mini
GPT-5.1-Codex-Mini is a smaller and faster version of GPT-5.1-Codex.
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.000002
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Kwaipilot: KAT-Coder-Pro V1 (free)
KAT-Coder-Pro V1 is KwaiKAT's most advanced agentic coding model in the KAT-Coder series. Designed specifically for agentic coding tasks, it excels in real-world software engineering scenarios, achiev...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 32,768 tokens
- Moderated: No
MoonshotAI: Kimi Linear 48B A3B Instruct
Kimi Linear is a hybrid linear attention architecture that outperforms traditional full attention methods across various contexts, including short, long, and reinforcement learning (RL) scaling regime...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.0000006
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 1,048,576 tokens
- Moderated: No
MoonshotAI: Kimi K2 Thinking
Kimi K2 Thinking is Moonshot AI’s most advanced open reasoning model to date, extending the K2 series into agentic, long-horizon reasoning. Built on the trillion-parameter Mixture-of-Experts (MoE) arc...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000045
- Completion: $0.00000235
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Amazon: Nova Premier 1.0
Amazon Nova Premier is the most capable of Amazon’s multimodal models for complex reasoning tasks and for use as the best teacher for distilling custom models.
Architecture
- Modality: text+image->text
- Tokenizer: Nova
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.0000125
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 32,000 tokens
- Moderated: Yes
Perplexity: Sonar Pro Search
Exclusively available on the OpenRouter API, Sonar Pro's new Pro Search mode is Perplexity's most advanced agentic search system. It is designed for deeper reasoning and analysis. Pricing is based on ...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 8,000 tokens
- Moderated: No
Mistral: Voxtral Small 24B 2507
Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance. It excels at speech transcription, translati...
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text, audio
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000003
Capabilities
- Context Length: 32,000 tokens
- Moderated: No
OpenAI: gpt-oss-safeguard-20b
gpt-oss-safeguard-20b is a safety reasoning model from OpenAI built upon gpt-oss-20b. This open-weight, 21B-parameter Mixture-of-Experts (MoE) model offers lower latency for safety tasks like content ...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000075
- Completion: $0.0000003
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 65,536 tokens
- Moderated: No
NVIDIA: Nemotron Nano 12B 2 VL (free)
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, c...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text, video
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 128,000 tokens
- Moderated: No
NVIDIA: Nemotron Nano 12B 2 VL
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, c...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text, video
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000006
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
MiniMax: MiniMax M2
MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000255
- Completion: $0.00000102
Capabilities
- Context Length: 204,800 tokens
- Max Completion: 131,072 tokens
- Moderated: No
LiquidAI/LFM2-8B-A1B
Model created via inbox interface
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000001
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
LiquidAI/LFM2-2.6B
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000001
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
IBM: Granite 4.0 Micro
Granite-4.0-H-Micro is a 3B-parameter model from the Granite 4 family, the latest series of models released by IBM. They are fine-tuned for long-context tool calling.
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000017
- Completion: $0.00000011
Capabilities
- Context Length: 131,000 tokens
- Moderated: No
Deep Cogito: Cogito V2 Preview Llama 405B
Cogito v2 405B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. It represents a significant step toward frontier intelligence with dense arc...
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000035
- Completion: $0.0000035
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
OpenAI: GPT-5 Image Mini
GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by [GPT-5 Mini](https://openrouter.ai/openai/gpt-5-mini), with GPT Image 1 Mini for efficient image generation. This natively...
Architecture
- Modality: text+image->text+image
- Tokenizer: GPT
- Input: file, image, text
- Output: image, text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.000002
- Image: $0.0000025
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
Anthropic: Claude Haiku 4.5
Claude Haiku 4.5 is Anthropic’s fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of larger Claude models. Matching Claude Sonnet 4’s perfor...
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000005
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 64,000 tokens
- Moderated: Yes
Qwen: Qwen3 VL 8B Thinking
Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000018
- Completion: $0.0000021
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 VL 8B Instruct
Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.000000064
- Completion: $0.0000004
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
OpenAI: GPT-5 Image
[GPT-5](https://openrouter.ai/openai/gpt-5) Image combines OpenAI's most advanced language model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code qu...
Architecture
- Modality: text+image->text+image
- Tokenizer: GPT
- Input: image, text, file
- Output: image, text
Pricing (per token)
- Prompt: $0.00001
- Completion: $0.00001
- Image: $0.00001
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: o3 Deep Research
o3-deep-research is OpenAI's advanced model for deep research, designed to tackle complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.00001
- Completion: $0.00004
- Image: $0.00765
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
OpenAI: o4 Mini Deep Research
o4-mini-deep-research is OpenAI's faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds addi...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: file, image, text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
- Image: $0.00153
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
NVIDIA: Llama 3.3 Nemotron Super 49B V1.5
Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context. It’s post-trained for agentic workflows (RAG,...
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000004
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Baidu: ERNIE 4.5 21B A3B Thinking
ERNIE-4.5-21B-A3B-Thinking is Baidu's upgraded lightweight MoE model, refined to boost reasoning depth and quality for top-tier performance in logical puzzles, math, science, coding, text generation, ...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000056
- Completion: $0.000000224
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Google: Gemini 2.5 Flash Image (Nano Banana)
Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is now generally available. It is a state of the art image generation model with contextual understanding. It is capable of image generation, edits, and m...
Architecture
- Modality: text+image->text+image
- Tokenizer: Gemini
- Input: image, text
- Output: image, text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000025
- Image: $0.001238
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 VL 30B A3B Thinking
Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex ...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000016
- Completion: $0.0000008
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 VL 30B A3B Instruct
Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general mu...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000006
Capabilities
- Context Length: 262,144 tokens
- Moderated: No
OpenAI: GPT-5 Pro
GPT-5 Pro is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instructi...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000015
- Completion: $0.00012
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
Z.AI: GLM 4.6
Compared with GLM-4.5, this generation brings several key improvements: Longer context window: The context window has been expanded from 128K to 200K tokens, enabling the model to handle more complex...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.00000175
Capabilities
- Context Length: 202,752 tokens
- Max Completion: 202,752 tokens
- Moderated: No
Z.AI: GLM 4.6 (exacto)
Compared with GLM-4.5, this generation brings several key improvements: Longer context window: The context window has been expanded from 128K to 200K tokens, enabling the model to handle more complex...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000044
- Completion: $0.00000176
Capabilities
- Context Length: 204,800 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Anthropic: Claude Sonnet 4.5
Claude Sonnet 4.5 is Anthropic’s most advanced Sonnet model to date, optimized for real-world agents and coding workflows. It delivers state-of-the-art performance on coding benchmarks such as SWE-ben...
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 64,000 tokens
- Moderated: No
DeepSeek: DeepSeek V3.2 Exp
DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grai...
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000216
- Completion: $0.000000328
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 65,536 tokens
- Moderated: No
TheDrummer: Cydonia 24B V4.1
Uncensored and creative writing model based on Mistral Small 3.2 24B with good recall, prompt adherence, and intelligence.
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000005
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Relace: Relace Apply 3
Relace Apply 3 is a specialized code-patching LLM that merges AI-suggested edits straight into your source files. It can apply updates from GPT-4o, Claude, and others into your files at 10,000 tokens/...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000085
- Completion: $0.00000125
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 128,000 tokens
- Moderated: No
Google: Gemini 2.5 Flash Preview 09-2025
Gemini 2.5 Flash Preview September 2025 Checkpoint is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes bu...
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: image, file, text, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000025
- Image: $0.001238
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Google: Gemini 2.5 Flash Lite Preview 09-2025
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better...
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000004
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Qwen: Qwen3 VL 235B A22B Thinking
Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STE...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Qwen: Qwen3 VL 235B A22B Instruct
Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language...
Architecture
- Modality: text+image->text
- Tokenizer: Qwen3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000021
- Completion: $0.0000019
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 Max
Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the Janua...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000012
- Completion: $0.000006
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 Coder Plus
Qwen3 Coder Plus is Alibaba's proprietary version of the Open Source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and environment ...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000005
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 65,536 tokens
- Moderated: No
OpenAI: GPT-5 Codex
GPT-5-Codex is a specialized version of GPT-5 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of compl...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
DeepSeek: DeepSeek V3.1 Terminus (exacto)
DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language cons...
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000216
- Completion: $0.0000008
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 65,536 tokens
- Moderated: No
DeepSeek: DeepSeek V3.1 Terminus
DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language cons...
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000216
- Completion: $0.0000008
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 65,536 tokens
- Moderated: No
xAI: Grok 4 Fast
Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model on xAI's [news pos...
Architecture
- Modality: text+image->text
- Tokenizer: Grok
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000005
Capabilities
- Context Length: 2,000,000 tokens
- Max Completion: 30,000 tokens
- Moderated: No
Tongyi DeepResearch 30B A3B (free)
Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Tongyi DeepResearch 30B A3B
Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters activating only 3 billion per token. It's optimized for long-horizon, deep information-...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000009
- Completion: $0.0000004
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Qwen: Qwen3 Coder Flash
Qwen3 Coder Flash is Alibaba's fast, cost-efficient version of its proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling and en...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000015
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 65,536 tokens
- Moderated: No
OpenGVLab: InternVL3 78B
The InternVL3 series is an advanced multimodal large language model (MLLM). Compared to InternVL 2.5, InternVL3 demonstrates stronger multimodal perception and reasoning capabilities. In addition, I...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000007
- Completion: $0.00000026
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 Next 80B A3B Thinking
Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It’s designed for hard multi-step problems: math proofs, code s...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000012
- Completion: $0.0000012
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen3 Next 80B A3B Instruct
Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code ...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000008
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Meituan: LongCat Flash Chat (free)
LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-conn...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Meituan: LongCat Flash Chat
LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-conn...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.00000075
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Qwen: Qwen Plus 0728
Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000012
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Qwen: Qwen Plus 0728 (thinking)
Qwen Plus 0728, based on the Qwen3 foundation model, is a 1 million context hybrid reasoning model with a balanced performance, speed, and cost combination.
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.000004
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 32,768 tokens
- Moderated: No
NVIDIA: Nemotron Nano 9B V2 (free)
NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and t...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
NVIDIA: Nemotron Nano 9B V2
NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and t...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.00000016
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
MoonshotAI: Kimi K2 0905
Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters ...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000039
- Completion: $0.0000019
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
MoonshotAI: Kimi K2 0905 (exacto)
Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters ...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000006
- Completion: $0.0000025
Capabilities
- Context Length: 262,144 tokens
- Moderated: No
Deep Cogito: Cogito V2 Preview Llama 70B
Cogito v2 70B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. Built with iterative policy improvement, it delivers strong performance acros...
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000088
- Completion: $0.00000088
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Cogito V2 Preview Llama 109B
An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated ...
Architecture
- Modality: text+image->text
- Tokenizer: Llama4
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000018
- Completion: $0.00000059
Capabilities
- Context Length: 32,767 tokens
- Moderated: No
Deep Cogito: Cogito V2 Preview Deepseek 671B
Cogito v2 is a multilingual, instruction-tuned Mixture of Experts (MoE) large language model with 671 billion parameters. It supports both standard and reasoning-based generation modes. The model intr...
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00000125
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
StepFun: Step3
Step3 is a cutting-edge multimodal reasoning model—built on a Mixture-of-Experts architecture with 321B total parameters and 38B active. It is designed end-to-end to minimize decoding costs while deli...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000057
- Completion: $0.00000142
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Qwen: Qwen3 30B A3B Thinking 2507
Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking m...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000051
- Completion: $0.00000034
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
xAI: Grok Code Fast 1
Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the response, developers can steer Grok Code for high-quality work flows.
Architecture
- Modality: text->text
- Tokenizer: Grok
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000015
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 10,000 tokens
- Moderated: No
Nous: Hermes 4 70B
Hermes 4 70B is a hybrid reasoning model from Nous Research, built on Meta-Llama-3.1-70B. It introduces the same hybrid mode as the larger 405B release, allowing the model to either respond directly o...
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000011
- Completion: $0.00000038
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Nous: Hermes 4 405B
Hermes 4 is a large-scale reasoning model built on Meta-Llama-3.1-405B and released by Nous Research. It introduces a hybrid reasoning mode, where the model can choose to deliberate internally with <t...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Google: Gemini 2.5 Flash Image Preview (Nano Banana)
Gemini 2.5 Flash Image Preview, a.k.a. "Nano Banana," is a state of the art image generation model with contextual understanding. It is capable of image generation, edits, and multi-turn conversations...
Architecture
- Modality: text+image->text+image
- Tokenizer: Gemini
- Input: image, text
- Output: image, text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000025
- Image: $0.001238
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
DeepSeek: DeepSeek V3.1
DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase ...
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000008
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
OpenAI: GPT-4o Audio
The gpt-4o-audio-preview model adds support for audio inputs as prompts. This enhancement allows the model to detect nuances within audio recordings and add depth to generated user experiences. Audio ...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: audio, text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Mistral: Mistral Medium 3.1
Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced opera...
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.000002
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Baidu: ERNIE 4.5 21B A3B
A sophisticated text-based Mixture-of-Experts (MoE) model featuring 21B total parameters with 3B activated per token, delivering exceptional multimodal understanding and generation through heterogeneo...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000056
- Completion: $0.000000224
Capabilities
- Context Length: 120,000 tokens
- Max Completion: 8,000 tokens
- Moderated: No
Baidu: ERNIE 4.5 VL 28B A3B
A powerful multimodal Mixture-of-Experts chat model featuring 28B total parameters with 3B activated per token, delivering exceptional text and vision understanding through its innovative heterogeneou...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000000112
- Completion: $0.000000448
Capabilities
- Context Length: 30,000 tokens
- Max Completion: 8,000 tokens
- Moderated: No
Z.AI: GLM 4.5V
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B parameters and 12B activated parameters, it achieves state-...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000048
- Completion: $0.00000144
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 16,384 tokens
- Moderated: No
AI21: Jamba Mini 1.7
Jamba Mini 1.7 is a compact and efficient member of the Jamba open model family, incorporating key improvements in grounding and instruction-following while maintaining the benefits of the SSM-Transfo...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000004
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 4,096 tokens
- Moderated: No
AI21: Jamba Large 1.7
Jamba Large 1.7 is the latest model in the Jamba open family, offering improvements in grounding, instruction-following, and overall efficiency. Built on a hybrid SSM-Transformer architecture with a 2...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 4,096 tokens
- Moderated: No
OpenAI: GPT-5 Chat
GPT-5 Chat is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: file, image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
OpenAI: GPT-5
GPT-5 is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instruction f...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: GPT-5 Mini
GPT-5 Mini is a compact version of GPT-5, designed to handle lighter-weight reasoning tasks. It provides the same instruction-following and safety-tuning benefits as GPT-5, but with reduced latency an...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.000002
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: GPT-5 Nano
GPT-5-Nano is the smallest and fastest variant in the GPT-5 system, optimized for developer tools, rapid interactions, and ultra-low latency environments. While limited in reasoning depth compared to ...
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000004
Capabilities
- Context Length: 400,000 tokens
- Max Completion: 128,000 tokens
- Moderated: Yes
OpenAI: gpt-oss-120b (exacto)
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B par...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.0000002
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
OpenAI: gpt-oss-120b
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B par...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.0000002
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
OpenAI: gpt-oss-20b (free)
gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimiz...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
OpenAI: gpt-oss-20b
gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimiz...
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000014
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
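Several models appear twice in this list: a "(free)" listing priced at $0 and a paid listing, like the gpt-oss-20b pair above. Free pools are typically rate-limited, so one common pattern is to try the free listing first and fall back to the paid one. A sketch against OpenRouter's REST API (the `:free` ID suffix is OpenRouter's usual convention; treat the exact IDs as assumptions):

```ts
// Sketch: prefer the free listing, fall back to the paid one on failure.
async function completeWithFallback(prompt: string): Promise<string> {
  const ids = ['openai/gpt-oss-20b:free', 'openai/gpt-oss-20b']; // assumed IDs
  for (const model of ids) {
    const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
    });
    if (res.ok) {
      const data = await res.json();
      return data.choices[0].message.content;
    }
    // Non-OK response (e.g. rate-limited free pool): try the next listing.
  }
  throw new Error('All model listings failed');
}
```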
Anthropic: Claude Opus 4.1
Claude Opus 4.1 is an updated version of Anthropic’s flagship model, offering improved performance in coding, reasoning, and agentic tasks. It achieves 74.5% on SWE-bench Verified and shows notable ga...
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000015
- Completion: $0.000075
- Image: $0.024
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 32,000 tokens
- Moderated: No
Mistral: Codestral 2508
Mistral's cutting-edge language model for coding, released at the end of July 2025. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test genera...
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000009
Capabilities
- Context Length: 256,000 tokens
- Moderated: No
Qwen: Qwen3 Coder 30B A3B Instruct
Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, an...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000025
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Qwen: Qwen3 30B A3B Instruct 2507
Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is designed for high-quali...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000008
- Completion: $0.00000033
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Z.AI: GLM 4.5
GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It leverages a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128k tokens. GL...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000035
- Completion: $0.00000155
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Z.AI: GLM 4.5 Air (free)
GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but wit...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Z.AI: GLM 4.5 Air
GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but wit...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000104
- Completion: $0.00000068
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 98,304 tokens
- Moderated: No
Qwen: Qwen3 235B A22B Thinking 2507
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000011
- Completion: $0.0000006
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Z.AI: GLM 4 32B
GLM 4 32B is a cost-effective foundation language model. It can efficiently perform complex tasks and has significantly enhanced capabilities in tool use, online search, and code-related intelligent ...
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000001
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Qwen: Qwen3 Coder 480B A35B (free)
Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-con...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 262,000 tokens
- Max Completion: 262,000 tokens
- Moderated: No
Qwen: Qwen3 Coder 480B A35B
Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-con...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000022
- Completion: $0.00000095
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
Qwen: Qwen3 Coder 480B A35B (exacto)
Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-con...
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000038
- Completion: $0.00000153
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 262,144 tokens
- Moderated: No
ByteDance: UI-TARS 7B
UI-TARS-1.5 is a multimodal vision-language agent optimized for GUI-based environments, including desktop interfaces, web browsers, mobile systems, and games. Built by ByteDance, it builds upon the UI...
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000002
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 2,048 tokens
- Moderated: No
Google: Gemini 2.5 Flash Lite
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000004
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,535 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 235B A22B Instruct 2507
Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000072
- Completion: $0.000000464
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Switchpoint Router
Switchpoint AI's router instantly analyzes your request and directs it to the optimal AI from an ever-evolving library. As the world of LLMs advances, the router gets smarter, ensuring you always be...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000085
- Completion: $0.0000034
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
MoonshotAI: Kimi K2 0711 (free)
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for a...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 32,768 tokens
- Moderated: Yes
Supported Parameters
MoonshotAI: Kimi K2 0711
Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for a...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000456
- Completion: $0.00000184
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
THUDM: GLM 4.1V 9B Thinking
GLM-4.1V-9B-Thinking is a 9B parameter vision-language model developed by THUDM, based on the GLM-4-9B foundation. It introduces a reasoning-centric "thinking paradigm" enhanced with reinforcement lea...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.000000028
- Completion: $0.0000001104
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 8,000 tokens
- Moderated: No
Supported Parameters
Mistral: Devstral Medium
Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves 61.6% on SW...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.000002
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral: Devstral Small 1.1
Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and relea...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000007
- Completion: $0.00000028
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Venice: Uncensored (free)
Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “unc...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
xAI: Grok 4
Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Grok
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 256,000 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3n 2B (free)
Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 2,048 tokens
- Moderated: No
Supported Parameters
Tencent: Hunyuan A13B Instruct
Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers compe...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000014
- Completion: $0.00000057
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
TNG: DeepSeek R1T2 Chimera (free)
DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 c...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
TNG: DeepSeek R1T2 Chimera
DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 c...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Supported Parameters
Morph: Morph V3 Large
Morph's high-accuracy apply model for complex code edits. ~4,500 tokens/sec with 98% accuracy for precise code transformations. The model requires the prompt to be in the following format: <instruct...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000009
- Completion: $0.0000019
Capabilities
- Context Length: 262,144 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
Morph: Morph V3 Fast
Morph's fastest apply model for code edits. ~10,500 tokens/sec with 96% accuracy for rapid code transformations. The model requires the prompt to be in the following format: <instruction>{instructio...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.0000012
Capabilities
- Context Length: 81,920 tokens
- Max Completion: 38,000 tokens
- Moderated: No
Supported Parameters
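Both Morph apply models expect the edit request wrapped in a tagged prompt; the exact template is truncated in the descriptions above, so the tag layout below is an assumption reconstructed from the visible '<instruction>' fragment:
import { generateText } from 'webllm';
// Tag names beyond <instruction> are assumed; the model slug is also assumed.
const prompt = [
  '<instruction>Rename the variable x to total</instruction>',
  '<code>const x = a + b;</code>',
].join('\n');
const result = await generateText({ model: 'morph/morph-v3-fast', prompt });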
Baidu: ERNIE 4.5 VL 424B A47B
ERNIE-4.5-VL-424B-A47B is a multimodal Mixture-of-Experts (MoE) model from Baidu’s ERNIE 4.5 series, featuring 424B total parameters with 47B active per token. It is trained jointly on text and image ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.000000336
- Completion: $0.000001
Capabilities
- Context Length: 123,000 tokens
- Max Completion: 16,000 tokens
- Moderated: No
Supported Parameters
Baidu: ERNIE 4.5 300B A47B
ERNIE-4.5-300B-A47B is a 300B parameter Mixture-of-Experts (MoE) language model developed by Baidu as part of the ERNIE 4.5 series. It activates 47B parameters per token and supports text generation i...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000224
- Completion: $0.00000088
Capabilities
- Context Length: 123,000 tokens
- Max Completion: 12,000 tokens
- Moderated: No
Supported Parameters
TheDrummer: Anubis 70B V1.1
TheDrummer's Anubis v1.1 is an unaligned, creative Llama 3.3 70B model focused on providing character-driven roleplay & stories. It excels at gritty, visceral prose, unique character adherence, and co...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000075
- Completion: $0.000001
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
Inception: Mercury
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Clau...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.000001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral Small 3.2 24B
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000018
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
MiniMax: MiniMax M1
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000022
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 40,000 tokens
- Moderated: No
Supported Parameters
Google: Gemini 2.5 Flash
Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, en...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: file, image, text, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000025
- Image: $0.001238
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,535 tokens
- Moderated: No
Supported Parameters
Google: Gemini 2.5 Pro
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through respo...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
- Image: $0.00516
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Supported Parameters
MoonshotAI: Kimi Dev 72B
Kimi-Dev-72B is an open-source large language model fine-tuned for software engineering and issue resolution tasks. Based on Qwen2.5-72B, it is optimized using large-scale reinforcement learning that ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000029
- Completion: $0.00000115
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
OpenAI: o3 Pro
The o-series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o3-pro model uses more compute to think harder and provide consistently be...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, file, image
- Output: text
Pricing (per token)
- Prompt: $0.00002
- Completion: $0.00008
- Image: $0.0153
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
xAI: Grok 3 Mini
A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.
View Details
Architecture
- Modality: text->text
- Tokenizer: Grok
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000005
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
xAI: Grok 3
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...
View Details
Architecture
- Modality: text->text
- Tokenizer: Grok
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral: Magistral Small 2506
Magistral Small is a 24B parameter instruction-tuned model based on Mistral-Small-3.1 (2503), enhanced through supervised fine-tuning on traces from Magistral Medium and further refined via reinforcem...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.0000015
Capabilities
- Context Length: 40,000 tokens
- Max Completion: 40,000 tokens
- Moderated: No
Supported Parameters
Mistral: Magistral Medium 2506 (thinking)
Magistral is Mistral's first reasoning model. It is ideal for general purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. From legal research and financia...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000005
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,000 tokens
- Moderated: No
Supported Parameters
Mistral: Magistral Medium 2506
Magistral is Mistral's first reasoning model. It is ideal for general purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. From legal research and financia...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000005
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,000 tokens
- Moderated: No
Supported Parameters
Google: Gemini 2.5 Pro Preview 06-05
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through respo...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: file, image, text, audio
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
- Image: $0.00516
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,536 tokens
- Moderated: No
Supported Parameters
DeepSeek: DeepSeek R1 0528 Qwen3 8B
DeepSeek-R1-0528 is an incremental upgrade of DeepSeek R1 that applies additional compute and improved post-training techniques, bringing its reasoning and inference close to flagship models like O3 and G...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.0000001
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
DeepSeek: R1 0528
May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in siz...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000045
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude Opus 4
Claude Opus 4 is benchmarked as the world’s best coding model, at time of release, bringing sustained performance on complex, long-running tasks and agent workflows. It sets new benchmarks in software...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000015
- Completion: $0.000075
- Image: $0.024
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 32,000 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude Sonnet 4
Claude Sonnet 4 significantly enhances the capabilities of its predecessor, Sonnet 3.7, excelling in both coding and reasoning tasks with improved precision and controllability. Achieving state-of-the...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
- Image: $0.0048
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 64,000 tokens
- Moderated: No
Supported Parameters
Mistral: Devstral Small 2505
Devstral-Small-2505 is a 24B parameter agentic LLM fine-tuned from Mistral-Small-3.1, jointly developed by Mistral AI and All Hands AI for advanced software engineering tasks. It is optimized for code...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000012
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3n 4B (free)
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enab...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 2,048 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3n 4B
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enab...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.00000004
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
OpenAI: Codex Mini
codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1.
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.0000015
- Completion: $0.000006
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
Nous: DeepHermes 3 Mistral 24B Preview
DeepHermes 3 (Mistral 24B Preview) is an instruction-tuned language model by Nous Research based on Mistral-Small-24B, designed for chat, function calling, and advanced multi-turn reasoning. It introd...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000002
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral Medium 3
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.000002
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Google: Gemini 2.5 Pro Preview 05-06
Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through respo...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.00000125
- Completion: $0.00001
- Image: $0.00516
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 65,535 tokens
- Moderated: No
Supported Parameters
Arcee AI: Spotlight
Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32k-token context window, enabling rich ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000018
- Completion: $0.00000018
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 65,537 tokens
- Moderated: No
Supported Parameters
Arcee AI: Maestro Reasoning
Maestro Reasoning is Arcee's flagship analysis model: a 32B‑parameter derivative of Qwen 2.5‑32B tuned with DPO and chain‑of‑thought RL for step‑by‑step logic. Compared to the earlier 7B preview, t...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000009
- Completion: $0.0000033
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,000 tokens
- Moderated: No
Supported Parameters
Arcee AI: Virtuoso Large
Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70B peers, it retains the 128k cont...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000075
- Completion: $0.0000012
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 64,000 tokens
- Moderated: No
Supported Parameters
Arcee AI: Coder Large
Coder‑Large is a 32B‑parameter offspring of Qwen 2.5‑Instruct that has been further trained on permissively‑licensed GitHub, CodeSearchNet and synthetic bug‑fix corpora. It supports a 32k context win...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.0000008
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Microsoft: Phi 4 Reasoning Plus
Phi-4-reasoning-plus is an enhanced 14B parameter model from Microsoft, fine-tuned from Phi-4 with additional reinforcement learning to boost accuracy on math, science, and code reasoning tasks. It us...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000007
- Completion: $0.00000035
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Inception: Mercury Coder
Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haik...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.000001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 4B (free)
Qwen3-4B is a 4 billion parameter dense language model from the Qwen3 series, designed to support both general-purpose and reasoning-intensive tasks. It introduces a dual-mode architecture—thinking an...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 40,960 tokens
- Moderated: No
Supported Parameters
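Several Qwen3 listings describe a dual-mode design that switches between a "thinking" mode for complex reasoning and a faster non-thinking mode. Qwen documents a soft switch embedded in the prompt for this; the sketch below assumes that convention (and an assumed model slug) rather than any WebLLM-specific flag:
import { generateText } from 'webllm';
// '/no_think' is Qwen3's documented soft switch for disabling thinking mode;
// forwarding it through a plain prompt string is an assumption here.
const quick = await generateText({ model: 'qwen/qwen3-4b:free', prompt: 'What is 2 + 2? /no_think' });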
DeepSeek: DeepSeek Prover V2
DeepSeek Prover V2 is a 671B parameter model, speculated to be geared towards logic and mathematics. Likely an upgrade from [DeepSeek-Prover-V1.5](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.00000218
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
Meta: Llama Guard 4 12B
Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text
- Output: text
Pricing (per token)
- Prompt: $0.00000018
- Completion: $0.00000018
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
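Llama Guard models classify content rather than converse: you pass the text to be screened and read back a safety verdict. A minimal sketch, assuming the model slug below, that generateText accepts a model option, and that the result exposes the generation on a text field:
import { generateText } from 'webllm';
// Model slug and result shape are assumptions; Llama Guard conventionally
// answers 'safe' or 'unsafe' followed by the violated category codes.
const verdict = await generateText({
  model: 'meta-llama/llama-guard-4-12b',
  prompt: 'How do I make a cake?',
});
const isSafe = verdict.text?.trim().toLowerCase().startsWith('safe');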
Qwen: Qwen3 30B A3B
Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tas...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000022
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,960 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 8B
Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode f...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000028
- Completion: $0.0000001104
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 20,000 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 14B
Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode f...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.00000022
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,960 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 32B
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000008
- Completion: $0.00000024
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,960 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 235B A22B (free)
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex r...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen3 235B A22B
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex r...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000018
- Completion: $0.00000054
Capabilities
- Context Length: 40,960 tokens
- Max Completion: 40,960 tokens
- Moderated: No
Supported Parameters
TNG: DeepSeek R1T Chimera (free)
DeepSeek-R1T-Chimera is created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token efficiency improvements of V3. It is based on a DeepSeek-MoE Tr...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
TNG: DeepSeek R1T Chimera
DeepSeek-R1T-Chimera is created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token efficiency improvements of V3. It is based on a DeepSeek-MoE Tr...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Supported Parameters
Microsoft: MAI DS R1
MAI-DS-R1 is a post-trained variant of DeepSeek-R1 developed by the Microsoft AI team to improve the model’s responsiveness on previously blocked topics while enhancing its safety profile. Built on to...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Supported Parameters
OpenAI: o4 Mini High
OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000011
- Completion: $0.0000044
- Image: $0.0008415
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
OpenAI: o3
o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
- Image: $0.00153
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
OpenAI: o4 Mini
OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonst...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000011
- Completion: $0.0000044
- Image: $0.0008415
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
Qwen: Qwen2.5 Coder 7B Instruct
Qwen2.5-Coder-7B-Instruct is a 7B parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Based on the Qwen2.5 architecture, it ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000009
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4.1
GPT-4.1 is a flagship large language model optimized for advanced instruction following, real-world software engineering, and long-context reasoning. It supports a 1 million token context window and o...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
Capabilities
- Context Length: 1,047,576 tokens
- Max Completion: 32,768 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4.1 Mini
GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains a 1 million token context window and scores 45.1% on hard instructi...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000016
Capabilities
- Context Length: 1,047,576 tokens
- Max Completion: 32,768 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4.1 Nano
For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance at a small size with its 1 million token context window, an...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: image, text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000004
Capabilities
- Context Length: 1,047,576 tokens
- Max Completion: 32,768 tokens
- Moderated: Yes
Supported Parameters
EleutherAI: Llemma 7b
Llemma 7B is a language model for mathematics. It was initialized with Code Llama 7B weights, and trained on the Proof-Pile-2 for 200B tokens. Llemma models are particularly strong at chain-of-thought...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.0000012
Capabilities
- Context Length: 4,096 tokens
- Max Completion: 4,096 tokens
- Moderated: No
Supported Parameters
AlfredPros: CodeLLaMa 7B Instruct Solidity
A 7-billion-parameter Code LLaMA Instruct model fine-tuned to generate Solidity smart contracts, using 4-bit QLoRA fine-tuning provided by the PEFT library.
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.0000012
Capabilities
- Context Length: 4,096 tokens
- Max Completion: 4,096 tokens
- Moderated: No
Supported Parameters
ArliAI: QwQ 32B RpR v1
QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000011
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
xAI: Grok 3 Mini Beta
Grok 3 Mini is a lightweight thinking model. Unlike traditional models that generate answers immediately, Grok 3 Mini thinks before responding. It’s ideal for reasoning-heavy tasks that don’t...
View Details
Architecture
- Modality: text->text
- Tokenizer: Grok
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000005
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
xAI: Grok 3 Beta
Grok 3 is the latest model from xAI. It's their flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, hea...
View Details
Architecture
- Modality: text->text
- Tokenizer: Grok
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1
Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Me...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000006
- Completion: $0.0000018
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Meta: Llama 4 Maverick
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forw...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Llama4
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000000136
- Completion: $0.00000068
- Image: $0.0006684
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Meta: Llama 4 Scout
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Llama4
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000008
- Completion: $0.0000003
- Image: $0.0003342
Capabilities
- Context Length: 327,680 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen2.5 VL 32B Instruct
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It exce...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Qwen
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.00000022
Capabilities
- Context Length: 16,384 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
DeepSeek: DeepSeek V3 0324
DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) mo...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.00000088
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
OpenAI: o1-pro
The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide consistently b...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00015
- Completion: $0.0006
- Image: $0.21675
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
Mistral: Mistral Small 3.1 24B (free)
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral Small 3.1 24B
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000011
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
AllenAI: Olmo 2 32B Instruct
OLMo-2 32B Instruct is a supervised instruction-finetuned variant of the OLMo-2 32B March 2025 base model. It excels in complex reasoning and instruction-following tasks across diverse benchmarks such...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000002
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3 4B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3 4B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000001703012
- Completion: $0.0000000681536
Capabilities
- Context Length: 96,000 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3 12B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3 12B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.0000001
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
Cohere: Command A
Command A is an open-weights 111B parameter model with a 256k context window focused on delivering great performance across agentic, multilingual, and coding use cases. Compared to other leading propr...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
Capabilities
- Context Length: 256,000 tokens
- Max Completion: 8,192 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o-mini Search Preview
GPT-4o mini Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000006
- Image: $0.000217
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o Search Preview
GPT-4o Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
- Image: $0.003613
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
Google: Gemma 3 27B (free)
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Google: Gemma 3 27B
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000007
- Completion: $0.0000005
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
TheDrummer: Skyfall 36B V2
Skyfall 36B v2 is an enhanced iteration of Mistral Small 2501, specifically fine-tuned for improved creativity, nuanced writing, role-playing, and coherent storytelling.
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.0000008
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
Microsoft: Phi 4 Multimodal Instruct
Phi-4 Multimodal Instruct is a versatile 5.6B parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs, providing accurate ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000001
- Image: $0.00017685
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Perplexity: Sonar Reasoning Pro
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro). Sonar Reason...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Perplexity: Sonar Pro
Note: Sonar Pro pricing includes Perplexity search pricing. See [details here](https://docs.perplexity.ai/guides/pricing#detailed-pricing-breakdown-for-sonar-reasoning-pro-and-sonar-pro). For enterpri...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 8,000 tokens
- Moderated: No
Supported Parameters
Perplexity: Sonar Deep Research
Sonar Deep Research is a research-focused model designed for multi-step retrieval, synthesis, and reasoning across complex topics. It autonomously searches, reads, and evaluates sources, refining its ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000008
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Qwen: QwQ 32B
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, and can achieve significantly enhanced performance in d...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000004
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Google: Gemini 2.0 Flash Lite
Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemin...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.000000075
- Completion: $0.0000003
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude 3.7 Sonnet (thinking)
Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rap...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
- Image: $0.0048
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 64,000 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude 3.7 Sonnet
Claude 3.7 Sonnet is an advanced large language model with improved reasoning, coding, and problem-solving capabilities. It introduces a hybrid reasoning approach, allowing users to choose between rap...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
- Image: $0.0048
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 64,000 tokens
- Moderated: No
Supported Parameters
Mistral: Saba
Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000006
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Llama Guard 3 8B
Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classificati...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.00000006
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
OpenAI: o3 Mini High
OpenAI o3-mini-high is the same model as [o3-mini](/openai/o3-mini) with reasoning_effort set to high. o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly exc...
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000011
- Completion: $0.0000044
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
Google: Gemini 2.0 Flash
Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image, file, audio, video
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000004
- Image: $0.0000258
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen VL Plus
Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recognition capabilities and text recognition abilities, supporting ultra-high pixel resolutions up to millions of pixe...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Qwen
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000021
- Completion: $0.00000063
- Image: $0.0002688
Capabilities
- Context Length: 7,500 tokens
- Max Completion: 1,500 tokens
- Moderated: No
Supported Parameters
AionLabs: Aion-1.0
Aion-1.0 is a multi-model system designed for high performance across various tasks, including reasoning and coding. It is built on DeepSeek-R1, augmented with additional models and techniques such as...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000004
- Completion: $0.000008
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
AionLabs: Aion-1.0-Mini
Aion-1.0-Mini is a 32B-parameter distilled version of the DeepSeek-R1 model, designed for strong performance in reasoning domains such as mathematics, coding, and logic. It is a modified variant...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000007
- Completion: $0.0000014
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
AionLabs: Aion-RP 1.0 (8B)
Aion-RP-Llama-3.1-8B ranks the highest in the character evaluation portion of the RPBench-Auto benchmark, a roleplaying-specific variant of Arena-Hard-Auto, where LLMs evaluate each other’s responses....
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000002
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen VL Max
Qwen VL Max is a visual understanding model with a 7,500-token context length. It delivers strong performance across a broad spectrum of complex tasks.
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Qwen
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.0000032
- Image: $0.001024
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen-Turbo
Qwen-Turbo, based on Qwen2.5, is a 1M-context model that offers high speed at low cost, suited to simple tasks.
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.0000002
Capabilities
- Context Length: 1,000,000 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen2.5 VL 72B Instruct
Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and insects. It is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Qwen
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000013
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen-Plus
Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K-context model offering a balanced combination of performance, speed, and cost.
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000012
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen-Max
Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 2...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000016
- Completion: $0.0000064
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
OpenAI: o3 Mini
OpenAI o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding. This model supports the `reasoning_effort` parameter,...
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text, file
- Output: text
Pricing (per token)
- Prompt: $0.0000011
- Completion: $0.0000044
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
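Since o3-mini is called out above as supporting the `reasoning_effort` parameter, a request might look like the following sketch. The `model` and `reasoningEffort` options are assumptions about how the OpenRouter provider surfaces this parameter, and the model ID is illustrative:

```ts
import { generateText } from 'webllm';

// Hypothetical sketch: pass reasoning effort through to o3-mini.
// 'openai/o3-mini' and the `reasoningEffort` option are assumed
// names, not confirmed WebLLM API surface.
const result = await generateText({
  model: 'openai/o3-mini',
  prompt: 'Prove that the sum of two even integers is even.',
  reasoningEffort: 'high', // 'low' | 'medium' | 'high'
});
```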
Mistral: Mistral Small 3
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tune...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000005
- Completion: $0.00000008
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
DeepSeek: R1 Distill Qwen 32B
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperfor...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000024
- Completion: $0.00000024
Capabilities
- Context Length: 64,000 tokens
- Max Completion: 32,000 tokens
- Moderated: No
Supported Parameters
DeepSeek: R1 Distill Qwen 14B
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), using outputs from [DeepSeek R1](/deepseek/de...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000012
- Completion: $0.00000012
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Perplexity: Sonar Reasoning
Sonar Reasoning is a reasoning model provided by Perplexity based on [DeepSeek R1](/deepseek/deepseek-r1). It allows developers to utilize long chain of thought with built-in web search. Sonar Reason...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000005
Capabilities
- Context Length: 127,000 tokens
- Moderated: No
Supported Parameters
Perplexity: Sonar
Sonar is lightweight, affordable, fast, and simple to use — now featuring citations and the ability to customize sources. It is designed for companies seeking to integrate lightweight question-and-ans...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000001
Capabilities
- Context Length: 127,072 tokens
- Moderated: No
Supported Parameters
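Because the Sonar models perform web search behind the API, a lightweight question-and-answer integration can stay prompt-only. A sketch, assuming the provider accepts a per-call model override (the model ID shown is illustrative):

```ts
import { generateText } from 'webllm';

// Hypothetical: route a grounded, search-backed question to Sonar.
// 'perplexity/sonar' is an illustrative model ID.
const answer = await generateText({
  model: 'perplexity/sonar',
  prompt: 'What changed in the latest TypeScript release?',
});
console.log(answer);
```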
DeepSeek: R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The mo...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000013
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 131,072 tokens
- Moderated: No
Supported Parameters
DeepSeek: R1
DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully ...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Moderated: No
Supported Parameters
MiniMax: MiniMax-01
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can han...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Other
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000011
Capabilities
- Context Length: 1,000,192 tokens
- Max Completion: 1,000,192 tokens
- Moderated: No
Supported Parameters
Mistral: Codestral 2501
[Mistral](/mistralai)'s cutting-edge language model for coding. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Lear...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000009
Capabilities
- Context Length: 256,000 tokens
- Moderated: No
Supported Parameters
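Codestral's strengths are fill-in-the-middle, code correction, and test generation, and for low-latency use the surrounding code can travel directly in the prompt. A sketch assuming a per-call `model` option (the model ID is illustrative):

```ts
import { generateText } from 'webllm';

// Hypothetical: ask Codestral to fill in a function body.
// 'mistralai/codestral-2501' is an illustrative model ID.
const completion = await generateText({
  model: 'mistralai/codestral-2501',
  prompt: [
    'Complete the body of this TypeScript function:',
    'function median(values: number[]): number {',
    '  // TODO',
    '}',
  ].join('\n'),
});
```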
Microsoft: Phi 4
[Microsoft Research](/microsoft) Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed. At 14...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000014
Capabilities
- Context Length: 16,384 tokens
- Moderated: No
Supported Parameters
Sao10K: Llama 3.1 70B Hanami x1
This is [Sao10K](/sao10k)'s experiment over [Euryale v2.2](/sao10k/l3.1-euryale-70b).
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000003
Capabilities
- Context Length: 16,000 tokens
- Moderated: No
Supported Parameters
DeepSeek: DeepSeek V3
DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported ev...
View Details
Architecture
- Modality: text->text
- Tokenizer: DeepSeek
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000012
Capabilities
- Context Length: 163,840 tokens
- Max Completion: 163,840 tokens
- Moderated: No
Supported Parameters
Sao10K: Llama 3.3 Euryale 70B
Euryale L3.3 70B is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.2](/models/sao10k/l3-euryale-70b).
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000065
- Completion: $0.00000075
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
OpenAI: o1
The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason using ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000015
- Completion: $0.00006
- Image: $0.021675
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 100,000 tokens
- Moderated: Yes
Supported Parameters
Cohere: Command R7B (12-2024)
Command R7B (12-2024) is a small, fast update of the Command R+ model, delivered in December 2024. It excels at RAG, tool use, agents, and similar tasks requiring complex reasoning and multiple steps....
View Details
Architecture
- Modality: text->text
- Tokenizer: Cohere
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000000375
- Completion: $0.00000015
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,000 tokens
- Moderated: Yes
Supported Parameters
Google: Gemini 2.0 Flash Experimental (free)
Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Gemini
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 1,048,576 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.3 70B Instruct (free)
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimize...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
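Entries marked "(free)" are zero-cost listings of the same underlying model; on OpenRouter these are conventionally addressed by a `:free` suffix on the model ID. A sketch of switching between the two (the IDs and the per-call `model` option are assumptions):

```ts
import { generateText } from 'webllm';

// Hypothetical: use the free variant during development and the
// paid variant in production. Both IDs are illustrative.
const isProd = false; // flip for production builds
const modelId = isProd
  ? 'meta-llama/llama-3.3-70b-instruct'
  : 'meta-llama/llama-3.3-70b-instruct:free';

const result = await generateText({
  model: modelId,
  prompt: 'Summarize RFC 9110 in one paragraph.',
});
```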
Meta: Llama 3.3 70B Instruct
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimize...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000104
- Completion: $0.000000312
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 120,000 tokens
- Moderated: No
Supported Parameters
Amazon: Nova Lite 1.0
Amazon Nova Lite 1.0 is a very low-cost multimodal model from Amazon focused on fast processing of image, video, and text inputs to generate text output. Amazon Nova Lite can handle real-time cus...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Nova
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000024
- Image: $0.00009
Capabilities
- Context Length: 300,000 tokens
- Max Completion: 5,120 tokens
- Moderated: Yes
Supported Parameters
Amazon: Nova Micro 1.0
Amazon Nova Micro 1.0 is a text-only model that delivers the lowest latency responses in the Amazon Nova family of models at a very low cost. With a context length of 128K tokens and optimized for spe...
View Details
Architecture
- Modality: text->text
- Tokenizer: Nova
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000035
- Completion: $0.00000014
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 5,120 tokens
- Moderated: Yes
Supported Parameters
Amazon: Nova Pro 1.0
Amazon Nova Pro 1.0 is a capable multimodal model from Amazon focused on providing a combination of accuracy, speed, and cost for a wide range of tasks. As of December 2024, it achieves state-of-the-a...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Nova
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.0000032
- Image: $0.0012
Capabilities
- Context Length: 300,000 tokens
- Max Completion: 5,120 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o (2024-11-20)
The 2024-11-20 version of GPT-4o offers a leveled-up creative writing ability with more natural, engaging, and tailored writing to improve relevance & readability. It’s also better at working with upl...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
- Image: $0.003613
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
Mistral Large 2411
Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large) released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade on the pr...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000006
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral Large 2407
This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch annou...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000006
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral: Pixtral Large 2411
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411). The model is able to understand documents, charts and natural images....
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000006
- Image: $0.002888
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Qwen2.5 Coder 32B Instruct
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significantly improvem...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000011
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
SorcererLM 8x22B
SorcererLM is an advanced RP and storytelling model, built as a low-rank 16-bit LoRA fine-tune of [WizardLM-2 8x22B](/microsoft/wizardlm-2-8x22b). - Advanced reasoning and emotional intelligence for...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000045
- Completion: $0.0000045
Capabilities
- Context Length: 16,000 tokens
- Moderated: No
Supported Parameters
TheDrummer: UnslopNemo 12B
UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios.
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000004
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude 3.5 Haiku (2024-10-22)
Claude 3.5 Haiku features enhancements across all skill sets including coding, tool use, and reasoning. As the fastest model in the Anthropic lineup, it offers rapid response times suitable for applic...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.000004
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Anthropic: Claude 3.5 Haiku
Claude 3.5 Haiku offers enhanced capabilities in speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers quick response times that are essential for d...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000008
- Completion: $0.000004
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 8,192 tokens
- Moderated: Yes
Supported Parameters
Anthropic: Claude 3.5 Sonnet
New Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Sonnet is particularly good at: - Coding: Scores ~49% on SWE-Bench Verified, higher...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000015
- Image: $0.0048
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 8,192 tokens
- Moderated: Yes
Supported Parameters
Magnum v4 72B
This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically [Sonnet](https://openrouter.ai/anthropic/claude-3.5-sonnet) and [Opus](https://openrouter.ai/anthrop...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000005
Capabilities
- Context Length: 16,384 tokens
- Max Completion: 2,048 tokens
- Moderated: No
Supported Parameters
Mistral: Ministral 8B
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k contex...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000001
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral: Ministral 3B
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on mos...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.00000004
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Qwen: Qwen2.5 7B Instruct
Qwen2.5 7B is part of the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: - Significantly more knowledge and greatly improved capabilities in coding and...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.0000001
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
NVIDIA: Llama 3.1 Nemotron 70B Instruct
NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Leveraging [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and Reinfor...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000012
- Completion: $0.0000012
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Inflection: Inflection 3 Productivity
Inflection 3 Productivity is optimized for following instructions. It is better for tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news. For emotiona...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
Capabilities
- Context Length: 8,000 tokens
- Max Completion: 1,024 tokens
- Moderated: No
Supported Parameters
Inflection: Inflection 3 Pi
Inflection 3 Pi powers Inflection's [Pi](https://pi.ai) chatbot, including backstory, emotional intelligence, productivity, and safety. It has access to recent news, and excels in scenarios like custo...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
Capabilities
- Context Length: 8,000 tokens
- Max Completion: 1,024 tokens
- Moderated: No
Supported Parameters
TheDrummer: Rocinante 12B
Rocinante 12B is designed for engaging storytelling and rich prose. Early testers have reported: - Expanded vocabulary with unique and expressive word choices - Enhanced creativity for vivid narrativ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000017
- Completion: $0.00000043
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.2 1B Instruct
Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Its smaller size allows ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000027
- Completion: $0.0000002
Capabilities
- Context Length: 60,000 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.2 90B Vision Instruct
The Llama 90B Vision model is a top-tier, 90-billion-parameter multimodal model designed for the most challenging visual reasoning and language tasks. It offers unparalleled accuracy in image captioni...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Llama3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000035
- Completion: $0.0000004
- Image: $0.0005058
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.2 3B Instruct (free)
Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.2 3B Instruct
Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.00000002
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.2 11B Vision Instruct
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels in tasks such as image captioning and visual question answe...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Llama3
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000000049
- Completion: $0.000000049
- Image: $0.00007948
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Qwen2.5 72B Instruct
Qwen2.5 72B is part of the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: - Significantly more knowledge and greatly improved capabilities in coding an...
View Details
Architecture
- Modality: text->text
- Tokenizer: Qwen
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000007
- Completion: $0.00000026
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
NeverSleep: Lumimaid v0.2 8B
Lumimaid v0.2 8B is a finetune of [Llama 3.1 8B](/models/meta-llama/llama-3.1-8b-instruct) with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of this...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000009
- Completion: $0.0000006
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Mistral: Pixtral 12B
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Mistral
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000001
- Image: $0.0001445
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Cohere: Command R+ (08-2024)
command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Cohere
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,000 tokens
- Moderated: Yes
Supported Parameters
Cohere: Command R (08-2024)
command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Cohere
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000006
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,000 tokens
- Moderated: Yes
Supported Parameters
Qwen: Qwen2.5-VL 7B Instruct
Qwen2.5 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements: - SoTA understanding of images of various resolution & ratio: Qwen2.5-VL achieves state-of-the-art performanc...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Qwen
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000002
- Image: $0.0001445
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Sao10K: Llama 3.1 Euryale 70B v2.2
Euryale L3.1 70B v2.2 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.1](/models/sao10k/l3-euryale-70b).
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000065
- Completion: $0.00000075
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Microsoft: Phi-3.5 Mini 128K Instruct
Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available websites data, with a ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000001
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Nous: Hermes 3 70B Instruct
Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, rea...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000003
Capabilities
- Context Length: 65,536 tokens
- Moderated: No
Supported Parameters
Nous: Hermes 3 405B Instruct (free)
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Nous: Hermes 3 405B Instruct
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000001
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
OpenAI: ChatGPT-4o
OpenAI ChatGPT 4o is continually updated by OpenAI to point to the current version of GPT-4o used by ChatGPT. It therefore differs slightly from the API version of [GPT-4o](/models/openai/gpt-4o) in t...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000005
- Completion: $0.000015
- Image: $0.007225
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
Sao10K: Llama 3 8B Lunaris
Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge. Creat...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000004
- Completion: $0.00000005
Capabilities
- Context Length: 8,192 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4o (2024-08-06)
The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format. Read more [here](https://openai.com/index/introducin...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
- Image: $0.003613
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
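The description above notes support for supplying a JSON schema via response_format. A sketch of what that might look like through this provider: the `responseFormat` option name is an assumption, while the payload follows OpenAI's documented structured-outputs shape.

```ts
import { generateText } from 'webllm';

// Hypothetical structured-output call. `responseFormat` is an assumed
// option name and the model ID is illustrative; the json_schema
// payload mirrors OpenAI's structured-outputs format.
const result = await generateText({
  model: 'openai/gpt-4o-2024-08-06',
  prompt: 'Extract the city and country from: "She flew to Lyon, France."',
  responseFormat: {
    type: 'json_schema',
    json_schema: {
      name: 'location',
      schema: {
        type: 'object',
        properties: {
          city: { type: 'string' },
          country: { type: 'string' },
        },
        required: ['city', 'country'],
        additionalProperties: false,
      },
    },
  },
});
```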
Meta: Llama 3.1 405B (base)
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This is the base 405B pre-trained version. It has demonstrated strong performance compared to leading closed-sourc...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000004
- Completion: $0.000004
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 32,768 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.1 8B Instruct
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to leading c...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.00000003
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.1 405B Instruct
The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs. Meta's latest class ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000035
- Completion: $0.0000035
Capabilities
- Context Length: 130,815 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3.1 70B Instruct
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue usecases. It has demonstrated strong perfor...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000004
- Completion: $0.0000004
Capabilities
- Context Length: 131,072 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral Nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chin...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000002
- Completion: $0.00000004
Capabilities
- Context Length: 131,072 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4o-mini
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000006
- Image: $0.000217
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o-mini (2024-07-18)
GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more ...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.00000015
- Completion: $0.0000006
- Image: $0.007225
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
Google: Gemma 2 27B
Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini). Gemma models are well-suited for a variety of text generation ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Gemini
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000065
- Completion: $0.00000065
Capabilities
- Context Length: 8,192 tokens
- Moderated: No
Supported Parameters
Google: Gemma 2 9B
Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class. Designed for a wide variety of tasks, it empowers developers...
View Details
Architecture
- Modality: text->text
- Tokenizer: Gemini
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000009
Capabilities
- Context Length: 8,192 tokens
- Moderated: No
Supported Parameters
Sao10k: Llama 3 Euryale 70B v2.1
Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). - Better prompt adherence. - Better anatomy / spatial awareness. - Adapts much better to unique and c...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000148
- Completion: $0.00000148
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 8,192 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral 7B Instruct (free)
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. *Mistral 7B Instruct has multiple version variants, and this is intended to be the latest v...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0
- Completion: $0
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral 7B Instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. *Mistral 7B Instruct has multiple version variants, and this is intended to be the latest v...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000028
- Completion: $0.000000054
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
NousResearch: Hermes 2 Pro - Llama-3 8B
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mod...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000000025
- Completion: $0.00000008
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 2,048 tokens
- Moderated: No
Supported Parameters
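Hermes 2 Pro is trained for function calling and JSON mode, which pairs naturally with a tool definition in the request. A sketch: the `tools` option is an assumption about the provider surface and the model ID is illustrative, while the tool schema itself follows the widespread OpenAI-style function-calling convention.

```ts
import { generateText } from 'webllm';

// Hypothetical tool-use call; `tools` is an assumed option name and
// the model ID is illustrative. The tool schema follows the common
// OpenAI-style function-calling format.
const result = await generateText({
  model: 'nousresearch/hermes-2-pro-llama-3-8b',
  prompt: 'What is the weather in Reykjavik right now?',
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Look up current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
});
```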
Mistral: Mistral 7B Instruct v0.3
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of [Mistral 7B Instruct v0.2](/models/mistralai/mistral-7b-instruct-v0....
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000002
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 4,096 tokens
- Moderated: No
Supported Parameters
Microsoft: Phi-3 Mini 128K Instruct
Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, i...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000001
- Completion: $0.0000001
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
Microsoft: Phi-3 Medium 128K Instruct
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference a...
View Details
Architecture
- Modality: text->text
- Tokenizer: Other
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000001
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4o
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while bei...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.0000025
- Completion: $0.00001
- Image: $0.003613
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 16,384 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o (extended)
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while bei...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000006
- Completion: $0.000018
- Image: $0.007225
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 64,000 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4o (2024-05-13)
GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while bei...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image, file
- Output: text
Pricing (per token)
- Prompt: $0.000005
- Completion: $0.000015
- Image: $0.007225
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Meta: LlamaGuard 2 8B
This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can do both prompt and response...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000002
Capabilities
- Context Length: 8,192 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3 8B Instruct
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases. It has demonstrated strong performa...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000003
- Completion: $0.00000006
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Meta: Llama 3 70B Instruct
Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue usecases. It has demonstrated strong perform...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama3
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000003
- Completion: $0.0000004
Capabilities
- Context Length: 8,192 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Mistral: Mixtral 8x22B Instruct
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its s...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000006
Capabilities
- Context Length: 65,536 tokens
- Moderated: No
Supported Parameters
WizardLM-2 8x22B
WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000048
- Completion: $0.00000048
Capabilities
- Context Length: 65,536 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4 Turbo
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to December 2023.
View Details
Architecture
- Modality: text+image->text
- Tokenizer: GPT
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00001
- Completion: $0.00003
- Image: $0.01445
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Anthropic: Claude 3 Haiku
Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results [here](https:/...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.00000125
- Image: $0.0004
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Anthropic: Claude 3 Opus
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding. See the launch announcement and benchmark results...
View Details
Architecture
- Modality: text+image->text
- Tokenizer: Claude
- Input: text, image
- Output: text
Pricing (per token)
- Prompt: $0.000015
- Completion: $0.000075
- Image: $0.024
Capabilities
- Context Length: 200,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Mistral Large
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch ann...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000002
- Completion: $0.000006
Capabilities
- Context Length: 128,000 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4 Turbo Preview
The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023. **Note:** heavily rate limited by Ope...
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00001
- Completion: $0.00003
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-3.5 Turbo (older v0613)
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.000002
Capabilities
- Context Length: 4,095 tokens
- Max Completion: 4,096 tokens
- Moderated: No
Supported Parameters
Mistral Small
With 22 billion parameters, Mistral Small v24.09 offers a convenient mid-point between [Mistral NeMo 12B](/mistralai/mistral-nemo) and [Mistral Large 2](/mistralai/mistral-large), providing a cost-eff...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000006
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Mistral Tiny
Note: This model is being deprecated. Recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b). This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-t...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000025
- Completion: $0.00000025
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Mistral: Mistral 7B Instruct v0.2
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), wi...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000002
- Completion: $0.0000002
Capabilities
- Context Length: 32,768 tokens
- Moderated: No
Supported Parameters
Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parame...
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000054
- Completion: $0.00000054
Capabilities
- Context Length: 32,768 tokens
- Max Completion: 16,384 tokens
- Moderated: No
Supported Parameters
Noromaid 20B
A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge. #merge #uncensored
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama2
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001
- Completion: $0.00000175
Capabilities
- Context Length: 4,096 tokens
- Moderated: No
Supported Parameters
Goliath 120B
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale. Credits to - [@chargoddard](https://huggingface.co/chargoddard) for developing the fra...
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama2
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000006
- Completion: $0.000008
Capabilities
- Context Length: 6,144 tokens
- Max Completion: 1,024 tokens
- Moderated: No
Supported Parameters
Auto Router
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or ...
View Details
Architecture
- Modality: text->text
- Tokenizer: Router
- Input: text
- Output: text
Pricing (per token)
- Prompt: $-1 (variable; billed at the routed model's rate)
- Completion: $-1 (variable; billed at the routed model's rate)
Capabilities
- Context Length: 2,000,000 tokens
- Moderated: No
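The Auto Router's negative price entries reflect that cost depends on whichever model the meta-model selects for each request. A sketch, assuming 'openrouter/auto' as the routing ID (illustrative) and a per-call `model` option:

```ts
import { generateText } from 'webllm';

// Hypothetical: let the meta-model pick the target model.
// 'openrouter/auto' is an illustrative routing ID; the actual cost
// is whatever the selected downstream model charges.
const result = await generateText({
  model: 'openrouter/auto',
  prompt: 'Draft a polite follow-up email about an overdue invoice.',
});
```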
OpenAI: GPT-4 Turbo (older v1106)
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to April 2023.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00001
- Completion: $0.00003
Capabilities
- Context Length: 128,000 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-3.5 Turbo Instruct
This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000015
- Completion: $0.000002
Capabilities
- Context Length: 4,095 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Mistral: Mistral 7B Instruct v0.1
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
View Details
Architecture
- Modality: text->text
- Tokenizer: Mistral
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000011
- Completion: $0.00000019
Capabilities
- Context Length: 2,824 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-3.5 Turbo 16k
This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to Sep 2021.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000003
- Completion: $0.000004
Capabilities
- Context Length: 16,385 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Mancer: Weaver (alpha)
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama2
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.000001125
- Completion: $0.000001125
Capabilities
- Context Length: 8,000 tokens
- Max Completion: 2,000 tokens
- Moderated: No
Supported Parameters
ReMM SLERP 13B
A recreation trial of the original MythoMax-L2-13B but with updated models. #merge
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama2
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000045
- Completion: $0.00000065
Capabilities
- Context Length: 6,144 tokens
- Moderated: No
Supported Parameters
MythoMax 13B
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge
View Details
Architecture
- Modality: text->text
- Tokenizer: Llama2
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00000006
- Completion: $0.00000006
Capabilities
- Context Length: 4,096 tokens
- Moderated: No
Supported Parameters
OpenAI: GPT-4 (older v0314)
GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00003
- Completion: $0.00006
Capabilities
- Context Length: 8,191 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-3.5 Turbo
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.0000005
- Completion: $0.0000015
Capabilities
- Context Length: 16,385 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
OpenAI: GPT-4
OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and adva...
View Details
Architecture
- Modality: text->text
- Tokenizer: GPT
- Input: text
- Output: text
Pricing (per token)
- Prompt: $0.00003
- Completion: $0.00006
Capabilities
- Context Length: 8,191 tokens
- Max Completion: 4,096 tokens
- Moderated: Yes
Supported Parameters
Key Capabilities
Section titled “Key Capabilities”
- Vision - Image processing and understanding
- Tools - Function calling and external tool integration
- Free - Zero-cost models available for development and production
Filter models above by provider, modality, context length, and capabilities.
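The same filters apply when choosing among these models programmatically. A minimal sketch over a hand-rolled catalog type; the `ModelEntry` shape and the sample rows are illustrative, not a WebLLM export:

```ts
// Hypothetical catalog entry mirroring the fields shown above.
interface ModelEntry {
  id: string;
  modality: 'text->text' | 'text+image->text';
  contextLength: number;
  promptPrice: number; // USD per token; 0 marks free models
}

const catalog: ModelEntry[] = [
  { id: 'meta-llama/llama-3.3-70b-instruct:free', modality: 'text->text', contextLength: 131_072, promptPrice: 0 },
  { id: 'qwen/qwen2.5-vl-72b-instruct', modality: 'text+image->text', contextLength: 32_768, promptPrice: 0.00000003 },
];

// Example: free models with at least 100k tokens of context.
const candidates = catalog.filter(
  (m) => m.promptPrice === 0 && m.contextLength >= 100_000,
);
console.log(candidates.map((m) => m.id));
```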