How Many Tokens?


GPT-5: token counter & pricing

OpenAI · exact (uses official tokenizer) · pricing as of 2026-04-27.

Provider: OpenAI
API model ID: gpt-5
Context window: 400,000 tokens
Input price: $1.25 per 1M tokens
Output price: $10.00 per 1M tokens
Tokenizer accuracy: exact (uses official tokenizer)
Pricing as of: 2026-04-27

Open the counter to count tokens for GPT-5 in real time.

What is GPT-5?

GPT-5 is the original model in OpenAI's GPT-5 line, launched in 2025 as the successor to the GPT-4 family. It now sits at a mid-tier capability level, offers a 400K-token context window, and is priced as a workhorse for production AI.

GPT-5 has since been joined by GPT-5.1, 5.2, 5.3, 5.4, and 5.5 — incremental improvements over the same base. For new work, default to GPT-5.5 (current flagship) or GPT-5.4 (cheaper). GPT-5 itself is still in the API for stability and as a budget-aware alternative.

How tokens are counted here

GPT-5 uses OpenAI's o200k_base tokenizer, shared across the entire GPT-5 family and GPT-4o. Counts are computed in your browser via js-tiktoken, so they are exact rather than estimates.
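
If you want the same count outside the browser UI, a minimal sketch using js-tiktoken's getEncoding entry point looks like this (the sample text and the countTokens helper are illustrative, not part of any API):

```ts
import { getEncoding } from "js-tiktoken";

// o200k_base is the encoding this page reports for GPT-5 (also used by GPT-4o).
const enc = getEncoding("o200k_base");

// Counting tokens is just encoding the text and taking the length.
function countTokens(text: string): number {
  return enc.encode(text).length;
}

console.log(countTokens("How many tokens does GPT-5 see here?"));
```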

Pricing notes

$1.25 input / $10.00 output per 1M tokens. Cached input is $0.125 per 1M tokens (10% of the standard input rate).

This is a single-tier price with no long-context surcharge above 128K. That makes GPT-5 attractive for workloads that routinely run above 128K input tokens, where GPT-5.4 / GPT-5.5 charge roughly 2× on input.
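
As a worked example, here is a small TypeScript sketch of the cost arithmetic, with the rates copied from this page; the RATES object and the estimateCostUSD helper are hypothetical names used for illustration:

```ts
// GPT-5 rates from this page, in $ per 1M tokens (as of 2026-04-27).
const RATES = {
  input: 1.25,        // uncached input
  cachedInput: 0.125, // cached input (10% of the standard input rate)
  output: 10.0,       // output
};

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  cachedTokens = 0
): number {
  const uncached = inputTokens - cachedTokens;
  return (
    (uncached * RATES.input +
      cachedTokens * RATES.cachedInput +
      outputTokens * RATES.output) /
    1_000_000
  );
}

// Example: a 200K-token prompt, half served from cache, producing 2K output tokens.
console.log(estimateCostUSD(200_000, 2_000, 100_000).toFixed(4)); // "0.1575"
```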

When to use GPT-5

Use it when you want a stable, pinned production model at a predictable price, or when your prompts routinely exceed 128K input tokens and you want to avoid the long-context surcharge that GPT-5.4 / GPT-5.5 apply.

When not to use it: for new projects that need the strongest coding, tool use, and reasoning behavior in the family, default to GPT-5.5 (or GPT-5.4 if price matters more); GPT-5 trails its successors on those tasks.

Common questions

Is GPT-5 the same as GPT-5.5?

No. GPT-5 is the original release; GPT-5.5 is the latest revision (April 2026). They share the same tokenizer and broadly similar capability tier, but GPT-5.5 has measurably better coding, tool use, and reasoning behavior.

What's the cheapest GPT-5 variant?

GPT-5 Nano at $0.05 input / $0.40 output per 1M tokens — currently the cheapest input rate of any exact-tokenizer model in this counter.

Does GPT-5 use the same tokenizer as GPT-4o?

Yes — both use o200k_base. Token counts match for the same input text across both models; only per-token price differs. (Older GPT-4 Turbo and GPT-3.5 use the predecessor cl100k_base and produce ~5-10% more tokens for the same text.)
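
To see that difference yourself, here is a short sketch comparing the two encodings via js-tiktoken's getEncoding (the sample text is illustrative):

```ts
import { getEncoding } from "js-tiktoken";

const text = "Tokenizers split the same text differently.";

// o200k_base: used by GPT-5 (per this page) and GPT-4o — counts match across both.
const o200k = getEncoding("o200k_base").encode(text).length;

// cl100k_base: the predecessor encoding used by GPT-4 Turbo and GPT-3.5.
const cl100k = getEncoding("cl100k_base").encode(text).length;

console.log({ o200k, cl100k }); // cl100k typically runs ~5-10% higher on English prose
```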

Compare GPT-5 to other models