How Many Tokens?


GPT-4 Turbo: token counter & pricing

OpenAI · exact (uses official tokenizer) · pricing as of 2026-04-26.

Provider: OpenAI
API model ID: gpt-4-turbo-2024-04-09
Context window: 128,000 tokens
Input price: $10.00 per 1M tokens
Output price: $30.00 per 1M tokens
Tokenizer accuracy: exact (uses official tokenizer)
Pricing as of: 2026-04-26


What is GPT-4 Turbo?

GPT-4 Turbo (gpt-4-turbo-2024-04-09) is the predecessor to GPT-4o. Same 128k context window, same general capability tier, but uses the older cl100k_base tokenizer and is priced significantly higher than GPT-4o on both input and output.

How tokens are counted here

GPT-4 Turbo uses OpenAI's cl100k_base tokenizer (the same one as GPT-3.5 and the original GPT-4). We compute counts in your browser via js-tiktoken. Counts are exact.

The cl100k_base tokenizer produces 5-10% more tokens than o200k_base (the GPT-4o tokenizer) for the same English text. Migrating a prompt to GPT-4o therefore saves twice over: the per-token rate is lower, and the same text encodes to fewer tokens. The effective GPT-4o cost is $X × (4o input rate / Turbo input rate) × (o200k token count / cl100k token count), and that final factor is below 1.
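The combined effect can be put into numbers. A sketch — the GPT-4o rate and the 7% token inflation used in the example are illustrative placeholders (check current pricing), only Turbo's $10 rate comes from this page:

```typescript
// GPT-4 Turbo input rate from this page, $ per 1M tokens.
const TURBO_INPUT = 10.0;

// Fraction saved per prompt when moving to GPT-4o, combining the
// rate difference with the tokenizer difference. `tokenInflation` is
// how many cl100k tokens you get per o200k token (e.g. 1.07 for +7%).
export function effectiveSavings(
  gpt4oInput: number,
  tokenInflation: number
): number {
  const costRatio = (gpt4oInput / TURBO_INPUT) / tokenInflation;
  return 1 - costRatio;
}

// With a hypothetical $2.50/1M GPT-4o rate and 7% token inflation:
// cost ratio = 0.25 / 1.07 ≈ 0.234, i.e. roughly 77% cheaper per prompt.
```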

Should you still use GPT-4 Turbo?

Almost certainly not. GPT-4o is cheaper on both input and output, more token-efficient for the same text (o200k_base vs cl100k_base), and in the same general capability tier with the same 128k context window.

The main reason to keep GPT-4 Turbo in your stack is API stability for production workloads pinned to a specific model version. If your prompts and evals are tuned for Turbo's exact behavior, switching costs may not be worth the savings — but for new work, default to GPT-4o.

Pricing notes

At $10 input / $30 output per million, GPT-4 Turbo is among the most expensive non-frontier models still in production. Of the models tracked here, only one frontier model costs more: Claude Opus 4.7 ($15 input / $75 output).
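At those rates, per-request cost is straightforward to compute. A quick sketch — the token counts in the example are illustrative:

```typescript
// GPT-4 Turbo rates from this page, $ per 1M tokens.
const INPUT_RATE = 10.0;
const OUTPUT_RATE = 30.0;

export function requestCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE) / 1_000_000;
}

// e.g. a 100k-token prompt producing a 10k-token reply:
console.log(requestCost(100_000, 10_000)); // ≈ $1.30 per request
```

Note how the 3× output multiplier dominates for generation-heavy workloads: long completions cost far more than long prompts of the same size.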

Common questions

Why is GPT-4 Turbo more expensive than the newer GPT-4o?

OpenAI's GPT-4o launch substantially undercut Turbo's pricing. Turbo's price didn't drop in response — it's left high to nudge migration. Treat this as OpenAI signaling: move to GPT-4o.

Will GPT-4 Turbo be deprecated?

OpenAI hasn't published a deprecation date as of this writing. The historical pattern: predecessor models stay available for 12-24 months after a successor launches, then enter a deprecation window with an announced retirement date.

Does GPT-4 Turbo support vision and JSON mode?

Yes to both. Functionally similar to GPT-4o; just older and more expensive.

Compare GPT-4 Turbo to other models