# How Many Tokens?

> Free multi-provider AI token counter and cost estimator. Paste a prompt; get instant token counts and per-million-call cost across 15+ models from Anthropic, OpenAI, Google, Meta, Mistral, DeepSeek, and Alibaba.

## What this site does

- Counts tokens for the same prompt across multiple AI providers in one view.
- Estimates cost per call and per million calls based on a user-adjustable input/output split.
- Publishes a confidence label per row (exact / ≈±3%) so users know which counts come from official tokenizers and which are approximations.
- Maintains a public pricing changelog with every observed price change.

## How tokenization works here

- **OpenAI (GPT-4o, GPT-4o mini, GPT-4 Turbo)**: tokens computed in the browser using js-tiktoken (the o200k_base or cl100k_base vocabulary, depending on the model). Exact.
- **Anthropic (Claude Opus, Sonnet, Haiku)**: tokens computed by Anthropic's `/v1/messages/count_tokens` endpoint via our serverless proxy. Exact.
- **Google (Gemini 2.5 Pro, Flash)**: tokens computed by Google's `models.countTokens` endpoint via our serverless proxy. Exact.
- **Meta Llama, Mistral, DeepSeek, Qwen**: tokens computed in the browser using each model's published BPE/SentencePiece vocabulary. Approximation within ±3% of the reference tokenizer.

## Pages

- `/` — main counter
- `/methodology/` — full tokenizer and pricing methodology
- `/pricing-changelog/` — every pricing change with date and source
- `/[model-slug]/` — per-model deep-dive page (one per supported model)
- `/[question-slug]/` — answer pages for common token-related questions
- `/about/`, `/privacy/`

## Data

- Pricing source of truth: `data/pricing.json` in the repo, mirrored to Cloudflare KV at deploy time.
- Pricing is snapshotted manually from each provider's official pricing page. The snapshot date is shown in the UI.

## Privacy

- No cookies. No third-party tracking beyond Cloudflare Web Analytics (no PII, no fingerprinting).
- Prompt content is never logged.
- Browser-side counts (OpenAI, OSS) never leave the user's device.
- Worker-side counts (Anthropic, Gemini) send the prompt only to the provider's tokenization endpoint and never persist it.
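The per-call and per-million-call estimate described under "What this site does" reduces to simple arithmetic over published per-million-token prices. A minimal sketch of that math (the names and the $3/$15 prices are illustrative placeholders, not the site's actual code or a pricing snapshot):

```typescript
// Hypothetical cost estimator mirroring the cost math described above.
// Prices are expressed per million tokens, as providers publish them.
interface ModelPricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

// Cost of one call in USD, given token counts and per-MTok prices.
function costPerCall(
  inputTokens: number,
  outputTokens: number,
  pricing: ModelPricing,
): number {
  return (
    (inputTokens * pricing.inputPerMTok +
      outputTokens * pricing.outputPerMTok) /
    1_000_000
  );
}

// Example: 1,000 input + 500 output tokens at placeholder prices
// of $3 / $15 per MTok.
const perCall = costPerCall(1000, 500, { inputPerMTok: 3, outputPerMTok: 15 });
const perMillionCalls = perCall * 1_000_000;
// perCall → 0.0105 USD; perMillionCalls → 10,500 USD
```

The user-adjustable input/output split in the UI simply varies `inputTokens` and `outputTokens` before applying this formula.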