Estimate request cost across text, images, and reasoning — with Standard or Batch pricing.
Prompt Cost Calculator
Short instruction: Pick your model, enter your input/output (and reasoning/image) tokens, choose Standard or Batch pricing, toggle cached input if relevant, then calculate the total instantly below.
How to Calculate OpenAI Request Cost
Full step-by-step guide
1. Enter your prompts. Add your System prompt (optional) and main Prompt text. If you already have a model response, paste it into the Response field to estimate output tokens.
2. Add images (if applicable). Upload up to 10 images (JPEG/PNG/WebP) or add image URLs; the calculator estimates the image tokens automatically from resolution and detail level.
3. Select model and pricing mode. Choose the exact OpenAI model from the list, pick between Standard and Batch pricing, and enable "Use cached input" if your prompt is reused across requests.
4. Include reasoning tokens (optional). For reasoning-capable models (like GPT-5), enter the number of reasoning tokens if you know it; they're billed at the output rate.
5. Adjust the number of requests. Set Requests (N) to estimate the total cost of multiple runs of the same prompt.
6. Click "Calculate cost". The calculator analyzes all inputs and displays the total cost, token counts, and a per-category breakdown for input, output, reasoning, and image tokens.
7. View or copy results. Use "Show raw JSON" for a structured breakdown, or "Copy JSON" to export the result for API usage or documentation.
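The steps above boil down to simple arithmetic. A minimal sketch of the per-category breakdown, where the field names and per-1M-token rates are illustrative placeholders, not the tool's exact schema or live OpenAI prices:

```python
import json

# Illustrative USD rates per 1M tokens; real prices depend on model and mode.
RATES = {"input": 2.50, "cached_input": 1.25, "output": 10.00}

def breakdown(input_t, output_t, reasoning_t=0, image_t=0, cached_t=0, requests=1):
    """Per-category cost breakdown, similar in spirit to the calculator's
    raw-JSON view (field names here are assumptions, not its exact schema)."""
    costs = {
        "input": (input_t - cached_t) * RATES["input"],
        "cached_input": cached_t * RATES["cached_input"],
        "image": image_t * RATES["input"],           # image tokens bill as input
        "output": output_t * RATES["output"],
        "reasoning": reasoning_t * RATES["output"],  # billed at the output rate
    }
    per_request = sum(costs.values()) / 1_000_000
    return {
        "per_category_usd": {k: v / 1_000_000 for k, v in costs.items()},
        "requests": requests,
        "total_usd": round(per_request * requests, 6),
    }

print(json.dumps(breakdown(1_000, 300, reasoning_t=200, image_t=255), indent=2))
```

Multiplying by `requests` scales the same prompt across N runs, matching step 5.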
💡 FAQ — OpenAI Request Cost
Common questions about token calculation, batch pricing, cached input, image costs, and reasoning tokens.
How do I calculate the cost of an OpenAI API request?
You calculate the cost by multiplying input and output tokens by the current prices for your selected model, for example GPT-4o.
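In code, this is a two-line multiplication. A sketch with placeholder rates (check the live pricing page for real numbers):

```python
# Placeholder USD rates per 1M tokens; not live OpenAI pricing.
PRICE_IN, PRICE_OUT = 2.50, 10.00

input_tokens, output_tokens = 1_200, 350
cost = (input_tokens * PRICE_IN + output_tokens * PRICE_OUT) / 1_000_000
print(f"${cost:.6f}")  # $0.006500
```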
Can I estimate the OpenAI request cost before sending the request?
Yes. You can approximate the token count of your prompt and apply the model's pricing, or simply use this calculator to get a near-accurate cost estimate.
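For a quick pre-flight estimate without running a tokenizer, a common rule of thumb is roughly 4 characters per token for English text. This is a heuristic, not an exact count:

```python
def rough_tokens(text: str) -> int:
    """~4 characters per token is a common rule of thumb for English.
    Use a real tokenizer (e.g. the tiktoken library) for exact counts."""
    return max(1, round(len(text) / 4))

print(rough_tokens("Summarize the following article in three bullet points."))
```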
How much does an image input cost in OpenAI?
Image inputs (vision / multimodal) are priced differently from plain text. The final cost depends on the model (e.g. GPT-4o, o3-mini) and image resolution.
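Image token counts can be approximated with the 512-pixel tile scheme OpenAI has documented for GPT-4o-class vision input (a flat base charge plus a per-tile charge at high detail). The exact constants differ by model, so treat this as a sketch:

```python
import math

def image_tokens(width: int, height: int, detail: str = "high") -> int:
    """Estimate vision input tokens via the 512-px tile scheme documented
    for GPT-4o-class models (base 85 + 170 per tile at high detail).
    Constants vary by model, so this is an approximation."""
    if detail == "low":
        return 85  # low detail is a flat base charge regardless of size
    # Scale to fit within 2048x2048, then scale the shortest side to 768.
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    scale2 = min(1.0, 768 / min(w, h))
    w, h = w * scale2, h * scale2
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(image_tokens(512, 512))    # one tile
print(image_tokens(1024, 1024))  # scaled to 768x768, four tiles
```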
What is OpenAI batch pricing?
The Batch API processes many requests asynchronously at a lower rate than standard pricing, making it ideal for bulk or offline workloads.
What is cached input (prompt caching)?
Cached input (prompt caching) gives a discount when you reuse the same prompt prefix across multiple requests: the repeated portion is billed at a reduced cached-input rate, while only the new or changed part is billed at the regular input rate.
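A minimal sketch of the discount arithmetic. The rates here are placeholders; the key point is that the cached-input rate is a fraction of the regular input rate:

```python
def request_cost(input_tokens, output_tokens, cached_tokens=0,
                 price_in=2.50, price_cached=1.25, price_out=10.00):
    """Cost in USD for one request; rates are per 1M tokens and are
    placeholders, not live OpenAI pricing."""
    uncached = input_tokens - cached_tokens
    return (uncached * price_in
            + cached_tokens * price_cached
            + output_tokens * output_tokens * 0 + output_tokens * price_out) / 1_000_000

# Reusing an 8k-token prefix across requests shrinks the input bill.
print(request_cost(10_000, 500))                       # no caching
print(request_cost(10_000, 500, cached_tokens=8_000))  # cached prefix
```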
How are reasoning tokens priced?
Reasoning-capable models (such as o1, o1-mini, o3-pro, o4-mini, or GPT-5) may charge separately for reasoning steps, which are billed at the output token rate.
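Since reasoning tokens bill at the output rate, they simply add to the output term. A sketch with caller-supplied placeholder rates:

```python
def cost_with_reasoning(input_t, output_t, reasoning_t, price_in, price_out):
    """USD cost with per-1M-token rates. Reasoning tokens are not returned
    in the visible response but are billed at the output rate."""
    return (input_t * price_in + (output_t + reasoning_t) * price_out) / 1_000_000

print(cost_with_reasoning(1_000, 200, 300, 2.50, 10.00))
```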