AI-Powered Subtitle Translation

GPT Subtitler

AI-Powered Subtitling: Fast, Accurate, Effortless.
Turn your videos into global content with precise,
context-aware translations

Context-Aware Translation Excellence

Leveraging cutting-edge large language models to understand the meaning behind words, delivering translations that capture nuance, idioms, and cultural references.

Translation Comparison

Context: Spanish idiom for being distracted or daydreaming

Original Text
(Spanish)
Estar en las nubes no me ayuda a pasar el examen.

Being distracted or daydreaming won't help me pass the exam

AI Translation
(Spanish to English)
Daydreaming won't help me pass the exam.
Machine Translation
(Spanish to English)
Being in the clouds doesn't help me pass the exam.

Powerful Translation Features

Experience state-of-the-art subtitle translation with our comprehensive feature set.

Audio Transcription

Convert audio and video files directly to accurate subtitles using Whisper AI, with support for multiple languages.

100+ Languages

Translate your subtitles to and from over 100 languages with high accuracy.

Advanced Settings

Fine-tune your translations with customizable settings and preferences for different content types and audiences.

Batch Translation

Process multiple subtitle files simultaneously with consistent terminology and style across your entire content library.

Supported LLM Providers

Choose your preferred LLM provider to translate your subtitles.

OpenAI

Claude

Gemini

DeepSeek

Grok

OpenAI Compatible

How to Use GPT Subtitler

Complete your subtitle translations in just four simple steps.

1

Upload Subtitle File

Upload your subtitle file in any supported format to begin the translation process.

2

Select Translation Options

Choose your target language and adjust translation settings as needed.

3

Process Translation

Our AI models will translate your subtitles while preserving timing and formatting.

4

Download Translated Subtitles

Download your translated subtitles in your preferred format.

Frequently Asked Questions

Find answers to common questions about GPT Subtitler.

  1. Sign in to your account or create a new account if you don't have one.
  2. Click on "Translate a new subtitle" to create a new translation project.
  3. Upload your subtitle file in the SRT format or paste the subtitle content directly into the editor.
  4. Click on "Create" to start the translation process.
  5. If you have an OpenAI/Claude/Gemini API key, you can enter it in the settings. Otherwise, you need to use your account token for translations.
  6. Configure the translation settings, such as source and target languages, start index, prompt, batch size and model.
  7. Click on "Translate" to begin the translation.
  8. The translation process will start, and you can monitor the progress in real-time. The translated subtitles will be displayed in the "Translation Messages" section.
  9. You can stop the translation process at any time and resume it later using the "start index" option.
  10. Once the translation is complete, you can download the translated subtitles in your desired format.

GPT Subtitler supports the SRT (SubRip Text) file format for subtitle uploads. If your subtitles are in a different format, you can convert your subtitle file at subtitletools.
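SRT is a simple plain-text format: each cue is a sequence number, a timing line of the form HH:MM:SS,mmm --> HH:MM:SS,mmm, and one or more text lines, with cues separated by blank lines. As a rough illustration only (this is not GPT Subtitler's actual code), a minimal SRT parser in Python might look like:

```python
import re

# Matches an SRT timing line, e.g. "00:00:01,000 --> 00:00:04,000"
TIMING = re.compile(
    r"(\d{2}:\d{2}:\d{2},\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2},\d{3})"
)

def parse_srt(text):
    """Parse SRT content into a list of (index, start, end, text) cues."""
    cues = []
    # Cues are separated by one or more blank lines
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        m = TIMING.match(lines[1])
        if not m:
            continue  # second line must be a timing line
        cues.append((int(lines[0]), m.group(1), m.group(2), "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:04,000
Estar en las nubes no me ayuda a pasar el examen.
"""
print(parse_srt(sample))
```

A converter to or from another subtitle format would work on the same cue structure, which is why translation can preserve timing while only the text lines change.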

GPT Subtitler offers different pricing plans to suit your needs. We have a free subscription option that grants a certain amount of Tokens daily, enough to translate a ~30-minute video using the GPT-4o-mini model. For higher usage and advanced features, we offer Basic and Premium subscription plans. Please visit our pricing page for more details on the available plans and their pricing.

Insufficient Tokens

Error: Insufficient tokens. Please purchase more tokens or upgrade your subscription.

  • Solution: Purchase more tokens, reduce batch size or use a cheaper model like gpt-4o-mini
  • Prevention: Enable token estimation before translation to check costs
Usage Limit Exceeded

Error: Daily subscription usage limit exceeded. Please try again later or use your account Token to translate.

  • Solution: Wait for daily subscription usage to reset, or use account tokens instead
  • Prevention: Monitor your usage in settings page
No API Key or Subscription

Error: Subscription tier is free, user has chosen not to use tokens, and no API key provided. Please upgrade your subscription or choose to use tokens.

  • Solution: Provide API key, use tokens, or upgrade subscription
  • Prevention: Check translation settings before starting

Maximum Tokens Reached

Error: Maximum tokens reached. Unable to parse the response. Please reduce the batch size in the advanced settings and try again.

  • Solution: Reduce batch size in translation settings, or disable Reflection or Reasoning mode
  • Prevention: Use models with larger Max Generation Length for long batches
Parse Error

SyntaxError: Unable to parse JSON response. Please retry using the "Fill Missing" feature. This may be due to the model outputting an incorrect format, which is an occasional error. If the problem persists, try reducing the batch size or using a different model.

  • Solution: The model may have hallucinated, use "Fill Missing" feature to retry failed sections, or use a different model
  • Prevention: Try reducing batch size or switching to a different model, or enable and generate a few-shot example using the few-shot example generator
Invalid API Key

Please provide a valid API key based on the selected model.

  • Solution: Check your API key and ensure it matches the selected model
  • Prevention: Verify your API key before starting translation
Invalid Model Selected

Invalid model selected. Please choose a valid model.

  • Solution: Choose another available model from the dropdown list
  • Prevention: If you need to use this specific model, please contact support
API Key Daily Limit Reached

Error: API key daily usage limit reached. Please upgrade your subscription or try again tomorrow.

  • Solution: Upgrade your subscription to avoid this limit, or wait until tomorrow
  • Prevention: Consider upgrading your subscription to avoid daily limits
Unknown Error

An unknown error occurred while processing the current batch.

  • Solution: Please try again later or switch to a different model
  • Prevention: N/A (server-side issue)

Concurrency Limit Exceeded

The selected concurrency limit is too high for your subscription tier. Please reduce the concurrency limit.

  • Solution: Reduce the concurrent batch limit in translation settings or upgrade your subscription tier for higher concurrency.
  • Prevention: Check the maximum concurrency limit for your tier before setting a higher value.
Model Not Compatible With Concurrency

The selected model is not compatible with concurrent translation. Please select a different model or use your own API key to translate, or disable concurrent translation.

  • Solution: Switch to a different model that supports concurrent processing or disable concurrent translation in the settings.
  • Prevention: For models with rate limitations, use sequential translation mode instead of concurrent.
Request Data Not Found

Concurrent translation request data not found.

  • Solution: Try refreshing the page and initiating the translation again.
  • Prevention: Ensure you have a stable internet connection when starting large translation jobs.
Concurrent Translation Failed

Failed to process concurrent translation.

  • Solution: Check for specific batch errors in the translation progress panel. You can try reducing the concurrency limit or switching to sequential mode.
  • Prevention: For complex translations or when experiencing errors, try reducing the concurrency limit.

Invalid OpenAI Request

Invalid OpenAI request. Please check the prompt and try again; it must contain the word 'json', or it might have exceeded the maximum token limit of this model.

  • Solution: Check if prompt contains 'json' keyword, or reduce batch size
  • Prevention: Use default prompts or validated custom prompts
OpenAI Permission Denied

Your OpenAI API key does not have permission to use the specified resource.

  • Solution: Check API key permissions in OpenAI dashboard
  • Prevention: Ensure API key has correct access levels
Incorrect OpenAI API Key

Incorrect OpenAI API key provided, or you must be a member of an organization to use the API.

  • Solution: Verify API key or ensure organization membership
  • Prevention: Double-check API key format and validity
OpenAI Rate Limit Reached

OpenAI request rate limit reached or you have exceeded your current quota. Please try again later or check your plan and billing details.

  • Solution: Wait a few minutes and try again, or switch to a different model
  • Prevention: Use paid API key or reduce concurrent translations
OpenAI Server Error

The OpenAI server encountered an error while processing your request. Please try again later.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)
OpenAI Services Overloaded

OpenAI's API is temporarily overloaded.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)

Invalid Claude Request

Invalid Claude request. Please check the prompt and try again. Note that Claude only supports temperature values between 0 and 1.0.

  • Solution: Check prompt format and translation settings; note that Claude only supports temperature values between 0 and 1.0
  • Prevention: Use default prompts or validated custom prompts
Claude Permission Denied

Your Claude API key does not have permission to use the specified resource.

  • Solution: Verify API key permissions
  • Prevention: Ensure correct API key access levels
Incorrect Claude API Key

Incorrect Claude API key provided, or you must be a member of an organization to use the API.

  • Solution: Check and update API key
  • Prevention: Verify API key before starting
Claude Rate Limit Reached

Claude request rate limit reached or you have exceeded your current quota. Please try again later or check your plan and billing details.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: Space out translation requests
Claude Server Error

Anthropic's server encountered an error while processing your request. Please try again later.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)
Claude Services Overloaded

Anthropic's API is temporarily overloaded.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)

Invalid Gemini Request

The request body to Gemini is malformed. Your API key may not be valid or the request is missing required parameters.

  • Solution: Check API key and request parameters
  • Prevention: Verify settings before starting
Gemini Permission Denied

Your API key doesn't have the required Gemini permissions. Check that your API key is set and has the right access.

  • Solution: Check API key permissions
  • Prevention: Ensure correct API key setup
Gemini Resource Not Found

The requested Gemini resource wasn't found.

  • Solution: Verify model availability
  • Prevention: Use supported models
Gemini Rate Limit Reached

Gemini request rate limit reached. Please try again later, use other models, or consider using your own Gemini API key. You can apply for a free key from Google at https://aistudio.google.com/app/apikey.

  • Solution: Try again later or use a different model
  • Prevention: Free Gemini models share a daily request limit across all users, so this error becomes more likely during peak usage. Paid experimental models (with -exp in the model name) are still unstable versions and are also more prone to this error; it is recommended to use stable versions. If needed, you can get a free Gemini API key at https://aistudio.google.com/app/apikey
Gemini Server Error

An unexpected error occurred on Google's side. Wait a bit and retry your request or use other models.

  • Solution: Check your API key, wait and retry later, or switch to a different model
  • Prevention: N/A (server-side issue)
Gemini Service Unavailable

The Gemini service may be temporarily overloaded or down. Wait a bit and retry your request or use other models.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)
Gemini Server Error

An error occurred while fetching data from the Gemini API. Please try again later.

  • Solution: Please try again later or switch to a different model
  • Prevention: N/A (server-side issue)
Gemini Blocked by Safety

The content of the file is blocked by Gemini for [SAFETY] reasons. Please try again with a different model or modify the content of the file.

  • Solution: Please try again later or switch to a different model
  • Prevention: N/A (server-side issue)
Gemini Blocked by Other Reasons

The content of the file is blocked by Gemini for [OTHER] reasons. Please try again with a different model or modify the content of the file.

  • Solution: Please try again later or switch to a different model
  • Prevention: N/A (server-side issue)
Gemini Fetch Error

An error occurred while fetching data from the Gemini API.

  • Solution: Please try again later or switch to a different model
  • Prevention: N/A (server-side issue)

Invalid DeepSeek Request

Invalid DeepSeek request format. Please modify your request body according to the error hints.

  • Solution: Check request format
  • Prevention: Use default settings
DeepSeek Authentication Failed

DeepSeek authentication failed. Please check your API key.

  • Solution: Verify API key
  • Prevention: Check API key validity
Insufficient DeepSeek Balance

Insufficient DeepSeek account balance. Please check your account balance and top up.

  • Solution: Top up account
  • Prevention: Monitor account balance
DeepSeek Rate Limit Reached

DeepSeek rate limit reached. Please pace your requests reasonably or temporarily switch to other LLM service providers.

  • Solution: Wait and retry or switch to a different model
  • Prevention: Try using a larger batch size to reduce the number of requests

Invalid Mistral Request

Invalid request to Mistral API

  • Solution: Check request parameters
  • Prevention: Use default settings
Mistral Permission Denied

Permission denied for Mistral API

  • Solution: Check API key permissions
  • Prevention: Verify API key setup
Mistral Rate Limit Reached

Mistral API rate limit reached

  • Solution: Wait and retry or switch to a different model
  • Prevention: Try using a larger batch size to reduce the number of requests

Invalid XAI Request

Invalid xAI request. Please check the prompt and try again.

  • Solution: Check your API key format and validity
  • Prevention: Verify your API key before starting translation
XAI Permission Denied

Your xAI API key does not have permission to use the specified resource.

  • Solution: Verify API key permissions
  • Prevention: Check API key access
XAI Rate Limit Reached

xAI request rate limit reached. Please try again later.

  • Solution: Wait and retry or switch to a different model
  • Prevention: Try using a larger batch size to reduce the number of requests

Invalid Qwen Request

Invalid Qwen request. Please check the prompt and try again.

  • Solution: Check prompt format and translation settings
  • Prevention: Use default prompts or validated custom prompts
Qwen Permission Denied

Your Qwen API key does not have permission to use the specified resource.

  • Solution: Verify API key permissions
  • Prevention: Ensure correct API key access levels
Qwen Rate Limit Reached

Qwen request rate limit reached. Please try again later.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: Space out translation requests
Qwen Server Error

The Qwen server encountered an error while processing your request.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)
Qwen Services Overloaded

Qwen's API is temporarily overloaded.

  • Solution: Wait and retry later or switch to a different model
  • Prevention: N/A (server-side issue)

Missing Third-Party Parameters

Error: Missing parameters for third-party model, please provide the API key, base url and the model name.

  • Solution: Provide API key, base URL, and model name
  • Prevention: Check all required fields are filled
Third-Party Authentication Failed

Third-party API authentication failed. Please check your API key.

  • Solution: Check API key
  • Prevention: Verify API key validity

Timeout Warning

Warning: Translation request has exceeded the maximum time limit. Please try again later or enable the Auto-resume feature.

  • Solution: Enable "Auto-resume" feature or use "Fill Missing"
  • Prevention: N/A (server-side issue)

Google Translate Error

An error occurred while using Google Translate. Please try again later.

  • Solution: Wait a few minutes and try again. If the problem persists, try using a different translation model.
  • Prevention: N/A (server-side issue)
Network Connection Error

Unable to connect to Google Translate. Please try again later.

  • Solution: Wait a few minutes and try again. If the problem persists, try using a different network connection.
  • Prevention: N/A (server-side issue)
Unsupported Language

The selected target language is not supported by Google Translate. Please choose a different language.

  • Solution: Choose a different target language that is supported by Google Translate.
  • Prevention: Check the supported languages list before starting translation.
Invalid Request Format

Invalid translation request format. Please try again.

  • Solution: Try refreshing the page and starting the translation again.
  • Prevention: Ensure the subtitle content is properly formatted before translation.
Missing Target Language

Target language not properly specified in the translation request.

  • Solution: Make sure you have selected a target language before starting translation.
  • Prevention: Always select a target language before starting translation.

Missing Source Language

Source language not properly specified in the translation request.

  • Solution: Make sure you have selected a source language before starting translation.
  • Prevention: Always select a source language before starting translation.
Unsupported Source Language

The source language is not supported by DeepL Translate. Please choose a different language.

  • Solution: Choose a different source language that is supported by DeepL Translate.
  • Prevention: Check the supported languages list before starting translation.
Unsupported Target Language

The target language is not supported by DeepL Translate. Please choose a different language.

  • Solution: Choose a different target language that is supported by DeepL Translate.
  • Prevention: Check the supported languages list before starting translation.
DeepL XML Parsing Error

DeepL XML parsing error. Please try again.

  • Solution: Try using a different model or try again later.
  • Prevention: N/A (server-side issue)
Invalid Request Format

Invalid translation request format. Please try again.

  • Solution: Try refreshing the page and starting the translation again.
  • Prevention: Ensure the subtitle content is properly formatted before translation.
Missing Target Language

Target language not properly specified in the translation request.

  • Solution: Make sure you have selected a target language before starting translation.
  • Prevention: Always select a target language before starting translation.
Missing API Key

DeepL API key is required. Please provide a valid API key in the settings.

  • Solution: Add your DeepL API key in the settings.
  • Prevention: Configure your DeepL API key before using DeepL translation.
Authentication Error

DeepL API key is invalid or expired

  • Solution: Check and update your DeepL API key in the settings.
  • Prevention: Ensure your API key is valid and not expired.
Quota Exceeded

DeepL translation quota exceeded for this billing period, please try using another model or use your own API key.

  • Solution: Wait until the next billing period or upgrade your DeepL plan. You can also try using another model or use your own API key.
  • Prevention: N/A (server-side issue)
Rate Limit Exceeded

Too many requests to DeepL API. Please try again later

  • Solution: Wait a few minutes and try again.
  • Prevention: Space out your translation requests.
Connection Error

Unable to connect to DeepL service

  • Solution: Check your internet connection and try again.
  • Prevention: Ensure stable internet connection before translating.
Temporary Connection Issue

Temporary connection issue with DeepL. Please try again

  • Solution: Wait a moment and try again.
  • Prevention: N/A (server-side issue)
Glossary Not Found

Specified DeepL glossary not found

  • Solution: Check if the glossary exists or create a new one.
  • Prevention: Verify glossary availability before use.
General Error

DeepL translation error occurred

  • Solution: Try again or switch to a different translation service.
  • Prevention: N/A (server-side issue)
Unknown Error

Unknown error occurred while using DeepL service

  • Solution: Try again or contact support if the issue persists.
  • Prevention: N/A (unexpected error)

You can use OpenAI's Whisper to transcribe the video, then use the transcribed subtitle file as the input for translation. If you just want to transcribe a video file, you can download the tool from this GitHub repository: vibe. If you want to download a YouTube video, transcribe it with Whisper, and translate it with GPT in one go, I recommend GPT_subtitles, a tool I've open-sourced on my GitHub. That repo is the inspiration for this project, and if you find it useful, please consider leaving a star on the repo ⭐. Thank you!

The token cost for translations is calculated based on the number of input and output tokens required to process the subtitle content. OpenAI charges different prices for input and output tokens, and the prices vary depending on the model used.

In GPT Subtitler, we use a price of $0.5 per 1M tokens as the baseline for token consumption. Token usage is then scaled proportionally, based on the actual tokens consumed and the pricing of the model used.
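Concretely, a model's token multiplier is its per-million-token price divided by the $0.5 baseline. A quick sketch of this calculation (the gpt-4o-mini prices below are taken from the pricing table):

```python
BASELINE = 0.5  # baseline price: $0.5 per 1M tokens

def multiplier(price_per_million):
    """Token multiplier relative to the $0.5/1M-token baseline."""
    return price_per_million / BASELINE

# gpt-4o-mini: $0.15 input / $0.60 output per 1M tokens
print(multiplier(0.15))  # 0.3  -> matches the listed 0.3x input multiplier
print(multiplier(0.60))  # 1.2  -> matches the listed 1.2x output multiplier
```

So a model priced exactly at the baseline has a 1x multiplier, and free-tier models listed at $0.01 get the 0.02x multiplier shown in the table.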

OpenAI Pricing
| Model | Input Token Price (per 1M tokens) | Output Token Price (per 1M tokens) | Input Token Multiplier | Output Token Multiplier | Context Window | Max Generation | Output Speed | LMSYS Score |
|---|---|---|---|---|---|---|---|---|
| gpt-5 | $1.25 | $10.00 | 2.5x | 20x | 400k | 128000 | Medium | 1481 |
| gpt-5-chat-latest | $1.25 | $10.00 | 2.5x | 20x | 400k | 128000 | Fast | 1481 |
| gemini-2.5-pro-preview-05-06 | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Slow | 1460 |
| gemini-2.5-pro-preview-06-05 | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Medium | 1460 |
| gemini-2.5-pro-preview-06-05-thinking | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Slow | 1460 |
| gemini-2.5-pro | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Medium | 1460 |
| gemini-2.5-pro-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1460 |
| gemini-2.5-pro-openai | $1.25 | $10.00 | 2.5x | 20x | 1M | 32768 | Moderately fast | 1460 |
| gemini-2.5-pro-openai-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1460 |
| chatgpt-4o-latest | $5.00 | $15.00 | 10x | 30x | 128k | 16384 | Moderately fast | 1442 |
| grok-4-latest | $3.00 | $15.00 | 6x | 30x | 256k | 16384 | Moderately fast | 1429 |
| qwen3-235b-a22b-instruct-2507 | $0.29 | $1.14 | 0.572x | 2.286x | 128k | 8192 | Moderately fast | 1428 |
| openrouter/deepseek/deepseek-r1-0528-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 163k | 8192 | Slow | 1424 |
| openrouter/deepseek/deepseek-r1-0528 | $0.50 | $2.18 | 1x | 4.36x | 163k | 8192 | Slow | 1424 |
| deepseek-reasoner | $0.55 | $2.19 | 1.1x | 4.38x | 128k | 16384 | Slow | 1417 |
| grok-3-latest | $3.00 | $15.00 | 6x | 30x | 131k | 8192 | Moderately fast | 1409 |
| gemini-2.5-flash-preview-05-20 | $0.15 | $0.60 | 0.3x | 1.2x | 1M | 8192 | Fast | 1408 |
| gemini-2.5-flash-preview-05-20-thinking | $0.15 | $3.50 | 0.3x | 7x | 1M | 8192 | Moderately fast | 1408 |
| gemini-2.5-flash-preview-05-20-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1408 |
| gemini-2.5-flash-preview-05-20-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1408 |
| gemini-2.5-flash | $0.30 | $2.50 | 0.6x | 5x | 1M | 8192 | Fast | 1408 |
| gemini-2.5-flash-thinking | $0.30 | $2.50 | 0.6x | 5x | 1M | 8192 | Moderately fast | 1408 |
| gemini-2.5-flash-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1408 |
| gemini-2.5-flash-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1408 |
| gpt-4.1 | $2.00 | $8.00 | 4x | 16x | 1M | 32768 | Moderately fast | 1407 |
| qwen3-235b-a22b-thinking-2507 | $0.29 | $2.86 | 0.572x | 5.714x | 128k | 8192 | Medium | 1401 |
| claude-sonnet-4-20250514-thinking | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1398 |
| o4-mini | $1.10 | $4.40 | 2.2x | 8.8x | 200k | 100000 | Moderately fast | 1397 |
| openrouter/deepseek/deepseek-r1 | $0.75 | $2.40 | 1.5x | 4.8x | 64k | 8192 | Slow | 1394 |
| openrouter/deepseek/deepseek-r1-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 128000 | Slow | 1394 |
| siliconflow/deepseek-ai/DeepSeek-R1 | $0.57 | $2.29 | 1.14x | 4.57x | 64k | 8192 | Slow | 1394 |
| gemini-2.5-flash-preview-04-17 | $0.15 | $0.60 | 0.3x | 1.2x | 1M | 8192 | Fast | 1392 |
| gemini-2.5-flash-preview-04-17-thinking | $0.15 | $3.50 | 0.3x | 7x | 1M | 8192 | Moderately fast | 1392 |
| gemini-2.5-flash-preview-04-17-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1392 |
| gemini-2.5-flash-preview-04-17-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1392 |
| claude-3-7-sonnet-20250219-thinking | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1385 |
| claude-sonnet-4-20250514 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1385 |
| qwen3-30b-a3b-instruct-2507 | $0.11 | $0.43 | 0.214x | 0.858x | 128k | 8192 | Fast | 1379 |
| qwen3-30b-a3b-thinking-2507 | $0.11 | $1.07 | 0.214x | 2.142x | 128k | 8192 | Moderately fast | 1379 |
| gemini-2.5-flash-lite-preview-06-17 | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fastest | 1377 |
| gemini-2.5-flash-lite-preview-06-17-thinking | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fast | 1377 |
| gemini-2.5-flash-lite-preview-06-17-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fastest | 1377 |
| gemini-2.5-flash-lite-preview-06-17-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1377 |
| openrouter/qwen/qwen3-30b-a3b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Fast | 1377 |
| gpt-4.1-mini | $0.40 | $1.60 | 0.8x | 3.2x | 1M | 32768 | Fast | 1372 |
| qwen-max-latest | $2.86 | $8.57 | 5.714x | 17.142x | 32k | 8192 | Medium | 1372 |
| openrouter/qwen/qwen3-235b-a22b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Moderately fast | 1372 |
| claude-3-7-sonnet-20250219 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1369 |
| claude-3-5-sonnet-20241022 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1366 |
| gemini-2.0-flash | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fast | 1364 |
| openrouter/google/gemma-3-27b-it-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 1363 |
| grok-3-mini-latest | $0.30 | $0.50 | 0.6x | 1x | 131k | 8192 | Fast | 1359 |
| gemini-2.0-flash-exp-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1358 |
| deepseek-chat | $0.27 | $1.10 | 0.54x | 2.2x | 128k | 8192 | Moderately fast | 1356 |
| gemini-2.0-flash-lite-preview | $0.07 | $0.30 | 0.15x | 0.6x | 1M | 8192 | Fastest | 1351 |
| gemini-2.0-flash-thinking-exp-01-21-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 32768 | Fast | 1351 |
| gemini-2.0-flash-lite-preview-02-05-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fastest | 1351 |
| gemini-1.5-pro-latest | $1.25 | $5.00 | 2.5x | 10x | 2M | 8192 | Medium | 1350 |
| gemini-1.5-pro-latest-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 2M | 8192 | Slow | 1350 |
| mistral-small-latest (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Medium | 1348 |
| openrouter/qwen/qwen3-32b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Fast | 1346 |
| o3-mini | $1.10 | $4.40 | 2.2x | 8.8x | 200k | 100000 | Moderately fast | 1344 |
| qwen-plus-latest | $0.11 | $0.29 | 0.228x | 0.572x | 131k | 8192 | Fast | 1344 |
| gpt-4o | $2.50 | $10.00 | 5x | 20x | 128k | 16384 | Moderately fast | 1343 |
| openrouter/qwen/qwq-32b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 1336 |
| gpt-4o-2024-11-20 | $2.50 | $10.00 | 5x | 20x | 128k | 16384 | Moderately fast | 1318 |
| gemini-1.5-flash-latest | $0.07 | $0.30 | 0.15x | 0.6x | 1M | 8192 | Fast | 1280 |
| gemini-1.5-flash-latest-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Moderately fast | 1280 |
| gpt-4.1-nano | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 32768 | Fastest | 1279 |
| gpt-4o-mini | $0.15 | $0.60 | 0.3x | 1.2x | 128k | 16384 | Fast | 1269 |
| mistral-large-latest (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Medium | 1260 |
| claude-3-5-haiku-20241022 | $1.00 | $5.00 | 2x | 10x | 200k | 8192 | Fastest | 1246 |
| gpt-5-mini | $0.25 | $2.00 | 0.5x | 4x | 400k | 128000 | Fast | 0 |
| gpt-5-nano | $0.05 | $0.40 | 0.1x | 0.8x | 400k | 128000 | Fastest | 0 |
| open-mistral-nemo (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 0 |
| pixtral-12b-2409 (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 0 |
| third-party-model | $0.00 | $0.00 | 1x | 1x | 128k | 8192 | Medium | 0 |
| qwen-turbo-latest | $0.04 | $0.09 | 0.086x | 0.172x | 1M | 8192 | Fastest | 0 |
| qwen-flash | $0.02 | $0.21 | 0.042x | 0.428x | 1M | 8192 | Fastest | 0 |
| google-translate | $0.00 | $0.00 | 0.01x | 0.01x | 100K | 4096 | Fastest | 0 |
| deepl-translate | $2.50 | $2.50 | 5x | 5x | 100K | 4096 | Fastest | 0 |

Please note that while the LMSYS Score reflects a model's general capability, actual translation performance may vary for some language pairs. Generally, however, Gemini models are the best choice for non-English translation.


  • The Context Window represents the maximum number of tokens the model can process in a single request.
  • Max Generation is the maximum number of tokens the model can generate in a single response.
  • Output Speed is a relative measure of how quickly the model generates responses.
  • The LMSYS Score is a benchmark score from lmsys.org, indicating the model's general performance across various tasks. (As of 2025-02-06)

The total token cost for a translation request is calculated by multiplying the number of input tokens by the input token price and multiplier, and the number of output tokens by the output token price and multiplier. The sum of these costs gives the final token price for the translation.

You can view the estimated token usage and price for each translation request in the "Translation Cost" section when creating a new translation project. If you have a paid subscription plan, you may have a daily token allowance that you can use for translations without incurring additional costs.


Note: Basic plan users get a 5% discount on token usage, and Premium plan users get a 10% discount on token usage.

Example 1: Using Claude-Haiku model with 1000 input tokens and 2000 output tokens:

Input token = 1000 * 0.5 = 500
Output token = 2000 * 2.5 = 5000
Total token = 500 + 5000 = 5500

Example 2: Using GPT-3.5-TURBO model with 1000 input tokens and 2000 output tokens:

Input token = 1000 * 1.0 = 1000
Output token = 2000 * 3.0 = 6000
Total token = 1000 + 6000 = 7000

Example 3: Premium plan users using Claude-Haiku model with 1000 input tokens and 2000 output tokens:

Input token = 1000 * 0.5 = 500
Output token = 2000 * 2.5 = 5000
Total token = (500 + 5000)*0.9 = 4950

Cache Pricing Information

We use Anthropic's cache service for Claude models and cache the system prompt for each translation request. This can significantly reduce token usage and cost for subsequent translations if you use the same model and settings and have a long additional context and system prompt. The following table shows the pricing information for cache operations:

For more information about prompt caching, see Anthropic's blog post on prompt caching

| Model | Cache Writes Token Price | Cache Hits Token Price | Cache Writes Token Multiplier | Cache Hits Token Multiplier |
|---|---|---|---|---|
| claude-3-5-sonnet-20241022 | $3.75 | $0.30 | 7.5x | 0.6x |
| claude-3-5-haiku-20241022 | $1.25 | $0.10 | 2.5x | 0.2x |
| claude-3-7-sonnet-20250219 | $3.75 | $0.30 | 7.5x | 0.6x |
| claude-3-7-sonnet-20250219-thinking | $3.75 | $0.30 | 7.5x | 0.6x |
| claude-sonnet-4-20250514 | $3.75 | $0.30 | 7.5x | 0.6x |
| claude-sonnet-4-20250514-thinking | $3.75 | $0.30 | 7.5x | 0.6x |
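To see why caching pays off across repeated requests, compare a cached versus uncached system prompt. A rough sketch of the arithmetic (using claude-3-5-sonnet's multipliers: 6x regular input, 7.5x cache write, 0.6x cache hit; the 2000-token prompt size is a made-up example):

```python
def prompt_cost(prompt_tokens, mult):
    """Token cost of sending a system prompt at the given multiplier."""
    return prompt_tokens * mult

SYSTEM_PROMPT_TOKENS = 2000  # hypothetical long system prompt + context

uncached = prompt_cost(SYSTEM_PROMPT_TOKENS, 6.0)  # every batch pays regular input price
write = prompt_cost(SYSTEM_PROMPT_TOKENS, 7.5)     # first batch: cache write (costs more)
hit = prompt_cost(SYSTEM_PROMPT_TOKENS, 0.6)       # later batches: cache hit (much cheaper)
print(uncached, write, hit)  # 12000.0 15000.0 1200.0

# Over 10 batches with the same prompt: uncached vs write-once-then-hit
print(10 * uncached, write + 9 * hit)  # 120000.0 25800.0
```

The cache write costs more than a regular request, so caching only helps once the same prompt is reused; with a long prompt over many batches, the savings dominate.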

Our platform offers several transcription models with different cost structures and performance characteristics. The table below compares these models based on token cost per minute, processing speed, word error rate, and language support.

| Model | Tokens/Minute | Speed | Word Error Rate | Languages |
|---|---|---|---|---|
| AssemblyAI Slam-1 (English Only) | 12,000 | Medium | 8.6% | English only |
| AssemblyAI Universal | 12,000 | Medium | 8.7% | 102 languages |
| Whisper Large v3 | 8,000 | Medium | 10.3% | 99 languages |
| Wizper | 4,000 | Fast | 10.3% | 99 languages |
| Mistral Voxtral Mini | 2,000 | Fast | 11.2% | Multilingual |
| AssemblyAI Nano | 5,000 | Fast | 12.7% | 102 languages |

Important Notes on Transcription Models

  • Token cost is calculated based on the duration of the audio file, with different models having different token rates per minute.
  • Word Error Rate (WER) is a measure of transcription accuracy - lower percentages indicate better performance.
  • Processing speed indicates how quickly the model can transcribe audio relative to its duration.
  • The Languages column shows how many languages each model supports; some models are optimized for specific languages.
  • AssemblyAI models are billed per second with rounding to the nearest second.
  • FAL AI models (Whisper, Wizper) are billed by exact duration with no rounding.
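The duration-based billing described above can be sketched as a small calculation (rates are from the table; the rounding flag is an illustration of the stated per-second rounding policy, not the services' actual billing code):

```python
def transcription_tokens(duration_seconds, tokens_per_minute, round_to_second=False):
    """Token cost of transcribing audio of the given duration."""
    if round_to_second:
        duration_seconds = round(duration_seconds)  # AssemblyAI: nearest second
    return duration_seconds / 60 * tokens_per_minute

# 10-minute file with Whisper Large v3 (8,000 tokens/minute, billed by exact duration)
print(transcription_tokens(600, 8000))  # 80000.0
# 90.4-second clip with AssemblyAI Nano (5,000 tokens/minute, rounded to 90 s)
print(transcription_tokens(90.4, 5000, round_to_second=True))  # 7500.0
```

Note that only the audio duration matters here, not the amount of speech in it, which is why the per-minute rates differ so much between models.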

For general use, we recommend Whisper Large v3 for the best balance of accuracy and cost. For English-only content requiring the highest accuracy, AssemblyAI Slam-1 is recommended. For cost-sensitive applications, AssemblyAI Nano offers good performance at a lower price point.
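Since token cost is based on audio duration, it can be estimated up front. The sketch below is an assumption about how the per-minute rates in the table combine with file duration; the model keys and function name are illustrative, not part of the site's API.

```python
# Estimate tokens charged for a transcription job using the per-minute rates
# from the table above (values are this site's token units, not provider dollars).
RATES = {
    "whisper-large-v3": 8_000,
    "assemblyai-nano": 5_000,
    "wizper": 4_000,
}

def estimate_tokens(model: str, duration_seconds: float) -> int:
    """Tokens charged = per-minute rate x audio duration in minutes."""
    return round(RATES[model] * duration_seconds / 60)

print(estimate_tokens("whisper-large-v3", 30 * 60))  # 30-minute file → 240000
```

Note that actual billing granularity differs by provider, as described above: AssemblyAI rounds to the nearest second, while FAL AI models bill by exact duration.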

Yes, you can use your own OpenAI/Claude/Gemini API key for translation. If you provide an API key, the server will use it for translation, and your account's Token balance and usage will not be affected. If you have a paid subscription plan, you can also use the translation service without providing an API key and without spending Token balance, until your Token quota is exhausted.

To get an OpenAI API key, register an account on the OpenAI website and create a key on its API keys page. Once you have the API key, enter it in the translation settings to enable the OpenAI API for translations. This website will not store your API key or use it for anything other than the translation process. It is recommended to rotate your API key periodically to keep your account secure.

To get a free Gemini API key, visit https://aistudio.google.com/app/apikey. Follow the instructions to generate your API key. Once you have the API key, you can use it in the translation settings to enable the use of the Gemini API for translations.

  1. Sign in to your account.
  2. Navigate to the "Settings" page.
  3. On the settings page, you will find information about your token balance, subscription tier, and usage details.
  4. You can also view your token usage history and transaction logs on the same page.

The Whisper transcription feature is a new addition to our service that allows you to automatically transcribe audio and video files. It uses OpenAI's Whisper model to generate accurate transcriptions in multiple languages. This feature is particularly useful for creating subtitles for videos, transcribing interviews, or converting speech to text for various purposes. You can access this feature in the 'Audio Transcriber' section of our website.

Try it now!

  1. Go to the 'Audio Transcriber' section of the website.
  2. Upload your audio or video file, or provide a YouTube URL.
  3. Select the language of the audio content (if known).
  4. Optionally, add any specific instructions or context in the 'Prompt' field.
  5. Click the 'Start Transcription' button to start the process.
  6. Once completed, you can view, edit, and download the transcription, or create a new subtitle project with the result.

The batch translation feature allows you to translate multiple subtitle files in one project. Here are the key points:

File Limits:
  • Maximum 10 files per batch project
  • Free users: translate up to 2 files simultaneously
  • Basic/Premium users: translate up to 5 files simultaneously
Translation Process:
  • Create a new batch project
  • Upload your SRT files (drag & drop supported)
  • Select files to translate
  • Configure translation settings once for all files
  • Monitor progress and download completed translations
Tips:
  • Avoid using the free model to translate too many files at once, as it may cause rate-limit issues
  • Preview files before translation
  • Download individual files or all completed translations at once
  • Stop and resume translations as needed

GPT Subtitler supports translation between all languages supported by the language models, including English, Chinese, Spanish, Japanese, Korean, and more. It can even translate some fictional languages, such as Elvish or Valyrian. When creating a new translation project, you can select the source and target languages from the provided dropdown menus. If your desired language is not listed, you can select "Other" and manually enter the language. Please note that translation quality may vary depending on the language pair and the model used. Currently, this project is optimized for English-to-Chinese translation; results for other language pairs may vary.

The backbone of our translation service is available on GitHub as GPT_subtitles. In short, translations are processed by sending the subtitle content in batches to the OpenAI/Claude/Gemini API. The prompt is carefully crafted to guide the model toward accurate translations, and we also use additional context provided by the user and few-shot examples to improve translation quality. The API generates the translated text based on the input content and the specified settings, such as source and target languages, prompt, and model. The translated text is then displayed in real-time on the website, allowing you to monitor progress and download the translated subtitles once the process is complete.
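The batching step described above can be sketched as follows. This is a conceptual illustration only, not the actual server code; the function name and subtitle fields mirror the few-shot example format shown later on this page.

```python
# Conceptual sketch of the batch flow: subtitles are split into fixed-size
# chunks, and each chunk would then be sent to the LLM API with the crafted
# prompt. Only the chunking is shown here; the API call is omitted.
def chunk_subtitles(subtitles: list[dict], batch_size: int) -> list[list[dict]]:
    """Split a subtitle list into consecutive batches of at most batch_size."""
    return [subtitles[i:i + batch_size] for i in range(0, len(subtitles), batch_size)]

subs = [{"index": i, "original_text": f"line {i}"} for i in range(1, 8)]
batches = chunk_subtitles(subs, 3)
print([len(b) for b in batches])  # → [3, 3, 1]
```

Translating chunk by chunk is what lets the site show progress in real time and resume partway through a file.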

Few-shot examples are a small set of example translations provided to the AI model to guide and improve the quality of the generated translations. By including a few relevant examples of how specific phrases or sentences should be translated from the source language to the target language, the model can better understand the desired translation style, tone, and context. This approach helps to produce more accurate and contextually appropriate translations, especially for domain-specific or idiomatic expressions. The minimal required format for few-shot examples is as follows:


[
  {
    "input": {
      "current_batch_subtitles": [
        {
          "index": 1,
          "original_text": "This is a sample subtitle."
        }
      ]
    },
    "output": {
      "current_batch_subtitles_translation": [
        {
          "index": 1,
          "original_text": "This is a sample subtitle.",
          "first_translation": "这是一个示例字幕。",
          "translation": "这是一个示例字幕。"
        }
      ]
    }
  }
]

Optionally, you can include 'previous_batch_subtitles' and 'next_batch_subtitles' fields to provide additional context for the translations. These fields should be placed inside the 'input' object, alongside the 'current_batch_subtitles' field. The structure for these optional fields is as follows:


{
  "previous_batch_subtitles": [
    {
      "index": 1,
      "original_text": "Previous subtitle 1",
      "first_translation": "之前的字幕1",
      "translation": "之前的字幕1"
    }
  ],
  "next_batch_subtitles": [
    {
      "index": 3,
      "original_text": "Next subtitle 1"
    }
  ]
}
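If you assemble few-shot examples by hand, a quick structural check before pasting them in can catch malformed JSON. The helper below is an unofficial sanity check written for this page, not a validator provided by the site; it only verifies the minimal required shape shown above.

```python
# Unofficial sanity check that a few-shot examples string matches the minimal
# format documented above: a JSON array of {"input": ..., "output": ...}
# objects with the required subtitle fields.
import json

def validate_examples(raw: str) -> bool:
    examples = json.loads(raw)
    if not isinstance(examples, list):
        return False
    for ex in examples:
        batch = ex.get("input", {}).get("current_batch_subtitles")
        out = ex.get("output", {}).get("current_batch_subtitles_translation")
        if not batch or not out:
            return False
        if not all("index" in s and "original_text" in s for s in batch):
            return False
        if not all("index" in t and "translation" in t for t in out):
            return False
    return True

sample = ('[{"input": {"current_batch_subtitles": '
          '[{"index": 1, "original_text": "Hi"}]}, '
          '"output": {"current_batch_subtitles_translation": '
          '[{"index": 1, "original_text": "Hi", "translation": "你好"}]}}]')
print(validate_examples(sample))  # → True
```

The optional 'previous_batch_subtitles' and 'next_batch_subtitles' fields pass through unchecked, since they are not required.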

Prompt for generating few-shot examples. After generating the examples, copy the final JSON array into the few-shot examples section; this should improve translation accuracy and reduce the model's error rate:

Source Language: (fill in here)
Target Language:  (fill in here)
I need few-shot examples for translating subtitles from [Source Language] to [Target Language]. Please provide 5-7 examples that showcase common subtitle structures, idiomatic expressions, and cultural references. Each example should include the original text in [Source Language], its translation in [Target Language], and any relevant context or explanation. The examples should be formatted as follows:

{
  "input": {
    "current_batch_subtitles": [
      {
        "index": 1,
        "original_text": "[Source Language Text]"
      }
    ]
  },
  "output": {
    "current_batch_subtitles_translation": [
      {
        "index": 1,
        "original_text": "[Source Language Text]",
        "translation": "[Target Language Translation]"
      }
    ]
  }
}

Please ensure the examples cover a range of scenarios typical in subtitle translation.

After providing the individual examples, please combine all the examples into a single JSON array structure, like this:

[
  {
    "input": { ... },
    "output": { ... }
  },
  {
    "input": { ... },
    "output": { ... }
  },
  ...
]

This combined JSON structure should include all the examples you've created, making it easy to use as a complete set of few-shot examples for the translation model.

Based on experience, Gemini-pro generally offers the best performance for translation tasks. However, the optimal model can vary depending on the specific language pair. If Gemini-pro's performance doesn't meet your expectations, try the following models in this order:

1. Gemini-pro
2. Claude-3.5-sonnet
3. GPT-4o
4. GPT-4o-mini
5. Gemini-flash
6. Claude-haiku

It's recommended to test these models with a smaller batch of text to determine which works best for your specific use case. Note that DeepSeek models are exceptionally good and cost-effective for Chinese translation but may not perform as well for other languages.

Recommended settings:

1. Batch Size: Use a higher batch size to provide more context for the translation.
2. Enable Context Memory: This helps the model better understand the video context and produce more accurate translations, although it may slightly increase token usage.
3. Fluent Merging: Beneficial when using the Whisper model to generate subtitles. It helps handle cases where sentences are broken across multiple lines, creating more natural translations.
4. Few-shot examples: These can significantly improve translation accuracy by providing guidance to the model. As these are language-specific, you may need to create custom examples for your language pair.

A setting incorporating many of these features is available at: https://gptsubtitler.com/settings/8dd9a960-cb30-4141-8b23-d3944d97773e. You can access and modify it to fit your specific needs.

In the settings modal, click the 'Share' button next to your setting. You can then review and edit the setting; after publishing it, use "Copy link" to get a shareable link.

Yes! You can open translation settings from links shared by others, bookmark the ones you like, and load them into your own translation settings.

Go to your account settings, where you'll find an option to edit your username. If you haven't set one, a random username will be assigned, which you can then change.

Custom instructions in the Additional Context field give the AI model specific guidance on how to translate your subtitles. Here are some tips for writing effective instructions:

1. Be specific and clear: Instead of "make it sound good", try "translate using casual language appropriate for a teenage audience".

2. Focus on one aspect at a time: Separate formatting instructions ("preserve line breaks") from style instructions ("use formal language").

3. Provide examples when possible: If you want specific terms translated in a certain way, include examples.

4. Consider cultural context: If the content has cultural references, specify how to handle them ("explain Japanese cultural references briefly in parentheses").

5. Use the Instruction Library: Access pre-made instruction templates by clicking the "Instruction Library" button below the Additional Context field.

Remember that instructions should be in the target language or English, as they're meant to guide the AI, not to be translated themselves.

If you have any questions or feedback, or need further assistance, please feel free to contact me at support@gptsubtitler.com. I will be happy to help with any queries or issues you may have.

I'm sorry that we didn't meet your expectations. We support refunds within 7 days after payment. Please contact me through the email you used when registering (support@gptsubtitler.com), and I will do my best to resolve your issue.

Ready to Transform Your Subtitles?

Join thousands of content creators who are already using GPT Subtitler to reach global audiences.