AI-Powered Subtitling: Fast, Accurate, Effortless.
Turn your videos into global content with precise,
context-aware translations
Leveraging cutting-edge large language models to understand the meaning behind words, delivering translations that capture nuance, idioms, and cultural references.
Context: Spanish idiom for being distracted or daydreaming
Being distracted or daydreaming won't help me pass the exam
Experience state-of-the-art subtitle translation with our comprehensive feature set.
Convert audio and video files directly to accurate subtitles using Whisper AI, with support for multiple languages.
Translate your subtitles to and from over 100 languages with high accuracy.
Fine-tune your translations with customizable settings and preferences for different content types and audiences.
Process multiple subtitle files simultaneously with consistent terminology and style across your entire content library.
Complete your subtitle translations in just four simple steps.
Upload your subtitle file in any supported format to begin the translation process.
Choose your target language and adjust translation settings as needed.
Our AI models will translate your subtitles while preserving timing and formatting.
Download your translated subtitles in your preferred format.
Find answers to common questions about GPT Subtitler.
GPT Subtitler supports the SRT (SubRip Text) file format for subtitle uploads. If your subtitles are in a different format, you can convert your subtitle file at subtitletools.
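For reference, an SRT file is plain text: each cue consists of a numeric index, a `HH:MM:SS,mmm --> HH:MM:SS,mmm` timing line, and one or more text lines, with cues separated by blank lines. Here is an illustrative Python sketch (not part of GPT Subtitler itself) that splits an SRT file into cues:

```python
import re

def parse_srt(text: str):
    """Split an SRT file into (index, timing, text_lines) cue tuples."""
    cues = []
    # Cues are separated by one or more blank lines
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        # A valid cue has at least an index line and a timing line
        if len(lines) >= 2 and "-->" in lines[1]:
            cues.append((int(lines[0]), lines[1], lines[2:]))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,000
Hello, world!

2
00:00:03,500 --> 00:00:05,000
Second subtitle line."""

cues = parse_srt(sample)  # two cues with index, timing, and text
```

A converter like subtitletools performs the reverse mapping for other formats (VTT, ASS, and so on) before upload.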
GPT Subtitler offers different pricing plans to suit your needs. We have a free subscription option, where you receive a certain amount of Tokens daily, enough to translate a ~30-minute video using the GPT-4o-mini model. For higher usage and advanced features, we offer Basic and Premium subscription plans. Please visit our pricing page for more details on the available plans and their pricing.
Error: Insufficient tokens. Please purchase more tokens or upgrade your subscription.
Error: Daily subscription usage limit exceeded. Please try again later or use your account Tokens to translate.
Error: Subscription tier is free, user has chosen not to use tokens, and no API key provided. Please upgrade your subscription or choose to use tokens.
Error: Maximum tokens reached. Unable to parse the response. Please reduce the batch size in the advanced settings and try again.
SyntaxError: Unable to parse JSON response. Please retry using the "Fill Missing" feature. This may be due to the model outputting an incorrect format, which is an occasional error. If the problem persists, try reducing the batch size or using a different model.
Please provide a valid API key based on the selected model.
Invalid model selected. Please choose a valid model.
Error: API key daily usage limit reached. Please upgrade your subscription or try again tomorrow.
An unknown error occurred while processing the current batch.
The selected concurrency limit is too high for your subscription tier. Please reduce the concurrency limit.
The selected model is not compatible with concurrent translation. Please select a different model or use your own API key to translate, or disable concurrent translation.
Concurrent translation request data not found.
Failed to process concurrent translation.
Invalid OpenAI request. Please check the prompt and try again; it must contain the word 'json', or it might have exceeded the maximum token limit of this model.
Your OpenAI API key does not have permission to use the specified resource.
Incorrect OpenAI API key provided, or you must be a member of an organization to use the API.
OpenAI request rate limit reached or you have exceeded your current quota. Please try again later or check your plan and billing details.
The OpenAI server encountered an error while processing your request. Please try again later.
OpenAI's API is temporarily overloaded.
Invalid Claude request. Please check the prompt and try again. Note that Claude only supports temperatures between 0 and 1.0.
Your Claude API key does not have permission to use the specified resource.
Incorrect Claude API key provided, or you must be a member of an organization to use the API.
Claude request rate limit reached or you have exceeded your current quota. Please try again later or check your plan and billing details.
Anthropic's server encountered an error while processing your request. Please try again later.
Anthropic's API is temporarily overloaded.
The request body to Gemini is malformed. Your API key may not be valid or the request is missing required parameters.
Your API key doesn't have the required Gemini permissions. Check that your API key is set and has the right access.
The requested Gemini resource wasn't found.
Gemini request rate limit reached. Please try again later, use other models, or consider using your own Gemini API key. You can apply for a free key from Google at https://aistudio.google.com/app/apikey.
An unexpected error occurred on Google's side. Wait a bit and retry your request or use other models.
The Gemini service may be temporarily overloaded or down. Wait a bit and retry your request or use other models.
An error occurred while fetching data from the Gemini API. Please try again later.
The content of the file is blocked by Gemini due to [SAFETY] reasons. Please try again with a different model or modify the content of the file.
The content of the file is blocked by Gemini for [OTHER] reasons. Please try again with a different model or modify the content of the file.
An error occurred while fetching data from the Gemini API.
Invalid DeepSeek request format. Please modify your request body according to the error hints.
DeepSeek authentication failed. Please check your API key.
Insufficient DeepSeek account balance. Please check your account balance and top up.
DeepSeek rate limit reached. Please pace your requests reasonably or temporarily switch to other LLM service providers.
Invalid request to Mistral API
Permission denied for Mistral API
Mistral API rate limit reached
Invalid xAI request. Please check the prompt and try again.
Your xAI API key does not have permission to use the specified resource.
xAI request rate limit reached. Please try again later.
Invalid Qwen request. Please check the prompt and try again.
Your Qwen API key does not have permission to use the specified resource.
Qwen request rate limit reached. Please try again later.
The Qwen server encountered an error while processing your request.
Qwen's API is temporarily overloaded.
Error: Missing parameters for third-party model. Please provide the API key, base URL, and model name.
Third-party API authentication failed. Please check your API key.
Warning: Translation request has exceeded the maximum time limit. Please try again later or enable the Auto-resume feature.
An error occurred while using Google Translate. Please try again later.
Unable to connect to Google Translate. Please try again later.
The selected target language is not supported by Google Translate. Please choose a different language.
Invalid translation request format. Please try again.
Target language not properly specified in the translation request.
Source language not properly specified in the translation request.
The source language is not supported by DeepL Translate. Please choose a different language.
The target language is not supported by DeepL Translate. Please choose a different language.
DeepL XML parsing error. Please try again.
Invalid translation request format. Please try again.
Target language not properly specified in the translation request.
DeepL API key is required. Please provide a valid API key in the settings.
DeepL API key is invalid or expired
DeepL translation quota exceeded for this billing period. Please try another model or use your own API key.
Too many requests to DeepL API. Please try again later
Unable to connect to DeepL service
Temporary connection issue with DeepL. Please try again
Specified DeepL glossary not found
DeepL translation error occurred
Unknown error occurred while using DeepL service
You can use OpenAI's Whisper to transcribe the video, then use the transcribed subtitle file as the input for translation. If you just want to transcribe a video file, you can download the tool vibe from its GitHub repository. If you want to download a YouTube video, transcribe it using Whisper, and translate it using GPT in one go, I recommend GPT_subtitles, a tool I've open-sourced on GitHub. That repo is the inspiration for this project; if you find it useful, please consider leaving a star on the repo ⭐. Thank you!
The token cost for translations is calculated based on the number of input and output tokens required to process the subtitle content. OpenAI charges different prices for input and output tokens, and the prices vary depending on the model used.
In GPT Subtitler, we use a price of $0.50 per 1M tokens as the baseline (a 1x multiplier). Each model's token consumption is then scaled in proportion to its actual price.
OpenAI Pricing
Model | Input Token Price (per 1M tokens) | Output Token Price (per 1M tokens) | Input Token Multiplier | Output Token Multiplier | Context Window | Max Generation | Output Speed | LMSYS Score |
---|---|---|---|---|---|---|---|---|
gpt-5 | $1.25 | $10.00 | 2.5x | 20x | 400k | 128000 | Medium | 1481 |
gpt-5-chat-latest | $1.25 | $10.00 | 2.5x | 20x | 400k | 128000 | Fast | 1481 |
gemini-2.5-pro-preview-05-06 | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Slow | 1460 |
gemini-2.5-pro-preview-06-05 | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Medium | 1460 |
gemini-2.5-pro-preview-06-05-thinking | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Slow | 1460 |
gemini-2.5-pro | $1.25 | $10.00 | 2.5x | 20x | 1M | 8192 | Medium | 1460 |
gemini-2.5-pro-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1460 |
gemini-2.5-pro-openai | $1.25 | $10.00 | 2.5x | 20x | 1M | 32768 | Moderately fast | 1460 |
gemini-2.5-pro-openai-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1460 |
chatgpt-4o-latest | $5.00 | $15.00 | 10x | 30x | 128k | 16384 | Moderately fast | 1442 |
grok-4-latest | $3.00 | $15.00 | 6x | 30x | 256k | 16384 | Moderately fast | 1429 |
qwen3-235b-a22b-instruct-2507 | $0.29 | $1.14 | 0.572x | 2.286x | 128k | 8192 | Moderately fast | 1428 |
openrouter/deepseek/deepseek-r1-0528-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 163k | 8192 | Slow | 1424 |
openrouter/deepseek/deepseek-r1-0528 | $0.50 | $2.18 | 1x | 4.36x | 163k | 8192 | Slow | 1424 |
deepseek-reasoner | $0.55 | $2.19 | 1.1x | 4.38x | 128k | 16384 | Slow | 1417 |
grok-3-latest | $3.00 | $15.00 | 6x | 30x | 131k | 8192 | Moderately fast | 1409 |
gemini-2.5-flash-preview-05-20 | $0.15 | $0.60 | 0.3x | 1.2x | 1M | 8192 | Fast | 1408 |
gemini-2.5-flash-preview-05-20-thinking | $0.15 | $3.50 | 0.3x | 7x | 1M | 8192 | Moderately fast | 1408 |
gemini-2.5-flash-preview-05-20-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1408 |
gemini-2.5-flash-preview-05-20-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1408 |
gemini-2.5-flash | $0.30 | $2.50 | 0.6x | 5x | 1M | 8192 | Fast | 1408 |
gemini-2.5-flash-thinking | $0.30 | $2.50 | 0.6x | 5x | 1M | 8192 | Moderately fast | 1408 |
gemini-2.5-flash-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1408 |
gemini-2.5-flash-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1408 |
gpt-4.1 | $2.00 | $8.00 | 4x | 16x | 1M | 32768 | Moderately fast | 1407 |
qwen3-235b-a22b-thinking-2507 | $0.29 | $2.86 | 0.572x | 5.714x | 128k | 8192 | Medium | 1401 |
claude-sonnet-4-20250514-thinking | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1398 |
o4-mini | $1.10 | $4.40 | 2.2x | 8.8x | 200k | 100000 | Moderately fast | 1397 |
openrouter/deepseek/deepseek-r1 | $0.75 | $2.40 | 1.5x | 4.8x | 64k | 8192 | Slow | 1394 |
openrouter/deepseek/deepseek-r1-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 128000 | Slow | 1394 |
siliconflow/deepseek-ai/DeepSeek-R1 | $0.57 | $2.29 | 1.14x | 4.57x | 64k | 8192 | Slow | 1394 |
gemini-2.5-flash-preview-04-17 | $0.15 | $0.60 | 0.3x | 1.2x | 1M | 8192 | Fast | 1392 |
gemini-2.5-flash-preview-04-17-thinking | $0.15 | $3.50 | 0.3x | 7x | 1M | 8192 | Moderately fast | 1392 |
gemini-2.5-flash-preview-04-17-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1392 |
gemini-2.5-flash-preview-04-17-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Medium | 1392 |
claude-3-7-sonnet-20250219-thinking | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1385 |
claude-sonnet-4-20250514 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1385 |
qwen3-30b-a3b-instruct-2507 | $0.11 | $0.43 | 0.214x | 0.858x | 128k | 8192 | Fast | 1379 |
qwen3-30b-a3b-thinking-2507 | $0.11 | $1.07 | 0.214x | 2.142x | 128k | 8192 | Moderately fast | 1379 |
gemini-2.5-flash-lite-preview-06-17 | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fastest | 1377 |
gemini-2.5-flash-lite-preview-06-17-thinking | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fast | 1377 |
gemini-2.5-flash-lite-preview-06-17-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fastest | 1377 |
gemini-2.5-flash-lite-preview-06-17-thinking-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1377 |
openrouter/qwen/qwen3-30b-a3b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Fast | 1377 |
gpt-4.1-mini | $0.40 | $1.60 | 0.8x | 3.2x | 1M | 32768 | Fast | 1372 |
qwen-max-latest | $2.86 | $8.57 | 5.714x | 17.142x | 32k | 8192 | Medium | 1372 |
openrouter/qwen/qwen3-235b-a22b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Moderately fast | 1372 |
claude-3-7-sonnet-20250219 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1369 |
claude-3-5-sonnet-20241022 | $3.00 | $15.00 | 6x | 30x | 200k | 8192 | Moderately fast | 1366 |
gemini-2.0-flash | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 8192 | Fast | 1364 |
openrouter/google/gemma-3-27b-it-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 1363 |
grok-3-mini-latest | $0.30 | $0.50 | 0.6x | 1x | 131k | 8192 | Fast | 1359 |
gemini-2.0-flash-exp-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fast | 1358 |
deepseek-chat | $0.27 | $1.10 | 0.54x | 2.2x | 128k | 8192 | Moderately fast | 1356 |
gemini-2.0-flash-lite-preview | $0.07 | $0.30 | 0.15x | 0.6x | 1M | 8192 | Fastest | 1351 |
gemini-2.0-flash-thinking-exp-01-21-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 32768 | Fast | 1351 |
gemini-2.0-flash-lite-preview-02-05-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Fastest | 1351 |
gemini-1.5-pro-latest | $1.25 | $5.00 | 2.5x | 10x | 2M | 8192 | Medium | 1350 |
gemini-1.5-pro-latest-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 2M | 8192 | Slow | 1350 |
mistral-small-latest (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Medium | 1348 |
openrouter/qwen/qwen3-32b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 41k | 8192 | Fast | 1346 |
o3-mini | $1.10 | $4.40 | 2.2x | 8.8x | 200k | 100000 | Moderately fast | 1344 |
qwen-plus-latest | $0.11 | $0.29 | 0.228x | 0.572x | 131k | 8192 | Fast | 1344 |
gpt-4o | $2.50 | $10.00 | 5x | 20x | 128k | 16384 | Moderately fast | 1343 |
openrouter/qwen/qwq-32b-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 1336 |
gpt-4o-2024-11-20 | $2.50 | $10.00 | 5x | 20x | 128k | 16384 | Moderately fast | 1318 |
gemini-1.5-flash-latest | $0.07 | $0.30 | 0.15x | 0.6x | 1M | 8192 | Fast | 1280 |
gemini-1.5-flash-latest-free (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 1M | 8192 | Moderately fast | 1280 |
gpt-4.1-nano | $0.10 | $0.40 | 0.2x | 0.8x | 1M | 32768 | Fastest | 1279 |
gpt-4o-mini | $0.15 | $0.60 | 0.3x | 1.2x | 128k | 16384 | Fast | 1269 |
mistral-large-latest (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Medium | 1260 |
claude-3-5-haiku-20241022 | $1.00 | $5.00 | 2x | 10x | 200k | 8192 | Fastest | 1246 |
gpt-5-mini | $0.25 | $2.00 | 0.5x | 4x | 400k | 128000 | Fast | 0 |
gpt-5-nano | $0.05 | $0.40 | 0.1x | 0.8x | 400k | 128000 | Fastest | 0 |
open-mistral-nemo (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 0 |
pixtral-12b-2409 (Free) | $0.01 | $0.01 | 0.02x | 0.02x | 128k | 8192 | Fast | 0 |
third-party-model | $0.00 | $0.00 | 1x | 1x | 128k | 8192 | Medium | 0 |
qwen-turbo-latest | $0.04 | $0.09 | 0.086x | 0.172x | 1M | 8192 | Fastest | 0 |
qwen-flash | $0.02 | $0.21 | 0.042x | 0.428x | 1M | 8192 | Fastest | 0 |
google-translate | $0.00 | $0.00 | 0.01x | 0.01x | 100K | 4096 | Fastest | 0 |
deepl-translate | $2.50 | $2.50 | 5x | 5x | 100K | 4096 | Fastest | 0 |
Please note that while the LMSYS Score reflects a model's general capability, actual translation performance may vary for some language pairs. Generally, though, Gemini models are the best choice for non-English translation.
The total token cost for a translation request is calculated by multiplying the number of input tokens by the input token price and multiplier, and the number of output tokens by the output token price and multiplier. The sum of these costs gives the final token price for the translation.
You can view the estimated token usage and price for each translation request in the "Translation Cost" section when creating a new translation project. If you have a paid subscription plan, you may have a daily token allowance that you can use for translations without incurring additional costs.
Note: Basic plan users get a 5% discount on token usage, and Premium plan users get a 10% discount on token usage.
Example 1: Using Claude-Haiku model with 1000 input tokens and 2000 output tokens:
Input tokens = 1000 × 0.5 = 500
Output tokens = 2000 × 2.5 = 5000
Total tokens = 500 + 5000 = 5500
Example 2: Using GPT-3.5-TURBO model with 1000 input tokens and 2000 output tokens:
Input tokens = 1000 × 1.0 = 1000
Output tokens = 2000 × 3.0 = 6000
Total tokens = 1000 + 6000 = 7000
Example 3: Premium plan users using Claude-Haiku model with 1000 input tokens and 2000 output tokens:
Input tokens = 1000 × 0.5 = 500
Output tokens = 2000 × 2.5 = 5000
Total tokens = (500 + 5000) × 0.9 = 4950
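The rule behind the three examples above can be written as a small function. This is an illustrative sketch (the function name and signature are my own, not part of GPT Subtitler's API), using the per-model multipliers from the pricing table and the plan discounts (5% Basic, 10% Premium):

```python
def token_cost(input_tokens: int, output_tokens: int,
               input_mult: float, output_mult: float,
               discount: float = 0.0) -> float:
    """Billable tokens: input and output are scaled by the model's
    multipliers, then reduced by any subscription discount."""
    raw = input_tokens * input_mult + output_tokens * output_mult
    return raw * (1.0 - discount)

# Example 1: 0.5x input / 2.5x output multipliers -> 5500 tokens
# Example 3: same model on a Premium plan (10% discount) -> 4950 tokens
cost_free = token_cost(1000, 2000, 0.5, 2.5)
cost_premium = token_cost(1000, 2000, 0.5, 2.5, discount=0.10)
```

Multipliers for each model are listed in the pricing table above; for instance, gpt-4o-mini is 0.3x input and 1.2x output.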
We use Anthropic's prompt caching for Claude models and cache the system prompt for each translation request. This can significantly reduce token usage and cost for subsequent translations if you use the same model and settings and have a long additional context and system prompt. The following table shows the pricing for cache operations:
For more information about prompt caching, see Anthropic's blog post on prompt caching.
Model | Cache Writes Token Price | Cache Hits Token Price | Cache Writes Token Multiplier | Cache Hits Token Multiplier |
---|---|---|---|---|
claude-3-5-sonnet-20241022 | $3.75 | $0.30 | 7.5x | 0.6x |
claude-3-5-haiku-20241022 | $1.25 | $0.10 | 2.5x | 0.2x |
claude-3-7-sonnet-20250219 | $3.75 | $0.30 | 7.5x | 0.6x |
claude-3-7-sonnet-20250219-thinking | $3.75 | $0.30 | 7.5x | 0.6x |
claude-sonnet-4-20250514 | $3.75 | $0.30 | 7.5x | 0.6x |
claude-sonnet-4-20250514-thinking | $3.75 | $0.30 | 7.5x | 0.6x |
Our platform offers several transcription models with different cost structures and performance characteristics. The table below compares these models based on token cost per minute, processing speed, word error rate, and language support.
Model | Tokens/Minute | Speed | Word Error Rate | Languages |
---|---|---|---|---|
AssemblyAI Slam-1 (English Only) | 12,000 | Medium | 8.6% | English only |
AssemblyAI Universal | 12,000 | Medium | 8.7% | 102 languages |
Whisper Large v3 | 8,000 | Medium | 10.3% | 99 languages |
Wizper | 4,000 | Fast | 10.3% | 99 languages |
Mistral Voxtral Mini | 2,000 | Fast | 11.2% | Multilingual |
AssemblyAI Nano | 5,000 | Fast | 12.7% | 102 languages |
For general use, we recommend Whisper Large v3 for the best balance of accuracy and cost. For English-only content requiring highest accuracy, AssemblyAI Slam-1 is recommended. For cost-sensitive applications, AssemblyAI Nano offers good performance at a lower price point.
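Estimating the Token cost of a transcription is a straight multiplication of the table's tokens-per-minute rate by the audio duration. An illustrative sketch (the dictionary and helper function are assumptions for illustration, not a published API):

```python
# Tokens-per-minute rates from the transcription model table above
TOKENS_PER_MINUTE = {
    "AssemblyAI Slam-1": 12_000,
    "AssemblyAI Universal": 12_000,
    "Whisper Large v3": 8_000,
    "Wizper": 4_000,
    "Mistral Voxtral Mini": 2_000,
    "AssemblyAI Nano": 5_000,
}

def transcription_cost(model: str, minutes: float) -> int:
    """Estimated Token cost = tokens/minute rate * audio duration."""
    return round(TOKENS_PER_MINUTE[model] * minutes)

# A 30-minute video with Whisper Large v3 costs 240,000 Tokens
whisper_cost = transcription_cost("Whisper Large v3", 30)
```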
Yes, you can choose to use your own OpenAI/Claude/Gemini API key for translation. If you provide an API key, the server will use it for translation, and your account Token balance and usage will not be affected. If you use a paid subscription plan, you can also use the translation service without providing an API key and without using Token balance, until your Token quota is exhausted.
To get an OpenAI API key, register an account on the OpenAI website and create a key on its API keys page. Once you have the API key, you can enter it in the translation settings to enable the use of the OpenAI API for translations. This website will not store your API key or use it in any way other than for the translation process. It is recommended to revoke your API key once in a while to ensure the security of your account.
To get a free Gemini API key, visit https://aistudio.google.com/app/apikey. Follow the instructions to generate your API key. Once you have the API key, you can use it in the translation settings to enable the use of the Gemini API for translations.
The Whisper transcription feature is a new addition to our service that allows you to automatically transcribe audio and video files. It uses OpenAI's Whisper model to generate accurate transcriptions in multiple languages. This feature is particularly useful for creating subtitles for videos, transcribing interviews, or converting speech to text for various purposes. You can access this feature in the 'Audio Transcriber' section of our website.
Try it now! The batch translation feature allows you to translate multiple subtitle files in one project, keeping terminology and style consistent across files.
GPT Subtitler supports translation between all languages supported by the language models, including English, Chinese, Spanish, Japanese, Korean, and more. It can even translate some fictional languages, such as Elvish or Valyrian. When creating a new translation project, you can select the source and target languages from the provided dropdown menus. If your desired language is not listed, you can select "Other" and manually enter the language. Please note that translation quality may vary depending on the language pair and the model used. Currently, this project is optimized for English-to-Chinese translation; results for other language pairs may vary.
The backbone of our translation service is open-source on GitHub (GPT_subtitles). In short, translations are processed by sending the subtitle content in batches to the OpenAI/Claude/Gemini API. The prompt is carefully crafted to guide the model to generate accurate translations, and we also use additional context provided by the user and few-shot examples to improve translation quality. The API generates the translated text based on the input content and the specified settings, such as source and target languages, prompt, and model. The translated text is then displayed in real time on the website, allowing you to monitor progress and download the translated subtitles once the process is complete.
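As a rough illustration of the batching step described above (a simplified sketch, not the site's actual implementation), each batch can carry its neighbouring cues as context, mirroring the `previous_batch_subtitles`/`next_batch_subtitles` fields shown in the few-shot example format below:

```python
def make_batches(subtitles: list, batch_size: int = 20) -> list:
    """Group subtitle cues into batches, attaching the neighbouring
    batches as translation context for the model."""
    batches = []
    for i in range(0, len(subtitles), batch_size):
        batches.append({
            "previous_batch_subtitles": subtitles[max(0, i - batch_size):i],
            "current_batch_subtitles": subtitles[i:i + batch_size],
            "next_batch_subtitles": subtitles[i + batch_size:i + 2 * batch_size],
        })
    return batches

# 45 cues with a batch size of 20 yield 3 batches
subtitles = [{"index": n, "original_text": f"Line {n}"} for n in range(1, 46)]
batches = make_batches(subtitles, batch_size=20)
```

Each batch dictionary would then be serialized into the prompt sent to the selected model.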
Few-shot examples are a small set of example translations provided to the AI model to guide and improve the quality of the generated translations. By including a few relevant examples of how specific phrases or sentences should be translated from the source language to the target language, the model can better understand the desired translation style, tone, and context. This approach helps to produce more accurate and contextually appropriate translations, especially for domain-specific or idiomatic expressions. The minimal required format for few-shot examples is as follows:
[
  {
    "input": {
      "current_batch_subtitles": [
        { "index": 1, "original_text": "This is a sample subtitle." }
      ]
    },
    "output": {
      "current_batch_subtitles_translation": [
        {
          "index": 1,
          "original_text": "This is a sample subtitle.",
          "first_translation": "这是一个示例字幕。",
          "translation": "这是一个示例字幕。"
        }
      ]
    }
  }
]
Optionally, you can include 'previous_batch_subtitles' and 'next_batch_subtitles' fields to provide additional context for the translations. These fields should be placed inside the 'input' object, alongside the 'current_batch_subtitles' field. The structure for these optional fields is as follows:
{
  "previous_batch_subtitles": [
    {
      "index": 1,
      "original_text": "Previous subtitle 1",
      "first_translation": "之前的字幕1",
      "translation": "之前的字幕1"
    }
  ],
  "next_batch_subtitles": [
    { "index": 3, "original_text": "Next subtitle 1" }
  ]
}
Source Language: (fill in here)
Target Language: (fill in here)

I need few-shot examples for translating subtitles from [Source Language] to [Target Language]. Please provide 5-7 examples that showcase common subtitle structures, idiomatic expressions, and cultural references. Each example should include the original text in [Source Language], its translation in [Target Language], and any relevant context or explanation. The examples should be formatted as follows:

{
  "input": {
    "current_batch_subtitles": [
      { "index": 1, "original_text": "[Source Language Text]" }
    ]
  },
  "output": {
    "current_batch_subtitles_translation": [
      {
        "index": 1,
        "original_text": "[Source Language Text]",
        "translation": "[Target Language Translation]"
      }
    ]
  }
}

Please ensure the examples cover a range of scenarios typical in subtitle translation. After providing the individual examples, please combine all the examples into a single JSON array structure, like this:

[
  { "input": { ... }, "output": { ... } },
  { "input": { ... }, "output": { ... } },
  ...
]

This combined JSON structure should include all the examples you've created, making it easy to use as a complete set of few-shot examples for the translation model.
Based on experience, Gemini-pro generally offers the best performance for translation tasks. However, the optimal model can vary depending on the specific language pair. If Gemini-pro's performance doesn't meet your expectations, try the following models in this order:
1. Gemini-pro
2. Claude-3.5-sonnet
3. GPT-4o
4. GPT-4o-mini
5. Gemini-flash
6. Claude-haiku
It's recommended to test these models with a smaller batch of text to determine which works best for your specific use case. Note that DeepSeek models are exceptionally good and cost-effective for Chinese translation but may not perform as well for other languages.
Recommended settings:
1. Batch Size: Use a higher batch size to provide more context for the translation.
2. Enable Context Memory: This helps the model better understand the video context and produce more accurate translations, although it may slightly increase token usage.
3. Fluent Merging: Beneficial when using the Whisper model to generate subtitles. It helps handle cases where sentences are broken across multiple lines, creating more natural translations.
4. Few-shot examples: These can significantly improve translation accuracy by providing guidance to the model. As these are language-specific, you may need to create custom examples for your language pair.
A setting incorporating many of these features is available at: https://gptsubtitler.com/settings/8dd9a960-cb30-4141-8b23-d3944d97773e. You can access and modify it to fit your specific needs.
In the settings modal, click the 'Share' button next to your setting. You can then review and edit the setting; after publishing it, use "Copy link" to get a shareable link.
Yes! You can open translation settings from links shared by others, bookmark the ones you like, and load them into your own translation settings.
Go to your account settings, where you'll find an option to edit your username. If you haven't set one, a random username is assigned, which you can then change.
Custom instructions in the Additional Context field give the AI model specific guidance on how to translate your subtitles. Here are some tips for writing effective instructions:
1. Be specific and clear: Instead of "make it sound good", try "translate using casual language appropriate for a teenage audience".
2. Focus on one aspect at a time: Separate formatting instructions ("preserve line breaks") from style instructions ("use formal language").
3. Provide examples when possible: If you want specific terms translated in a certain way, include examples.
4. Consider cultural context: If the content has cultural references, specify how to handle them ("explain Japanese cultural references briefly in parentheses").
5. Use the Instruction Library: Access pre-made instruction templates by clicking the "Instruction Library" button below the Additional Context field.
Remember that instructions should be in the target language or English, as they're meant to guide the AI, not to be translated themselves.
If you have any further questions, feedback, or need assistance, please feel free to contact me at support@gptsubtitler.com . I will be happy to help you with any queries or issues you may have.
I'm sorry we didn't meet your expectations. We support refunds within 7 days of payment. Please contact me through the email you used when registering (support@gptsubtitler.com), and I will do my best to resolve your issue.
Join thousands of content creators who are already using GPT Subtitler to reach global audiences.