Differences Between Closed Captions & Subtitles - Speak Your Language | Australia

What Is Subtitle Translation?

Subtitle translation is the process of translating spoken dialogue from a video into written text in a different language. Once translated, the subtitles are synchronised with the video so that viewers who do not understand the spoken language can follow the content by reading them. One of the most common uses of subtitle translation is for movies and TV shows.


What Are Closed Captions?

Closed captions provide a written transcript of a video’s dialogue, with the addition of text descriptions of sounds such as music, phones ringing, doors slamming and explosions. Subtitles assume that the viewer can hear the film’s audio but cannot understand the spoken language, while closed captions assume the viewer cannot hear the audio at all and therefore needs a text description of everything.

Subtitles are commonly used by people who do not understand the audio language, while captions are usually used to make audiovisual content accessible to people who are hearing impaired (or in contexts where the audio is switched off or unavailable). While subtitles are often in a different language from the source video, closed captions are usually in the same language.

Closed captions differ from open captions in that they can be switched on and off by the viewer, whereas open captions are always on and cannot be switched off.
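The “closed” in closed captions reflects how they are delivered: as a separate text track that the player can toggle, rather than pixels burned into the video frames. As a rough sketch (the cue text and timings below are invented for illustration, and WebVTT is just one of several caption formats), a minimal caption track could be generated like this:

```python
# Sketch: build a minimal WebVTT caption track -- the separate,
# toggleable file that makes closed captions "closed".
# Cue text and timings here are invented for illustration.

def vtt_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_webvtt(cues):
    """cues: list of (start_sec, end_sec, text) tuples."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

captions = [
    (1.0, 3.5, "[phone ringing]"),       # non-speech sound: captions only
    (3.5, 6.0, "Hello? Who is this?"),
]
print(build_webvtt(captions))
```

Because the captions live in their own file, the viewer (or the player) can switch them off at any time; open captions, by contrast, are rendered into the video frames themselves and cannot be removed.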


Captions Process

The process of creating captions will vary depending on the technology available and whether the broadcast is live or delayed. For example, when captioning TV shows or movies that are not live, the captioner may watch the video while also referring to a script in order to create accurate captions. During live broadcasts, the captioner listens to the broadcast in real-time and types the words and sounds as they hear them.

If the captioner has access to Automatic Speech Recognition (ASR) technology, they may choose to use it to speed up the captioning process. The captioner listens to the original audio and repeats the words very clearly so that the ASR technology does not mishear them. They then go through and manually fix any errors made by the ASR (e.g. their instead of they’re), and add other audio information such as background noises, music and verbal nuances.
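A minimal sketch of this manual post-editing step, assuming a hand-made table of commonly confused homophones (the word pairs and function name below are illustrative, not part of any real ASR product; a real correction pass would rely on context and human judgement rather than a fixed table):

```python
import re

# Illustrative homophone fixes a human post-editor might apply to raw
# ASR output. These pairs are examples only.
HOMOPHONE_FIXES = {
    r"\btheir going\b": "they're going",
    r"\byour welcome\b": "you're welcome",
}

def post_edit(asr_text: str) -> str:
    """Apply simple find-and-replace fixes to raw ASR output."""
    for pattern, fix in HOMOPHONE_FIXES.items():
        asr_text = re.sub(pattern, fix, asr_text, flags=re.IGNORECASE)
    return asr_text

print(post_edit("their going to call back"))  # they're going to call back
```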

If a human captioner is not available, fully automatic captioning is also an option. However, using ASR technology without human assistance is prone to contextual errors and mistakes. It will also lack some of the extra information provided by human captioners, such as emotional tone, because ASR technology cannot differentiate tone (e.g. a sarcastic laugh from a surprised laugh).

Subtitle Translation Process

As subtitle translation involves converting spoken content into written text in a different language, the process of subtitle creation can be quite complex. There are four key stages to subtitle translation: spotting, translation, correction and simulation.
• Spotting – The process of determining when and for how long the subtitle should display. This involves synchronising the timing of the subtitles with the dialogue so that the translated text is visible on the screen when the relevant visual content is being displayed.

• Translation – The process of translating the spoken content from the source language into written text in the target language. The translator also takes into account the length of the subtitle required to fit with the timing of the video and the number of characters permitted.

• Correction – The process of editing the translated text so that it flows naturally and contains accurate punctuation. Consideration must be given to the way text is split over frames so they can be easily understood while staying in sync with the spoken dialogue.

• Simulation – The final stage of subtitling. This process involves watching the film just as it would appear to the viewer and making final edits to the text and timing.
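The spotting and correction stages above can be sketched in code. The snippet below builds a cue in the common SubRip (SRT) format, assuming a rule-of-thumb limit of 42 characters per subtitle line and a reading speed of roughly 17 characters per second (both figures are widely used conventions, not fixed standards):

```python
import textwrap

MAX_CHARS_PER_LINE = 42   # common convention for one subtitle line
MAX_LINES = 2             # subtitles rarely exceed two lines on screen
READING_SPEED_CPS = 17.0  # rough characters-per-second reading rate

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def make_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one SRT cue, wrapping the text and checking reading speed."""
    lines = textwrap.wrap(text, MAX_CHARS_PER_LINE)
    if len(lines) > MAX_LINES:
        raise ValueError("text too long for one subtitle; split the cue")
    if len(text) / (end - start) > READING_SPEED_CPS:
        raise ValueError("cue too short for the viewer to read this text")
    body = "\n".join(lines)
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{body}\n"

print(make_cue(1, 2.0, 5.5, "I never said she stole my money."))
```

The timing check mirrors what a subtitler does during spotting: a cue that flashes past faster than the audience can read it is useless, so the text must either be shortened or held on screen for longer.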
While most subtitling is undertaken by human translators, machine subtitling is also available. However, due to the very complex process involved in creating subtitles, machine subtitling tends to be extremely inaccurate and difficult to follow.

When To Use Subtitles And Captions?

The key difference between subtitles and captions is their purpose. Subtitles are designed to allow people to understand audiovisual content in another language, while captions are designed to make audiovisual content accessible to people who are hearing impaired.

The decision on when to use subtitles or captions will always depend on the objective and type of video, the audience and the context in which it is being shown.
Some examples of when to use captions include:
• When the audience is deaf or hard of hearing
• When audio is switched off (e.g. waiting rooms, airports)
• Live events (e.g. Webinars/Conferences/Presentations)

Some examples of when to use subtitles include:
• When the audience does not understand the spoken language (e.g. foreign films)
• When the content does not contain relevant audio information other than the dialogue
• Pre-recorded content (e.g. TV shows and movies)

Captions and subtitles can make the content more accessible and help capture the audience’s attention. In fact, adding captions to a video has been shown to boost engagement, helping viewers retain more information and focus on the content for longer periods.

Subtitles and captions are essential tools for making videos accessible to speakers of other languages and to hearing impaired audiences. They can also help boost engagement and improve the viewers’ enjoyment and understanding of the video. For high-quality subtitle and caption services, contact Speak Your Language today.