Driving Data
Driving Data is the new performance that will be transferred to your Source Media.
This article covers the following:
Using Video or Audio as Driving Data
Uploading Driving Data
Using Multiple Driving Data Files
Preparing Your Media in an NLE Before Using DeepEditor
Driving Data can come from alternate takes, alternate shots, or ADR, and can be either an audio or a video file.
Using Video or Audio as Driving Data
You can use video or audio as Driving Data.
- If you upload a video file and it contains both video and audio, we will use the video to "copy and paste" the actor's performance onto the Source Media. The video will drive the new performance, and its audio will be the audio played in the final vub.
- If you upload an audio file as Driving Data, we will use that audio to generate the performance.
Here are the requirements for each:
Audio: you need clean dialogue of the new performance. Compared to video, you have a few more options, depending on the use case:
- For localisation, you can use the new recording from the localisation artist.
- For visual ADR, you could use a recording from the voice-over booth, or you could grab the audio from your rushes.
Video: the video file needs to show the actor delivering the new line; their face and mouth must be clearly in view and must not be occluded at any point in the video.
IMPORTANT: Make sure your Driving Data has the same frame count and frame rate as your Source Media. If it is an audio file, make sure its duration matches your Source Media clip exactly in your non-linear editor.
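As a sanity check before uploading, the frame-count rule can be sketched in a few lines of Python. This is purely illustrative (it is not DeepEditor's actual validation, and the function name and inputs are hypothetical): a clip's frame count is its duration multiplied by its frame rate, and both values must match between the two files.

```python
def frames_match(src_duration_s: float, src_fps: float,
                 drv_duration_s: float, drv_fps: float) -> bool:
    """True if two clips share the same frame rate and frame count."""
    if src_fps != drv_fps:
        return False
    # Frame count = duration x frame rate, rounded to the nearest whole frame.
    return round(src_duration_s * src_fps) == round(drv_duration_s * drv_fps)

# A 5-second source at 25 fps has 125 frames; the driving clip must match both.
print(frames_match(5.0, 25, 5.0, 25))   # True
print(frames_match(5.0, 25, 5.0, 24))   # False: frame rates differ
```

You can read the duration and frame rate of each file from your NLE's clip properties, or from a media-inspection tool such as ffprobe.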
TIP: If you do not have a clear visual of the actor for your driving data, create an audio file from the video file.
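One common way to create that audio file outside DeepEditor is with ffmpeg. A minimal sketch, assuming ffmpeg is installed and on your PATH (the helper name and file names are ours, not part of DeepEditor):

```python
import subprocess

def extract_audio_cmd(video_path: str, audio_path: str) -> list[str]:
    """ffmpeg command line that drops the video stream and writes a WAV file."""
    return ["ffmpeg", "-i", video_path,
            "-vn",                    # discard the video stream
            "-acodec", "pcm_s16le",   # uncompressed 16-bit PCM audio
            audio_path]

# To run it (requires ffmpeg on your PATH):
# subprocess.run(extract_audio_cmd("take_02.mov", "take_02.wav"), check=True)
```

Because this re-wraps the existing audio rather than re-recording it, the resulting file keeps the same duration as the video it came from.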
See “Supported Media Formats” for more information.
Uploading Driving Data
Driving data is uploaded in the shot media library (the Media tab). You can drag and drop it into the box or click "Add Driving Data" and browse for your media.
Once the progress bar turns green and “Done” appears, DeepEditor has assessed the metadata of your driving data and confirmed that it is compatible with our supported formats.
If you get an error message, there is a problem with the media. Hover over the error to get more information about it and guidance on how to fix it.
The most common reason for a driving data file returning an error is that its frame count does not match the Source Media.
Please see “Supported Media Formats” and “Common Issues and Solutions” for more details.
Using Multiple Driving Data Files
Multiple driving data files can be uploaded to the shot media library.
This means you can generate multiple vubs from the same source, each with different dialogue being spoken.
This is useful for testing different line deliveries, creating multilingual versions, or generating alternate takes without needing reshoots.
NOTE: You will need to spend a token to generate a vub from each driving data file.
Preparing Your Media in an NLE Before Using DeepEditor
Before uploading your source and driving data to DeepEditor, we recommend working with your media in a non-linear editor (NLE) such as Avid, Premiere Pro, or DaVinci Resolve.
Why Use an NLE First?
NLEs provide essential tools for assessing the relationship between your source and driving data, ensuring smoother visual dubbing. Key benefits include:
- Playback & Scrubbing – Easily review footage in real time and pinpoint key moments in the performance.
- Rushes Management – View all available takes and select the best source material.
- Dialogue Assessment – Compare the original performance with the new speech to ensure timing and delivery alignment.
By using an NLE first, you can assess your source and driving data selections before bringing them into DeepEditor, leading to better, more natural results. Here is a quick guide to doing this:
- Assess your source
  - Identify the shot/character you want to vub.
  - Study the physical performance of the actor (head movement, eyes, etc.).
  - Know the frame count (duration) of the shot you are vubbing, and perhaps give yourself some handles if you’re in the offline edit.
  - Block out all other faces, so only the face of the actor is visible.
- Line up the driving data with the source
  - Line up your driving data audio on tracks underneath your source.
  - Solo the driving data audio and cover the bottom half of the actor’s face in the source with your hand. Loop the playback and see if you’re convinced the actor is saying the new dialogue.
Once you are satisfied that the new performance works well, export your driving data file(s) and upload them to DeepEditor.