Driving Data
What is Driving Data?
Using Video or Audio as Driving Data
Using a Driving Data Video with Audio: Choosing Between Facial and Audio Performance
Uploading and Managing Driving Data
Using Multiple Driving Data Files
Preparing Your Media in an NLE Before Using DeepEditor
What is Driving Data?
Driving Data is the new performance that will be transferred to your Source media.
Driving Data can come from alternate takes, alternate shots, or ADR, and can be either an audio or a video file.
Using Video or Audio as Driving Data
You can use video or audio as Driving Data.
- If you want to use a video file, we will use the video to "copy and paste" the actor's performance onto the Source Media. The video drives the new performance, while its audio track provides the audio heard in the final vub.
- If you want to use an audio file as Driving Data, we will use that audio to generate the performance.
Here are the requirements for each:
Audio: you need a clean dialogue recording of the new performance. Compared to video, you have a few more options depending on the use case:
- For localisation, you can use the new recording from the localisation artist.
- For visual ADR, you could use a recording from the voice-over booth or grab the audio from your rushes.
Video: the video file needs to show the actor delivering the new line; their face and mouth must be clearly in view and must not be occluded at any point in the video.
In the example below, the video driving data can be used successfully to vub the blonde actress.
IMPORTANT: Make sure your Driving Data has the same frame count and frame rate as your Source Media. If it is an audio file, make sure it matches the duration of your Source Media clip exactly in your non-linear editor.
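If you would like to double-check the match outside your NLE, the snippet below is a minimal sketch of one way to compare frame rate and frame count. It is not part of DeepEditor; it assumes Python with OpenCV installed, and the file names are placeholders.

```python
# Minimal sketch (not part of DeepEditor): compare the frame rate and frame
# count of a driving data video against the source before uploading.
# Assumes OpenCV is installed (pip install opencv-python).
import cv2

def probe(path):
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open {path}")
    fps = cap.get(cv2.CAP_PROP_FPS)
    # Frame counts read from container metadata can be approximate for some
    # codecs; verify against your NLE if the numbers look suspicious.
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    return fps, frames

src_fps, src_frames = probe("source_clip.mov")       # placeholder path
drv_fps, drv_frames = probe("driving_data.mov")      # placeholder path

print(f"Source:  {src_fps:.3f} fps, {src_frames} frames")
print(f"Driving: {drv_fps:.3f} fps, {drv_frames} frames")

if abs(src_fps - drv_fps) > 0.001 or src_frames != drv_frames:
    print("Mismatch: fix the driving data in your NLE before uploading.")
else:
    print("Frame rate and frame count match.")
```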
TIP: If you do not have a clear visual of the actor for your driving data, export the audio from the video file and use that as your driving data instead.
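One way to do this outside your NLE is to strip the audio track out of the video with ffmpeg. The snippet below is a minimal sketch, not a DeepEditor feature; it assumes ffmpeg is installed and on your PATH, and the file names, codec, and sample rate are placeholders to adjust to your project.

```python
# Minimal sketch: export the audio track of a driving data video as a WAV file
# so the audio alone can be used as driving data. Requires ffmpeg on PATH.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "driving_data.mov",   # filmed performance with usable audio (placeholder)
        "-vn",                      # drop the video stream
        "-acodec", "pcm_s16le",     # uncompressed 16-bit PCM
        "-ar", "48000",             # 48 kHz, common in film/TV workflows
        "driving_data_audio.wav",   # output file (placeholder)
    ],
    check=True,
)
```

You can also export the audio directly from your NLE; what matters is that the resulting audio file lines up exactly with your Source Media clip.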
Using a Driving Data Video with Audio: Choosing Between Facial and Audio Performance
When uploading a driving data file, it's essential that the file includes audio. The audio represents the new dialogue; it is the foundation for generating the visual dub.
If your driving data file is a filmed performance of the actor delivering that new dialogue, you have two options for how DeepEditor can use it when generating a vub:
Option 1: Use Audio Only
DeepEditor will analyze the audio track in the uploaded driving data and generate a new facial performance based on the sync, emotion, and phrasing of the dialogue.
Use this if you want DeepEditor to generate a new facial performance based on the audio only.
Option 2: Use Video Track
DeepEditor will copy the actor's facial performance, including mouth shapes, expressions, and timing, directly from the face track in the uploaded video. These expressions will then be applied to the face in the source shot.
Use this if the filmed driving data includes a great facial performance you'd like to replicate exactly in the final vub.
See “Supported Media Formats” for more information.
Uploading and Managing Driving Data
Driving data is uploaded in the shot media library (the Media tab). You can drag and drop it into the box or click Add Driving Data.
Once the progress bar turns green and “Done” appears, DeepEditor will have assessed the metadata of your driving data and confirmed it is compatible with our supported formats.
If you get an error message, there is a problem with the media. Hover over the error to get more information about it and guidance on how to fix it.
The most common reason for a driving data file returning an error is that its frame count does not match the source.
When preparing a driving data video, keep the following in mind:
- The video does not need to be at online conform resolution; a lower-resolution proxy is perfectly fine.
- The lighting, makeup, or look of the actor in the driving data file does not need to match the source or training data.
- The actor's face must be fully in frame and unobstructed, with no occlusions such as hands, props, or hair covering it.
- The driving data file must have the same frame rate and frame count as the source video.
Please see Supported Media Formats and Common Issues and Solutions for more details.
Using Multiple Driving Data Files
Multiple driving data files can be uploaded to the shot media library.
This means you can generate multiple vubs from the same source, each with different dialogue being spoken.
This is useful for testing different line deliveries, creating multilingual versions, or generating alternate takes without needing reshoots.
NOTE: You will need to spend a token to generate a vub from each driving data file.
Preparing Your Media in an NLE Before Using DeepEditor
Before uploading your source and driving data to DeepEditor, we recommend working with your media in a non-linear editor (NLE) such as Avid, Premiere Pro, or DaVinci Resolve.
Why Use an NLE First?
NLEs provide essential tools for assessing the relationship between your source and driving data, ensuring smoother visual dubbing. Key benefits include:
- Playback & Scrubbing – Easily review footage in real-time and pinpoint key moments in the performance.
- Rushes Management – View all available takes and select the best source material.
- Dialogue Assessment – Compare the original performance with the new speech to ensure timing and delivery alignment.
By using an NLE first, you can assess your source and driving data selections before bringing them into DeepEditor, leading to better, more natural results. Here is a quick guide on how to do this:
- Assess your source
  - Identify the shot/character you want to vub.
  - Study the physical performance of the actor (head movement, eyes, etc.).
  - Know the frame count (duration) of the shot you are vubbing, and perhaps give yourself some handles if you're in the offline edit (see the sketch after this list for a quick way to work this out from timecodes).
  - Block out all other faces, so only the face of the actor you are vubbing is visible.
- Line up the driving data with the source
  - Line up your driving data audio on tracks underneath your source.
  - Solo the driving data audio and cover the bottom half of the actor's face in the source with your hand. Loop the playback and see if you're convinced the actor is saying the new dialogue.
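As a quick illustration of the frame count step above, the sketch below converts in/out timecodes to a frame count and adds handles. It is only an example, not part of DeepEditor; it assumes non-drop-frame timecode at a whole-number frame rate, and all values are placeholders.

```python
# Minimal sketch: work out the frame count of a shot (plus optional handles)
# from its in/out timecodes. Assumes non-drop-frame timecode at a whole-number
# frame rate; all values below are placeholders.
def tc_to_frames(tc, fps):
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

fps = 24
in_tc, out_tc = "01:02:10:00", "01:02:14:12"
handles = 12  # extra frames on each side for the offline edit

duration = tc_to_frames(out_tc, fps) - tc_to_frames(in_tc, fps)
print(f"Shot duration: {duration} frames")
print(f"With handles:  {duration + 2 * handles} frames")
```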
Once you are satisfied that the new performance works well, export your driving data file(s) and upload them to DeepEditor.