Media and Vubs FAQs
Frequently Asked Questions about Media and Vubs.
Table of Contents
What is a vub?
What are the vub quality levels available in DeepEditor, and how should I use them?
How long does each step in the vub creation process take?
How long does it take for a vub to be generated?
Does the audio in the Source media matter?
What is the difference between using Video Driving Data and Audio Driving Data?
How do I mute a character and stop them from saying a line?
What happens if the driving data I upload is shorter or longer than my source media?
Should I upload my media before or after color grading / applying a LUT?
How do I stop a vub from generating?
My render has a glitch at the edge of the face mesh. Is there anything I can do to fix it?
What is a vub?
A "vub" refers to a visual dub or video modification created using assistive AI technology. It is the output you get when you combine Source Media and Driving Data.
Elements like dialogue, facial expressions, or even whole performances can be altered or substituted while preserving a high degree of realism.
What are the vub quality levels available in DeepEditor, and how should I use them?
Please read DeepEditor Vub Quality Levels for full details of each vub quality and how to use it.
How long does each step in the vub creation process take?
- Uploading Source, Driving Data, and optional Training Data: on average, less than a minute. The length and size of your files may affect this time.
- Generating a vub: between 1 and 12 hours, depending on the vub quality you selected, the length of the media, and whether you added training data. If you haven't received a notification e-mail confirming completion and it has taken longer than a day, check the queue to see if the vub failed. See Common Issues and Solutions for more information.
- Downloading/Reviewing your new file: your vub will reach your downloads folder in less than a minute for review or conform.
- Rendering (in the Refinement tool): rendering a change to your vub varies with the number of changes and the length of the clip. On average, rendering takes about 20 minutes.
How long does it take for a vub to be generated?
This depends on a number of factors, such as the resolution and length of your media and whether you included training data. Here are the average times it currently takes to generate a vub:
| Quality Level | Average Time to Generate |
| --- | --- |
| Draft | 1-2 hours |
| Draft with Training | 4-11 hours |
| Final | 2-4 hours |
| Final with Training | 4-11 hours |
If you are concerned about how long it is taking for your vub to be generated, please contact us at support@flawless.app.
Does the audio in the Source media matter?
No. The audio in the Source media will not affect the generated vub. Only the audio in the Driving Data will affect your vub.
What is the difference between using Video Driving Data and Audio Driving Data?
If you upload a video file as Driving Data and it contains both video and audio, DeepEditor uses both streams rather than picking one.
The video drives the new performance: DeepEditor preserves as much of the actor's performance in that video as possible, effectively "copying and pasting" it onto the source media. The audio from the driving video becomes the audio heard in the final vub.
If you upload audio as Driving Data, we'll use that audio to generate the performance.
How do I mute a character and stop them from saying a line?
The results of your vub will directly correspond to what audio is present in the driving data you use.
Upload driving data that is silent during the moments you wish to mute a character; the resulting vub will hold a neutral expression while retaining the shot's average emotional expression.
If mouth movements remain where the driving data has no spoken dialogue, further mute the performance by setting the driving data scale to zero. Read more about using the driving data scale feature in the Refinement tool.
What happens if the driving data I upload is shorter or longer than my source media?
- If the driving data is longer than your source media, DeepEditor will truncate your driving data so it ends at the same frame as the source.
- If the driving data is shorter than your source media, DeepEditor will add silent audio up to the last frame of your source media and will hold the mouth position from the last frame of your driving data until the end of your source media.
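The truncate-and-pad behavior above can be sketched in Python. This is purely an illustrative model, not DeepEditor's implementation: the `align_driving_data` function and the frame representation are assumptions, with repeating the final frame standing in for "hold the last mouth position over silent audio".

```python
def align_driving_data(source_frames, driving_frames):
    """Illustrative sketch: make driving data match the source length.

    Mirrors the behavior described above; not DeepEditor's actual code.
    """
    n = len(source_frames)
    if len(driving_frames) >= n:
        # Longer driving data is truncated so it ends at the same
        # frame as the source.
        return driving_frames[:n]
    # Shorter driving data is padded to the source's last frame.
    # Repeating the final frame models holding the last mouth position
    # (in DeepEditor, silent audio is also added for this stretch).
    padding = [driving_frames[-1]] * (n - len(driving_frames))
    return driving_frames + padding
```

For example, three frames of driving data against a five-frame source would be extended by repeating the third frame twice, while five frames against a three-frame source would be cut to the first three.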
Should I upload my media before or after color grading / applying a LUT?
Files can be uploaded to DeepEditor before or after color grading or applying a LUT. However, we recommend uploading ungraded files when possible, as any color decisions made beforehand will be baked into the output renders.
How do I stop a vub from generating?
Currently, there is no way to stop the process once you've clicked “Generate Vub”.
My render has a glitch at the edge of the face mesh. Is there anything I can do to fix it?
In rare cases, you may notice minor visual artifacts in a vub export, such as subtle edge issues around the face mesh. There are two recommended approaches for addressing this, depending on the severity and nature of the artifact.
Option 1: Adjust within the Refinement Session
For very minor artifacts, it can be worth revisiting the refinement session in DeepEditor.
Introducing a small amount of interpolation can allow a hint of the original plate to blend back into the result, which may help smooth fine edge detail or resolve subtle transitions.
This approach is best suited to light, localized artifacts and can often be addressed directly within the product.
Option 2: Use the VFX Turnover Package
If the artifact persists, or if finer pixel-level control is required, we recommend using the VFX turnover package included with final vub outputs.
This package is designed for 2D compositing workflows, most commonly in Nuke, and allows a visual effects artist to perform targeted beauty work, edge cleanup, or pixel-level refinements. This mirrors standard post-production practices and gives teams full control over final image polish.
While DeepEditor delivers high-quality results, no AI-based solution can be considered absolutely pixel-perfect in every scenario. For this reason, we intentionally provide both in-product refinement tools and a professional VFX handoff, ensuring you always have a path to achieve final-frame quality.
If you believe the artifact is unexpected or persistent, please contact support@flawless.app and our team will be happy to investigate further.
For any other issues, please contact us at support@flawless.app.