
What is a Vub?

Understand the basics of vubbing.

A Visual Dub - a vub - is a visual adjustment to an actor's face that synchronises their mouth movements to new dialogue or an alternative performance.

Solving Your Editorial Problems

When new or replacement dialogue is required during an edit, it is typically to refine the story – to add clarity, make the story more efficient, or to add a new plot thread. The best way for an audience to absorb a line of dialogue is to see the actor say that line. However, traditional methods of applying new dialogue take one of two forms:

  • The editor must cut elsewhere to hide the person who is saying the new line, or:
  • The scene must be reshot so the actor is seen saying the line.

Both methods are unsatisfactory, either creatively or financially. Vubbing solves this problem. Dialogue and performances can be replaced, in-vision, using existing footage, delivered with the style and intentions of the actor themselves.


Assistive AI vs. Generative AI

DeepEditor is an Assistive AI visual dubbing tool - a distinction that is central to how Flawless approaches technology.

Generative AI creates new content from scratch, often synthesising faces, voices, or performances that did not previously exist. Creative control is lost, and the ethical foundations of the source data are typically neither traceable nor provable.

Assistive AI works with existing, real footage. It enhances and refines what is already there, with the artist retaining full creative control. Source data is legitimately obtained and traceable, and artist consent requirements are honoured. The creative process is enhanced, not diminished.

With DeepEditor, no new performance is invented. The actor's original footage remains the foundation - the tool simply enables that footage to be synchronised to new dialogue with precision and speed that would not otherwise be possible.


How It Works

A vub is generated in DeepEditor - available as both a web app and an Avid Extension - and two pieces of media are required:

  • Source media - the original shot containing the performance to be modified;
  • Driving Data - the new dialogue or performance that the Source media will be synced to.

Driving Data can be provided in two forms, each producing a different result:

  • Audio only - DeepEditor syncs the mouth movements to the new speech content while retaining the actor's original emotional performance;
  • Video and audio - DeepEditor replicates both the mouth movements and the emotional performance of the actor in the Driving Data.

Because the Driving Data is always derived from the actor themselves - either their audio or video - the creative output remains grounded in the artist's own performance. DeepEditor analyses the Driving Data and applies the corresponding expressions and mouth movements to the Source media to produce the vub.
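The two Driving Data modes above can be summarised as a simple decision: audio-only input keeps the Source performance's emotion, while video-plus-audio input takes both sync and emotion from the Driving Data. The sketch below models that decision only; DeepEditor's actual interface is not public, so every name here is hypothetical and purely illustrative.

```python
# Hypothetical sketch of the Driving Data modes described above.
# DeepEditor exposes no public API; all names are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DrivingData:
    audio: str                   # path to the new dialogue recording
    video: Optional[str] = None  # optional video of the actor's new take


def vub_sources(source_media: str, driving: DrivingData) -> dict:
    """Describe where each aspect of the vub output would come from."""
    if driving.video is not None:
        # Video + audio: mouth movements AND emotional performance
        # are replicated from the Driving Data.
        return {"mouth": "driving", "emotion": "driving"}
    # Audio only: mouth movements follow the new speech, while the
    # emotional performance is retained from the Source media.
    return {"mouth": "driving", "emotion": "source"}
```

For example, supplying only an ADR recording would yield `{"mouth": "driving", "emotion": "source"}`, matching the audio-only behaviour described above.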


Key Characteristics

  • Adjustments are applied to the area of the face below the eyes – the cheeks, the nostrils, the mouth, the jaw and the upper part of the neck.
  • Output supports resolutions up to 8K, with full ACES color workflow and 16-bit color depth, ensuring a technical quality suitable for both television and cinema.
  • Lip sync can be manually adjusted using the Refinement tool to ensure the user can achieve the result they want consistently and precisely.
  • A VFX turnover package is also available for Final Quality output, enabling image-quality adjustments – if desired – in traditional software such as Nuke. We understand the desire for perfection, and in the rare case that the output contains any artifacts, this allows you to go that final step.

Read Next: Use Cases