As I understand it, Premiere Pro handles color and luminance in a variety of ways depending on what you are doing with the video.
For example, with cuts-only editing, P Pro upsamples YUV (Y Cb Cr) video from its native chroma subsampling (4:2:2, 4:1:1, 4:2:0) to 4:4:4. It then converts that on-the-fly to RGB for display on computer monitors but retains the upsampled YUV for actual editing.
That said, when applying non-YUV-tagged video effects or transitions, the YUV video is converted to RGB, the effects or transitions are applied, and then it is converted back to YUV, losing some quality along the way. Correct? When working with YUV-tagged effects, there is no intermediate conversion to RGB, which is a good thing.
But beyond that, does all video work in the YCbCr color space? What happens when transcoding? Is there an optimum workflow?
Also, what about the YCbCr Parade scope? The YC Waveform, RGB Parade and Vectorscope are all intuitive, largely because P Pro's display and effects are RGB-oriented (despite the numerous effects with YUV tags). So, what is the best use and approach when working with the YCbCr Parade scope?
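To make the upsampling step concrete, here is a toy sketch of nearest-neighbor 4:2:0 → 4:4:4 chroma replication. This is only an illustration of the idea; Premiere's actual resampler is not public and likely interpolates rather than replicates:

```python
def upsample_420_to_444(cb, cr):
    """Toy nearest-neighbor chroma upsampling: replicate each 4:2:0
    chroma sample over the 2x2 luma block it covers. Real resamplers
    typically interpolate instead of replicating."""
    def expand(plane):
        out = []
        for row in plane:
            wide = [v for v in row for _ in (0, 1)]  # double the width
            out.append(wide)
            out.append(list(wide))                   # double the height
        return out
    return expand(cb), expand(cr)
```

After this step every pixel has its own Cb and Cr sample, which is what lets per-pixel operations (effects, RGB conversion for display) run without caring about the original subsampling.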
Your understanding is mostly correct; a few notes:
-We do support importing and preserving YUV data in its native format throughout our rendering pipeline and many effects.
-We will only upsample to 4:4:4 when we need to process the data in that form. If you have 4:2:2 data at the start, are exporting to a format with similar requirements, and have nothing in between, we will generally keep the data as 4:2:2.
-Yes, effects and transitions without the YUV badge need to convert to RGB to process, then we go back to YUV if any further processing is needed. There is no intermediate conversion to RGB for YUV-badged effects. Effects & transitions that work in 32-bit float but not YUV are also important, as 32-bit float RGB generally has the precision and range to cover all 8-bit YUV values. Note that all CUDA-accelerated effects are always 32-bit float, even where the software equivalent is not (e.g., Crop).
-During transcoding we generally base the working format on the export type. If the exporter supports YUV the render will take place in YUV. A few formats like JPEG use RGB data for the render as that is what they support.
-Scopes are generated using YUV data and so will include detail that would be lost in RGB.
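To illustrate why some YUV detail cannot survive an 8-bit RGB representation, here is a rough sketch of limited-range YCbCr → RGB conversion using approximate BT.601 coefficients. This is not Premiere's actual conversion code, just the standard textbook math:

```python
def ycbcr_to_rgb_601(y, cb, cr, clamp=True):
    """Approximate BT.601 limited-range YCbCr -> 8-bit RGB.
    Some legal YCbCr triples map outside 0-255; clamping them to the
    8-bit range is one way a round trip through RGB loses information."""
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    if clamp:
        r, g, b = (min(255, max(0, round(v))) for v in (r, g, b))
    return r, g, b
```

For example, peak luma with maximum Cr (Y=235, Cb=128, Cr=240) produces an unclamped red value well above 255, so 8-bit RGB has to clip it; 32-bit float RGB can carry the out-of-range value through unharmed, which is why the float pipeline matters.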
Thanks for clearing this up. I have a few follow-up questions:
1) Would you clarify the conversion process a bit? Does the YUV-RGB and RGB-YUV conversion happen only during export transcoding? What happens during playback from the timeline or from rendered sequence segments? On-the-fly conversion to RGB?
2) If a non-YUV effect or transition is used anywhere in a sequence, is the entire sequence converted to RGB during transcoding?
3) What is lost during the conversion from YUV to RGB and back to YUV?
4) I believe all video effects work in the RGB color space. Correct? Or at least their controls are RGB-oriented, in that none refer to Cb or Cr. So, what value does the YCbCr Parade scope provide? Would you explain a color correction process that uses the YCbCr scope?
1) Generally the same color processing occurs during playback as during export. Where possible we process and preserve YUV data all the way until display, where we convert on the fly.
2) Only the track items containing non-YUV effects will be converted to RGB, and only for those effects.
3) Information can be lost in two ways. First, some YUV values will map to negative RGB values or RGB values greater than 255, and we will clamp to those limits. Second, a given YUV pixel value may map to a fractional RGB pixel value, at which point we have to round to the nearest value. 32-bit floating point RGB does not have either of these problems.
4) Many effects work directly in YUV. If you are looking for an example of chroma showing up in an effect's UI, Video Limiter would be one example. How we present controls for an effect is not always directly correlated to how it is rendered. Using the scopes to monitor YUV values has several purposes. Most output formats are still YUV, so you will be monitoring what your output will be. Also, depending on your target, ensuring broadcast-legal values may be important. Further, if you are trying to adjust something like the brightness of a scene, the luma channel may be the most important value to measure.
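The two loss mechanisms from answer 3 can be seen with a small experiment: sweep legal YCbCr values through an 8-bit conversion (again using approximate BT.601 coefficients, not Premiere's actual code) and count how many distinct RGB pixels come out the other side:

```python
def ycbcr_to_rgb8(y, cb, cr):
    # Approximate BT.601 limited-range YCbCr -> clamped 8-bit RGB
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

# Sweep every legal Cb value at dark luma: clamping and rounding make
# several distinct YCbCr pixels collapse to the same 8-bit RGB pixel,
# so a round trip back to YCbCr cannot recover them all.
inputs = [(16, cb, 128) for cb in range(16, 241)]
outputs = {ycbcr_to_rgb8(*p) for p in inputs}
print(len(inputs), len(outputs))  # fewer distinct RGB pixels than inputs
```

This is also a reason to keep an eye on the YCbCr Parade: it shows the values your YUV output will actually carry, rather than their (possibly clipped) RGB rendering.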