For what purpose?
I don't think that I worded my question very well.
I have a source video that is 1.2 GB for a 50 second clip that I am trying to share with friends. It looks great (obviously), but that's not going to work as it's FAR too large. I want to achieve decent compression on the video; however, all of the codecs that I am using render in studio luma (16-235) instead of computer/digital luma (0-255). This is making my video look washed out, which is bugging the hell out of me. I am looking for a way to render out to a well-supported container (AVI, MPEG-2, MOV, etc.) with a codec that supports the computer luma range. It would be even better if it supported 4:2:2 chroma subsampling, but for right now the most important thing for my sanity and the health of my computer (man, that window is looking awfully enticing right now) is to figure out how to render out with the computer luma range.
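The washed-out look is exactly what happens when full-range (0-255) pixels get squeezed into the studio range (16-235) and are then displayed as if they were still full range. A minimal sketch of that remap (this is the standard BT.601/709 scaling, not anything Premiere-specific):

```python
def full_to_studio(v):
    """Map a full-range (0-255) luma value into studio range (16-235)."""
    return round(16 + v * 219 / 255)

# Pure black and pure white no longer reach the ends of the scale:
print(full_to_studio(0))    # -> 16  (black is lifted: looks grey / washed out)
print(full_to_studio(255))  # -> 235 (white is dimmed)
print(full_to_studio(128))  # -> 126
```

A display expecting full-range values shows that lifted 16 as dark grey instead of black, which is the "washed out" effect being described.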
I also have AE as well just in case that turns out to be a necessary program in order to be able to do this.
Thank you for your help
Here are some pretty graphs to illustrate my point
YC Waveform Luma before render:
YC Waveform Luma After render:
YC Waveform Chroma before render:
YC Waveform Chroma after render:
YCbCr Parade before render:
YCbCr Parade after render:
Thank you for all of your help
(I didn't embed the images because they were messing up the way the forum was displaying for me. I can embed them if it is causing a problem for anyone.)
Boy, I'm really confused.
You want to play a video and then 'screen capture' it in the hope of making a video with just as good (or better) colour yet a smaller size?
If the files you are exporting are too large then you need to do one of two things.... or possibly both.
1) Reduce the dimensions of the video - so instead of 1080p send 720p or even 540p etc.
2) Use a higher compression ratio. Use H264 / MP4 for probably the best compression ratio vs quality of video.
4:2:2 is large. If you want to distribute smaller file sizes then you need to go the 4:2:0 route.
4:2:0 vs 4:2:2 has nothing to do with 16-235 / broadcast safe. It's merely the colour sampling system.
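To put numbers on the size difference between the sampling schemes, here are uncompressed 8-bit frame sizes (codec compression changes the absolute numbers, but the ratios hold):

```python
def frame_bytes(width, height, scheme):
    """Uncompressed 8-bit YCbCr frame size for a given chroma subsampling."""
    # 1 luma sample per pixel, plus Cb + Cr at the subsampled resolution
    chroma_factor = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[scheme]
    return int(width * height * (1 + chroma_factor))

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, frame_bytes(1920, 1080, scheme) / 1e6, "MB per frame")
```

So 4:2:0 carries half the raw data of 4:4:4 and three quarters of 4:2:2, independent of the luma range question.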
50 second clips exported at 5 Mbps are quite small, even at 720p.
Perhaps you could give us a size target to hit and then maybe people can give you some compression suggestions.... ?
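For picking a target, the size of a constant-bitrate export is simple arithmetic (audio and container overhead ignored):

```python
def export_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes for a constant-bitrate export."""
    return bitrate_mbps * seconds / 8  # megabits -> megabytes

print(export_size_mb(5, 50))  # the 5 Mbps / 50 s example above: ~31 MB
```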
I'm sorry, I'm not screen capturing a video. I'm demonstrating something in a game and then using FRAPS to capture the footage. Using that footage, I am trying to edit it and make a short video. FRAPS captures everything that is happening on your screen, so the native luma scale in the video is 0-255, unlike video from camcorders, which is mostly 16-235. I can deal with having worse chroma subsampling at 4:2:0, since it doesn't look like there are a lot of free options for 4:2:2 in a widely used container.
I agree that H.264 is a great codec with magnificent compression rates, but I can't figure out how to make it render outside of broadcast-safe limits, in the 0-255 range. So I guess, after lots of reading, my question has turned into: how can I render a video out of Premiere that uses 0-255 as its luma range?
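For reference, the expansion that a full-range output has to apply (or that you'd apply to fix an already-squeezed clip) is just the inverse of the standard studio-range scaling. A sketch of the math only, not of Premiere's actual pipeline:

```python
def studio_to_full(v):
    """Expand studio-range luma (16-235) back to full range (0-255), clamped."""
    out = round((v - 16) * 255 / 219)
    return max(0, min(255, out))

print(studio_to_full(16))   # -> 0   (studio black back to true black)
print(studio_to_full(235))  # -> 255 (studio white back to true white)
```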
It is entirely possible that I am asking my question wrong, but I hope that the YC waveform and YCbCr parade images can help someone with a lot more knowledge on the subject point me in the right direction if I'm looking down the wrong path.
No, you are asking the right question, and I understand your problem. I just can't recreate it.
I just took a really wild Quicktime video I downloaded from Videoblocks.com that isn't anywhere close to being broadcast safe, and I exported it to a Windows Media file.
The first picture is a screenshot from the video. (It is a set of lower thirds in one video - I am just supposed to use one at a time, of course.)
The second picture is the Reference monitor from frame 12;24
I exported the sequence to a Windows Media file and then imported it, putting it right on top of the other video on the sequence so I could make sure to be on the exact same frame. The third picture is the reference monitor from the WMV file on the sequence. As you can see, there is not much difference. Some, but you have to look closely.
So, my question to you, since I do not use FRAPS, is: what are your sequence settings? Did you just drop a clip on the New button to create a new sequence with the exact right settings? And are you making the mistake of using the renders to help you export?
The problem is that I don't understand your source material. I guess I could download FRAPS from somewhere and give it a try? I am just a bit busy with other things today.
If you want 4:2:2 color space then just use mpeg-2 and set the profile setting to 4:2:2.
I think I may have kinda figured out what was going on. I think that the codecs I was using were messing with my luma values. I forgot to mention that I was using the MPEG-2 codec with the 4:2:2 color space yesterday. I think it was "spreading" my luma values out (I have no idea what the proper term is for it - I told you, very new to editing). You can kinda see what I'm talking about in the comparison of the YCbCr parades and YC waveforms that I posted yesterday, with how there are fewer defined areas and more "mush" (again, no idea what the proper terminology is).
I rendered out with the H.264 codec and the luma values look great. They are almost identical to the source. The chroma values are obviously different, but that's a separate issue.
This does bring up several more questions for me. How can you tell the 0-255 values from the IRE values on the graphs? I know that NTSC is 7.5-100 IRE, but does that mean that it equates to 16-235, or can you not get those values from the different graphs?
The next thing I'm wondering is whether the range of the signal components on the right side of the graphs represents the range of signals actually present, or whether it represents the maximum range defined for the entire video clip. I'm wondering because the range has actually INCREASED for the MPEG-2 render, which is mystifying to me.
I made some tests with bars - black, gray and white - which read as 0 IRE, 50 IRE and 100 IRE. I exported a PNG and took that into Photoshop. The colors were confirmed as 0, 128 and 255 RGB.
I then exported the test chart out to H.264 Blu-ray, H.264, MPEG2 Blu-ray, MXF OP1a and F4V. In all cases, the luma values were unaltered.
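The IRE-to-code-value relationship depends on which convention the scope is using; a sketch of both (in the full-range case 0/50/100 IRE map to 0/128/255, which matches the readings above, while in studio range 0 and 100 IRE land on 16 and 235):

```python
def ire_to_code(ire, full_range=True):
    """Map an IRE level (0-100) to an 8-bit code value."""
    if full_range:
        return round(ire / 100 * 255)    # 0 IRE -> 0, 100 IRE -> 255
    return round(16 + ire / 100 * 219)   # 0 IRE -> 16, 100 IRE -> 235

print(ire_to_code(50))                     # -> 128, the gray bar above
print(ire_to_code(100, full_range=False))  # -> 235, broadcast white
```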
What did your vectorscope look like before and after?
I rendered out a set of HD bars with the H.264 codec using the HD 1080p 29.97 preset, and my vectorscopes looked like this:
Is this just a normal consequence of changing color space and chroma subsampling, or do I have something funky going on?
Sorry, my sequence settings are as follows:
Editing mode: Custom
Frame size: 1600x900
Pixel aspect ratio: Square
Fields: Progressive scan
Display format: 30 FPS timecode
Audio sample rate: 44100 Hz
Audio display format: Audio samples
Preview file format: I-frame only MPEG
Preview codec: MPEG I-frame
Preview width/height: 1600x900
I see the same. My guess is that the color bars generated by PP are RGB, 4:4:4, 32-bit floating point color depth, and that the conversion to YUV 4:2:2 or 4:2:0 during export generates errors. This was confirmed by exporting out a UT Video version in RGB space, as well as a PNG. Both showed very little difference in the vectorscope, and I suspect that slight difference may be due to the limits of 8-bit color. If UT allowed 10- or 12-bit color, I suspect there'd have been no difference at all.
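The 8-bit rounding explanation is easy to demonstrate: converting 8-bit RGB to 8-bit YCbCr and back rarely returns exactly the starting values. A sketch using the BT.709 full-range matrix as an illustration (Premiere's internal math is 32-bit float, so its errors would be smaller):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.709 full-range RGB -> YCbCr, quantized to 8 bits."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556 + 128
    cr = (r - y) / 1.5748 + 128
    return round(y), round(cb), round(cr)

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, again quantized and clamped to 8 bits."""
    r = y + 1.5748 * (cr - 128)
    b = y + 1.8556 * (cb - 128)
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

# Round-tripping a saturated color picks up small errors from the 8-bit step:
original = (200, 30, 60)
print(ycbcr_to_rgb(*rgb_to_ycbcr(*original)))  # off by a count or two in G/B
```

Those one- or two-count errors are invisible on screen but show up as slight smearing on a vectorscope, which matches what the bars test showed.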