Why would I want to use 48- or 64-bit depths while rendering when the max bit depth with filters is 32 (PP is using 32-bit floating point)?
RGBA has four channels and each channel is usually 8 bits deep, so 32 bits in total. Since video export is without the alpha channel, that normally means 24-bit export. If you have 10-bit depth in your source material (which means at least 4:2:2 material, only found on pro cameras) AND you use an AJA or similar card for ingest, then it makes sense to keep everything at 10 bits per channel.
If your source material is 8 bits per channel (all consumer cameras, HDV, AVCHD, etc.), there is no benefit in padding the material with a couple of zeros, since it will not improve quality at all; it will only increase the processing load. So don't use maximum bit depth.
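A minimal sketch of the point above: promoting 8-bit samples to 10 bits just pads each value with zero bits, so the number of distinct levels stays at 256 and no quality is gained.

```python
# Every possible 8-bit level, padded up to 10 bits by shifting left two bits.
# This is what "adding a couple of zeros" means: the value range grows,
# but the number of distinct levels does not.
eight_bit_levels = list(range(256))
ten_bit_levels = [v << 2 for v in eight_bit_levels]

print(len(set(ten_bit_levels)))  # still only 256 distinct levels
print(max(ten_bit_levels))       # 1020, not the full 10-bit maximum of 1023
```

The padded values never even reach 1023, and the gaps between adjacent levels are exactly where banding lives: no new information appears, only a heavier processing load.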
I assume the 48- and 64-bit settings are for 4:4:4 material like RED 4K and EPIC 5K, but it is an Avid-specific option and I'm not yet familiar with its use.
The source material I'm working with is 12-bit 4K RED .R3D files.
So I think that means I should use at least 32-bit export, and the master file should be 10-bit.
Do you think there would be a significant visual improvement in the master file if I used 48 or 64 bit?
Because if there is only a slight improvement in quality, I will have to use 32-bit, since the machine I'm working on right now isn't that fast.
Thank you for your help.
For 12 bit RED material I would opt for 48 bit, but then the question arises, what is your delivery format? If it is only BD, then there is no sense in keeping that high a quality, since you will lose it all on delivery.
I will deliver it as a .mov file encoded with 10-bit DNxHD (the customer wants it to be 10-bit). And it will be shown on TV and on YouTube.
So are you saying that there are really no practical quality differences between 32- and 48-bit rendering IF the delivery format will be 10-bit and it will be shown on TV and YouTube?
YouTube is crap, so anything more than 8 bit depth is down-the-drain. They convert your submissions anyway. TV is equally limited to maybe 7.5 Mbps bandwidth and 8 bit depth, at least as far as broadcast over cable is concerned.
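A quick sketch of why the extra precision is "down the drain" at delivery: converting a 12-bit master to an 8-bit format collapses 4096 levels into 256, so every 16 adjacent 12-bit codes land on the same 8-bit value.

```python
# All possible 12-bit levels (0..4095), truncated to 8 bits by dropping
# the four least significant bits -- a simple model of 12->8 bit delivery.
twelve_bit_levels = range(4096)
delivered = {v >> 4 for v in twelve_bit_levels}

print(len(delivered))  # only 256 distinct levels survive the conversion
```

Whatever precision the master carried above 8 bits, the broadcast or YouTube viewer only ever sees those 256 levels per channel.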
OK, so there is no use in higher bit depths if the end format is no more than 8-bit.
But does 32- or 48-bit rendering make the footage any better?
Should I render with the highest bit depth to get the best result, even if the end format will be 8-bit? Will there be any practical difference?
You are confusing an 8-bit end format with a 10-bit end format.