
"Use Maximum Render Quality" for YouTube?

Contributor, Jan 11, 2017


Should my YouTube-destined Premiere exports "use maximum render quality"? Thnx!

29.7K views

Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting. Learn more
community guidelines

1 Correct answer

LEGEND, Jan 11, 2017


If you have CUDA turned on, then no.

If you have CUDA turned off but aren't scaling any media, then still no.

Enthusiast, Jul 24, 2017


Jim_Simon  wrote

If you have CUDA turned on, then no.

If you have CUDA turned off but aren't scaling any media, then still no.

Hi Jim ... I've been reading up on the render quality options and came across your answer here, as well as an answer from 2010 in the following post: Re: "Maximum Render Quality" Better to turn it OFF when using CUDA MPE?

That other post is old, and I'm not sure how applicable it is these days; knowing that might clarify your answer in this thread.

In that older thread, it seems that in some cases maximum render quality (MRQ) can benefit render operations that still use the CPU despite the GPU being present. Does this no longer apply, given advances in tech since that old 2010 post?

I ask because your answer seems to imply that in the presence of CUDA there is no need to check the MRQ box, whereas that old post seems to imply there are cases for checking MRQ despite a GPU being present... cases which Adobe was apparently interested in diminishing over time. Perhaps they've been diminished to the point where the exceptions to the rule no longer apply.

A general layperson's test that old thread suggests is to render a complex portion of a timeline twice, once with and once without MRQ. If the MRQ render yields a longer encode time with a GPU present, that old thread takes the longer time to mean the CPU is being used because MRQ is checked... which supposedly indicates additional work of value is being done on the CPU despite the GPU's presence.

I'm guessing that way of seeing things may no longer apply, or that you know MRQ has specifically no benefit for YouTube as a destination. Just curious which... Thanks!

Community Expert, Jul 25, 2017


That thread is referring to CS5; a lot has changed since.

Enthusiast, Jul 25, 2017


https://forums.adobe.com/people/Ann+Bens  wrote

That thread is referring to CS5; a lot has changed since.

... but do we have any details on the specific meaning of MRQ when one has a GPU, in terms of quality gain? Back then, that thread seemed to exist to fill in details not in the docs... things someone close to the teams would have known about. Is there a current blog or tech note to help navigate MRQ and max bit depth? I sort of get max bit depth at render time, but it would still be nice to hear more about both options from Adobe... most especially MRQ. I mean, why even have it if it made zero difference when using a GPU supporting CUDA?

I've looked at some videos and tutorials online... some are old, so I'm not sure of their relevance... other tutorials say to use MRQ to get better quality... which isn't saying much.

... and if you're uncertain of any online materials, I'm curious: do you export to H.264, and if so, do you ever select MRQ? Max bit depth? And how about 2-pass vs. 1-pass? (For finals... not talking about drafts here.) Thanks Ann!

LEGEND, Jul 25, 2017


For most work, MRQ and MBD aren't needed. Using them will slow down exports, and occasionally they may even cause problems. Both have only a few uses these days. From a very useful source ...

Maximum Render Quality

This is a high-quality resize operation that should be used when outputting to a different frame size from your sequence. It can reduce aliasing (jagged edges) when resizing images but is of no use when outputting to the same frame size. This operation significantly increases render times so only use it when resizing.
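The aliasing point can be illustrated with a toy NumPy sketch (an illustration only, not Premiere's actual resampler): downscaling a fine diagonal-stripe pattern by naive point-sampling aliases into a harsh new pattern, while averaging each block first (a crude low-pass filter, the kind of work a high-quality resize does) smooths it out.

```python
import numpy as np

# Toy sketch of why resize filter quality matters (not Premiere's resampler).
size, factor = 256, 4
y, x = np.mgrid[0:size, 0:size]
stripes = ((x + y) % 8 < 4).astype(float)  # fine diagonal stripes

# Naive decimation: keep every 4th pixel -- aliases into spurious contrast
naive = stripes[::factor, ::factor]

# Filtered decimation: average each 4x4 block before decimating
boxed = stripes.reshape(size // factor, factor,
                        size // factor, factor).mean(axis=(1, 3))

# Aliasing shows up as full-range contrast that isn't in the real image
print(naive.std(), boxed.std())  # filtered version has much lower contrast
```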

Render at Maximum Depth

This renders content at 32-bit color depth. Very few output formats actually support 32-bit color but processing at this depth can produce better quality for compositing and effects operations before being scaled back to the output format's bit depth. It can reduce or eliminate artifacts and banding in your video but that benefit comes at the cost of an increase in processing time, so only use it when completely necessary.

You may benefit from this option in the following situations:

  • Your source media has a higher bit depth than the format you are outputting to
  • Your sequence contains heavy compositing or lots of layered effects (particularly 32-bit color effects)
  • Your sequence contains very high contrast or very low contrast images (for example subtle gradients)
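The banding claim above can also be sketched numerically (an assumed toy pipeline, not Premiere's internals): push a subtle gradient through a strong contrast boost, once quantizing to 8-bit before the effect and once keeping float precision until output. Fewer distinct output levels means visible stair-step banding.

```python
import numpy as np

grad = np.linspace(0.45, 0.55, 1920)   # subtle gradient across one HD row

def contrast(v, gain=8.0):
    # steep contrast boost that stretches the gradient's 10% range to 80%
    return np.clip((v - 0.5) * gain + 0.5, 0.0, 1.0)

# 8-bit path: quantize before AND after the effect (coarse intermediate)
path8 = np.round(contrast(np.round(grad * 255) / 255) * 255)

# High-depth path: keep float through the effect, quantize once at output
pathf = np.round(contrast(grad) * 255)

# Fewer distinct levels in the 8-bit path = banding
print(len(np.unique(path8)), len(np.unique(pathf)))
```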

Although I do a modest amount of resizing on export, and at times go a bit nuts in Lumetri, I've not found either of them very useful (as far as improving an export) since about PrPro CC2014.

Multiple-pass exports are also something I've not bothered with since about CC2014. They just don't seem to do any better than a single pass these days.

I've tested exports using each of these imported back into the project, and looked at them with the Program monitor set to 200%, and couldn't see any improvements. I'd suggest if you're concerned, do the same tests yourself. Takes maybe half an hour to do a few quick exports  & review them.
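For those who want numbers alongside the 200% eyeball test, here is a hedged Python sketch: given the same frame pulled from each export as an 8-bit RGB array (the extraction step is up to you; the random frames below are just stand-ins), a PSNR above roughly 45 dB generally means no visible difference.

```python
import numpy as np

def psnr(a, b):
    # Peak signal-to-noise ratio between two 8-bit frames, in dB
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in
frame_b = frame_a.copy()
frame_b[0, 0, 0] ^= 1  # a single least-significant-bit difference

print(psnr(frame_a, frame_b))  # enormous PSNR: visually identical frames
```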

If something slows the entire processing chain down for results that are impossible to quantify, well ... it's only wasted time.

Neil

Enthusiast, Jul 25, 2017


https://forums.adobe.com/people/R+Neil+Haugen  wrote

For most work, MRQ and MBD aren't needed. Using them will slow down exports, and occasionally they may even cause problems. Both have only a few uses these days. [snip]

Thanks Neil... I am generally the sort of person who will do side-by-side tests to compare results for myself, but the nuanced info you have above really helps in making better choices along the way, whatever my informal findings. I could do (and actually have done) side-by-side tests with my usual DSLR source footage and draw conclusions from that, but it wouldn't give me the resizing and 32-bit effects hints (and more) reflected above... nor would it clue me in should I start using source footage with greater bit depth. Or... the one that really gets me... despite my tests showing no apparent benefit, I might use the settings anyway just in case my tests didn't cover the mysterious cases where there is one. Elaboration like you've presented removes that mystery and lets me get more mileage out of my own personal testing... rather than testing each project's specific cases, wasting time each time, instead of knowing what's under the hood. Having at least a hint helps, is all I'm saying. I wish this were clarified release-by-release in a readme or blog or something.

So I'm working on a project right now that has some source footage (video, not photos) that is HD or other ratios smaller or odder than 1080p (1080p being the output). Since those are scaled up to 1080p in the main timeline, would you recommend pre-rendering those to a mezzanine using MRQ, then pulling that into the main project? Those sections don't need much ongoing work from the original source, so taking care of MRQ up front with a pre-render seems like a good strategy for those "resize" operations... avoiding the MRQ penalty on the main project. Currently I've been going to CineForm with those but keeping the original scaling, then scaling the CineForm in the main project... this seems like a case for doing the scaling before the mezzanine (I tend to avoid over-processing at that stage, as I've gotten burned by doing too much, trying to be too efficient, before heading to mezzanine). What do you think?

Thanks for the 2-pass info... that was going to be my follow-up question, so I'll stay away from that for now. I'll also take some time to experiment with my particular footage... with some of the mystery removed, the tests themselves will be more meaningful. Thank you for the clarifications.

Enthusiast, Jul 25, 2017


Actually, a correction... I am scaling the smaller HD and odd-ratio older footage to 1080p when going to CineForm as an intermediate... but I did not use MRQ... I also held off in some cases on color correction until later use in the main project. From what you're saying, I think using MRQ for that pre-render to intermediate might be worth it, since the footage is being scaled and I can do it just once. I assume holding off on Lumetri until the main project that imports the intermediate is fine.

LEGEND, Jul 25, 2017


I've done some work with SD video (720x486) taking it to "HiDef 720" in 2015.4 and didn't see any benefit in MRQ for the job. So I left it off. "Off" is my natural setting for both MRQ and MBD unless I know I'll get something. I've done a bit with 10-bit media, working in Lumetri, and downsizing from 4k to 1080, besides going to 8-bit, and for that I did use MBD.

Your mileage will always vary. Testing takes a few minutes, and you know what you're getting.

Neil

Enthusiast, Jul 26, 2017


Makes great sense... thank you!

LEGEND, Jul 26, 2017


Once you get enough experience, you'll learn to build yourself quick tests of anything you're thinking of trying. One of the biggest problems people have is starting a big project with plans to do a bunch of things they haven't done before ... or at least, haven't done until they can do them in their sleep.

They get two hour-long sequences into the project, and ... for some reason it starts doing something they're not expecting. They've got two weeks into this now, and want ... well, perhaps, salvation? ... for all that work.

Sometimes "we" around here can help them through the project, and sometimes ... well, realistically ... the easiest way to get something useful is to start over, because you can't get anywhere from where they've boxed themselves in.

And after a few things blow up in your face ... when you've tried all the individual pieces that went into the project before, though never together in one project ... you learn that you need to test layering clips, graphics, and any other effects you'll possibly be using on top of each other. Just as the final project will "need".

Um ... laid out this way, when you try to export, it takes an hour of render-exporting time for every thirty seconds of sequence? Well ... maybe there's a different way to do this!

You really want to figure that out before you're two weeks and six days into a 3-week deadline project ... with a hard deadline!

That, speaking from experience, is not an enjoyable experience.

Neil

Enthusiast, Jul 27, 2017


https://forums.adobe.com/people/R+Neil+Haugen  wrote

... when you try to export, it takes an hour of render-exporting time for every thirty seconds of sequence? Well ... maybe there's a different way to do this!...

Neil

LoL ... 1 hour per 30 seconds... okay, I agree there...

...nothing was blowing up or anything in my case... I was just curious what the current perspective is on those options... it's immensely helpful to hear an experienced person such as yourself share the gist of how you see things. For me personally, hearing that sort of info never removes the need to test something out.

There are so many cases to test... I'm not an encoding expert, but I read somewhere that 2-pass can affect file size, which can affect streaming and quality in lower-bandwidth situations... that info was from some time ago... but if I test on my desktop, and don't have every device in the world, I have to stop testing at some boundary. I just think hearing experienced input on options like that can make testing smarter, so to speak... to say nothing of the value of just hearing what someone's general experience is.

Also, I just noticed something... I was using AME to encode and searched its docs, and came up empty beyond a generally unhelpful blurb (see below)... just now I ran across the Premiere sequence-settings docs for the same options, which seem to offer a better elaboration... wondering if maybe the AME docs could reference those or something. If you see the difference below, you'll understand why, after seeing the AME docs, I searched around here, saw that old clarifying thread from 5 years ago, and felt like asking about current wisdom... but I had not yet seen the Premiere sequence docs (below)...

From the Encode and export video and audio with Media Encoder​ docs, I only found this...

...

(Optional) Select Use Maximum Render Quality or Render At Maximum Bit Depth.

Note:  Rendering at a higher color bit depth requires more RAM and slows rendering substantially.

...

From the Create and change sequences in Premiere Pro​ docs which I just found, there's a more robust overview...

Maximum Bit Depth

Maximizes the color bit depth, up to 32 bpc, to include in video played back in sequences. This setting is often not available if the selected compressor provides only one option for bit depth. You can also specify an 8-bit (256-color) palette when preparing a sequence for 8-bpc color playback, such as when using the Desktop editing mode for the web or for some presentation software. If your project contains high-bit-depth assets generated by programs such as Adobe Photoshop, or by high-definition camcorders, select Maximum Bit Depth. Premiere Pro then uses all of the color information in those assets when processing effects or generating preview files.

Maximum Render Quality

Maintains sharp detail when scaling from large formats to smaller formats, or from high-definition to standard-definition formats. Maximum Render Quality maximizes the quality of motion in rendered clips and sequences. Selecting this option often renders moving assets more sharply.

At maximum quality, rendering takes more time, and uses more RAM than at the default normal quality. Select this option only on systems with sufficient RAM. The Maximum Render Quality option is not recommended for systems with the minimum required RAM.

Maximum Render Quality often makes highly compressed image formats, or those containing compression artifacts, look worse because of sharpening.

Note:

For best results with Maximum Render Quality, select Memory from the Optimize Rendering For menu in preferences. For more information, see Optimize rendering for available memory.

... but I totally get that one needs to prepare and test... I don't seek clarification just to get the answer to use; it helps fill things out... and it can help me test more smartly, at the very least.

Anyway, thanks for the wisdom once again!

LEGEND, Jul 27, 2017


That bit you quoted about MRQ ... that often with highly-compressed media it makes things worse? Yup.

The majority of my media starts from either the GH3, which I've typically shot in "standard" mp4 or mov (both highly-compressed long-GOP) as the stuff has more noise when using the All-Intra camera setting; and these days, for some of my projects, from my Samsung S7 ... phone! You know that's highly-compressed long-GOP.

For much of the 'standard' mov/mp4 from the GH3, if the project needed best quality, I've used either MediaEncoder to watch-folder create transcodes, or Prelude to transcode editing media, typically in Cineform, although occasionally DNxHD/R. The phone media, I always convert to CFR via Handbrake, and for some projects, have then taken that 4k/8-bit media and within ME have made 1080 4:2:2 10-bit ... which surprised me as to how far I can push that media then in Lumetri or Resolve without inducing artifacts.
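The VFR-vs-CFR distinction can be checked with a toy sketch (my rough illustration, not Handbrake's actual logic): dump the frame presentation timestamps from a clip (e.g. with ffprobe) and look at whether the frame intervals are constant. The timestamp lists below are made-up stand-ins.

```python
def is_vfr(pts, tolerance=1e-3):
    """Rough VFR test: True if frame intervals vary by more than tolerance."""
    intervals = [b - a for a, b in zip(pts, pts[1:])]
    return max(intervals) - min(intervals) > tolerance

cfr_pts = [i / 30 for i in range(90)]                # steady 30 fps camera
vfr_pts = [0.0, 0.033, 0.070, 0.095, 0.140, 0.166]  # phone-style drift

print(is_vfr(cfr_pts), is_vfr(vfr_pts))  # False True
```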

For the 1080 projects I've worked with straight-from-the cam media from the GH3, MRQ and MBD do not seem to help, but MRQ often induces artifacts, especially edge jaggies and halos.

Working with 4:2:2 media in a good intraframe codec (Cineform, DNxHD/R, mid-to-upper ProRes ...), the two options don't seem to hurt; they can add somewhat to render time (though not always) depending on the machine involved and, I suppose, the effects used and such... and for some clips and/or effects, they may help some. I know some editors just use them, period. I've talked with quite a few about this at NAB; some use them, while others just shake their heads and say they can't see enough difference when they do work to be worth the time, and often ... they can cause a problem. So they just leave them off.

Red Giant's video de-noising plugin is doing such an improved job over a version or two back that I'm starting to shoot more of the All-Intra from the GH3, and just allowing time for de-noising that media. But still, I haven't noted that in 8-bit it produces any advantage to either MBD or MRQ. I haven't tested it for projects needing to go down to 1280x720 or SD yet, though. Especially going to SD for DVD purposes, it might make some difference. I think I can finally get away with just Blu-ray, however ... so ... well, I'll still test it. Moving diagonal sharply-focused lines are marvy for testing this.

Neil

Enthusiast, Jul 28, 2017


https://forums.adobe.com/people/R+Neil+Haugen  wrote

That bit you quoted about MRQ ... that often with highly-compressed media it makes things worse? Yup.

Good to know!

https://forums.adobe.com/people/R+Neil+Haugen  wrote

The majority of my media starts from either the GH3, which I've typically shot in "standard" mp4 or mov (both highly-compressed long-GOP) as the stuff has more noise when using the All-Intra camera setting; and these days, for some of my projects, from my Samsung S7 ... phone! You know that's highly-compressed long-GOP.

For much of the 'standard' mov/mp4 from the GH3, if the project needed best quality, I've used either MediaEncoder to watch-folder create transcodes, or Prelude to transcode editing media, typically in Cineform, although occasionally DNxHD/R. The phone media, I always convert to CFR via Handbrake, and for some projects, have then taken that 4k/8-bit media and within ME have made 1080 4:2:2 10-bit ... which surprised me as to how far I can push that media then in Lumetri or Resolve without inducing artifacts.

Interesting that "All-Intra" had more noise! Correct me if I'm wrong, but this generally means the in-camera processor/compressor was giving you something ready-made; solving the noise with All-Intra back at HQ would have meant transcoding All-Intra to something achieving the same result as the GH3's in-camera processing... and in the end you found transcoding the in-camera-processed footage (with less retained data) to be viable, aesthetically and otherwise.

CFR via Handbrake ... I had not heard of doing that with S7 media... I have an S7 but have not used Handbrake. It seems whatever you transcode to, in this case 1080 4:2:2 10-bit, actually makes Lumetri more flexible than on the original footage... so Lumetri's effectiveness/flexibility is affected by a clip's source footage within Premiere, and Premiere doesn't translate enough internally during export to make that transcoding unnecessary? I was thinking Premiere might have translated internally first, enough to avoid that... like with MBD chosen... but apparently you find S7 -> CFR -> 1080 4:2:2 10-bit, then Lumetri, a different/better experience. (?) It's interesting because transcoding isn't adding quality (via data) per se... but I guess the space things end up in, perhaps with how the data is worked into that space, allows better use of Lumetri or some such.

https://forums.adobe.com/people/R+Neil+Haugen  wrote

Red Giant's video de-noising plugin is doing such an improved job over a version or two back, that I'm starting to shoot more of the All-Intra from the GH3, and just allowing time for de-noising that media. But still, haven't noted that in the 8-bit it produces any advantage to either MBD or MRQ. I haven't tested it for projects needing to go down to 1280x720 or SD yet, though. Especially going to SD for DVD purposes, it might make some difference. I think I can get away with just BluRay finally, however ... so ... well, I'll still test it. Moving diagonal sharply focused lines are marvy for testing this.

Neil

"Red Giant's video de-noising plugin" ... you have an advanced workflow using quite a bit outside the Adobe suite... I'm currently largely within the boundaries of the suite... still, it's good to hear about these small nuances... even if I don't try some of them right away, they're in the recesses of the grey matter and can bubble up at convenient times... thank you.

LEGEND, Jul 28, 2017


The GH2 with 'hacks' was rather legendary. And the GH3 was coming out with All-Intra, a doubling of the data-rate to file. As quite a number of GH3 users were testing the heck out of it, it was rather ... stunningly ... disappointing that the new built-in feature to give double the data was ... not so good. What apparently happens is that the All-Intra processing of the media for writing in-cam does less pixel blending than the more compressed processing does ... therefore, more actual sensor noise remained.

By the time it processes the media farther to reduce it to half the size, well ... between processing & compression, there's a lot less written noise. Comparing clips shot with All-I and then de-noised compared to a "straight" output ... was a wash. And with some de-noising, you lost enough detail to be less than satisfactory. So at that point, the standard media took less data space on card so you could record more, took less space on the computer, and was as good or better for processing. Probably better exports after post-processing. So, nuts to using the All-I mode.

Now, a couple years later, it seems the Red Giant de-noising is very much improved, especially when you bring spatial in ... where it checks say from 5 frames either way ... to determine what's noise & what's detail. So for me, suddenly ... the All-I media doth give a lot 'thicker' file, and is some better when looked at 100% after post. Not much, but some.
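The "checks several frames either way" idea can be sketched minimally in NumPy (a plain temporal median across a 5-frame stack, nothing like Red Giant's actual filter): static detail survives a median across neighboring frames, while independent per-frame sensor noise gets knocked down.

```python
import numpy as np

def temporal_median(frames):
    # frames: stack of shape (T, H, W); median across time, per pixel
    return np.median(frames, axis=0)

rng = np.random.default_rng(2)
clean = np.full((5, 8, 8), 100.0)               # static scene, 5 frames
noisy = clean + rng.normal(0, 10, clean.shape)  # independent noise per frame

out = temporal_median(noisy)
print(noisy.std(), out.std())  # residual noise is clearly lower after median
```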

As to using Handbrake with my S7 media, that's all about getting the video from the VFR that the S7 and all other phones shoot into CFR which is what PrPro needs for best work. It does a better job about this than AME. I shoot everything from the S7 in 4k, and mostly, it stays there when I process. But at times I will be putting out 1080, and for that, bringing the 4k CFR output from Handbrake into Adobe's Media Encoder app (that ships with PrPro) for a transcode can be very handy.

Why?

4k media, in 4:2:0 color, can be transcoded to full 4:2:2 color when down-sized to 1920x1080. Trading resolution bits for color bits, and though some scoff ... it's a proven concept, both theoretically and empirically. I can "push" things a lot farther and with bigger steps with the S7 transcoded to 1080 4:2:2 than I can with straight S7 4k media. But trying to get there in one step via Handbrake doesn't do it. There's no option to set 4:2:2. And it outputs only 4:2:0.
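The plane arithmetic behind that trade is easy to sketch (shapes only; real converters also filter): in 4:2:0 the chroma planes are half the luma resolution on both axes, so UHD chroma is already stored at 1920x1080.

```python
# Shapes only -- a back-of-envelope check of the UHD 4:2:0 -> 1080p argument.
uhd_luma = (3840, 2160)
uhd_chroma_420 = (uhd_luma[0] // 2, uhd_luma[1] // 2)   # (1920, 1080)

hd_luma = (1920, 1080)
hd_chroma_422 = (hd_luma[0] // 2, hd_luma[1])           # (960, 1080)

# UHD 4:2:0 chroma already matches the full 1080p raster, so the downscale
# needs no chroma upsampling; 1080p 4:2:2 even discards some of it.
print(uhd_chroma_420 == hd_luma)  # True
```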

See the kind of ridiculously nerdy stuff you start to pick up in video post work?

Neil

Community Expert, Jul 29, 2017


4k media, in 4:2:0 color, can be transcoded to a full 4:2:2 color when down-sized to 1920x1080

IMO, it does not give you the extra chroma info.

LEGEND, Jul 29, 2017


I thought it was nuts when I first heard about it. However, what I thought ... or think about it ... doesn't matter. What actually happens with proper processing, does.

Neil

Enthusiast, Jul 29, 2017


https://forums.adobe.com/people/Ann+Bens  wrote

4k media, in 4:2:0 color, can be transcoded to a full 4:2:2 color when down-sized to 1920x1080

IMO It does not give you the extra info on chroma.

https://forums.adobe.com/people/R+Neil+Haugen  wrote

I thought it was nuts when I first heard about it. However, what I thought ... or think about it ... doesn't matter. What actually happens with proper processing, does.

Neil

It's good to see the contrasting thoughts on this... thank you both!

I feel a good takeaway for me is to always bring personal artistic taste to evaluating the results... I've seen color-rich output from a G7X point-and-shoot that had me questioning the value of my 70D's ALL-I with all its size... yet that wasn't enough for me to try the 70D's non-ALL-I mode and judge for myself... as if it were burnt into my thinking from the specs that ALL-I had more data and was always the best of all worlds... thereby perhaps missing avenues that defy the specs/hype.

I should qualify that I'm generally satisfied with what I've been able to do with ALL-I from the 70D, and that my own color/exposure learning curve has definitely been part of some of the sore points I've hit (learning to set color balance with an ExpoDisc at shoot time to maximize the best in-camera capture/processing, as one example, and avoiding certain mixed-lighting scenarios whose horribly difficult casts the 70D's ALL-I may not handle... bruised-shadow looks on skin tones and all that... i.e., fluorescent/incandescent/LED mixes)...

It's good to observe your aesthetic opinions about the outcome of the various things being discussed here!

LEGEND, Jul 29, 2017


Everyone's views, workflows, preferences, and taste vary. That's the actual diversity of being humans.

Glad to hear you're working on nailing the in-cam color & exposure. That is SO important with 8-bit, and even most DSLR-produced 10-bit media. If you start as near 'neutral' as possible, you can neutralize a series of clips, and then take them somewhere together.

If your clips are well off from neutral from the in-cam recording, getting them back to neutral may be about as far as you can really modify the clips without inducing too many artifacts. It's very limiting.
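One simple "neutralize" heuristic can be sketched in NumPy (gray-world white balance, an illustration only, not what Premiere's color tools actually do): scale each channel so the frame's average color comes out gray.

```python
import numpy as np

def gray_world(frame):
    # Scale each RGB channel so the overall average color becomes neutral
    means = frame.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(frame * gains, 0.0, 255.0)

rng = np.random.default_rng(1)
frame = rng.uniform(20.0, 150.0, (4, 4, 3))  # tiny stand-in "frame"
frame[..., 0] *= 1.2                         # simulate a warm (red) cast

balanced = gray_world(frame)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now nearly equal
```

This is the "back to neutral" step only; the creative "take them somewhere together" grade would come after.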

Neil

Enthusiast, Jul 29, 2017


I take it you generally shoot neutral for 8-bit/DSLR? If yes, does that mean the GH3? And for the S7, do you use Pro mode or just take what Auto gives you?

LEGEND, Jul 29, 2017



For the GH3, I shoot as neutral as possible, often using K values to set the WB in-cam and, in hard conditions, using that ExpoDisc. I also have an external monitor, and I'll put it in "false color" mode so I can see exactly where the 0 and 100 spots are for exposure; I'll know exactly what's crushed and clipped.

For the S7, most of the time for 'general' outdoor and basic tungsten, I can leave it set to 'auto' and it does OK ... but if it's not nailing it, I go Pro and set things.

From Kevin Monahan's suggestion, I got Filmic ... but I've not found that it gets any better clippage, and the UI just seems clunky.

Neil

Community Expert, Jul 26, 2017


Like Neil says: make some tests of your own and see if they make any difference in the output files. Besides the extra-long export times.
