11 Replies Latest reply on Apr 22, 2013 1:00 AM by Fuzzy Barsik

    Encoding taking up to 35 hours, am I doing something wrong?

    Solarflairs1

      Hey guys,

       

      I have a 1.5-hour project that I'd like to get out relatively soon, but it's been stuck encoding for the last seven hours (it estimates another 30 hours) and it's only at 20%!

       

      My computer:

      Processor: 1 x Intel® Core™ i7-2600K (4 x 3.40GHz, 8MB L3 cache)
      Memory: 8 GB (2 x 4 GB) DDR3-1333 - Corsair or major brand
      Video Card: NVIDIA GeForce GTX 560 Ti - 1 GB - EVGA Superclocked - Core: 900MHz
      Motherboard: ASUS P8H67-M PRO
      Primary Hard Drive: 64 GB ADATA S596 Turbo SSD
      Data Hard Drive: 1 TB, 32MB cache, 7200 RPM, 6.0Gb/s

      Is this to be expected with a project of this size? Please help!

        • 1. Re: Encoding taking up to 35 hours, am I doing something wrong?
          John T Smith Adobe Community Professional & MVP

          What are you editing (codec details) and how many and what kind of effects have you put on the video?

          • 2. Re: Encoding taking up to 35 hours, am I doing something wrong?
            Solarflairs1 Level 1

             WMV file, HD 1080p - 29.97 FPS:

            - render at maximum depth [x]

            - use maximum render quality [x]

            - use frame blending [x]

             

            The video is mainly raw footage with only two cross dissolves and two audio tracks.

            • 3. Re: Encoding taking up to 35 hours, am I doing something wrong?
              John T Smith Adobe Community Professional & MVP

              This previous discussion of WMV Edit Problems http://forums.adobe.com/thread/977586 may help (or, may not... I don't use WMV)

              • 4. Re: Encoding taking up to 35 hours, am I doing something wrong?
                Solarflairs1 Level 1

                 I read that, but I'm pretty new to Premiere. What would you suggest I export with, then? If it will cut off 15 or so hours, I'm all for it (I just need to be able to upload it to YouTube).

                 Where do I check to see if my codec is H.264? From what I read, that's apparently very processor-intensive.

                • 5. Re: Encoding taking up to 35 hours, am I doing something wrong?
                  Harm Millaard Level 7

                  - render at maximum depth [x]  Turn this off

                  - use maximum render quality [x] Turn this off

                  - use frame blending [x] Turn this off

                   

                  Maximum bit depth is only useful if you start with 10 bit material and export using a third party card at 10 bits.

                  Frame blending is only useful if you change the frame rate, which appears not to be the case.
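
                   To make that concrete, here is a rough numpy sketch (not Premiere's actual code; the function name and frame data are made up) of what frame blending does when you retime a clip, say from 24 to 30 fps: each output frame is a weighted mix of the two nearest source frames. If the source and output frame rates match, the weight is always zero and blending changes nothing, which is why the option only costs you time here.

                     # Rough sketch of frame blending during a frame-rate change (illustrative only).
                     import numpy as np

                     def blend_retime(frames, src_fps, dst_fps):
                         """Retime a clip by blending the two nearest source frames."""
                         duration = len(frames) / src_fps                 # clip length in seconds
                         n_out = int(round(duration * dst_fps))           # number of output frames
                         out = []
                         for i in range(n_out):
                             pos = i * src_fps / dst_fps                  # fractional position in the source
                             lo = min(int(pos), len(frames) - 1)
                             hi = min(lo + 1, len(frames) - 1)
                             w = pos - lo                                 # weight toward the later frame
                             mixed = (1 - w) * frames[lo].astype(np.float32) + w * frames[hi].astype(np.float32)
                             out.append(np.clip(mixed, 0, 255).astype(np.uint8))
                         return out

                     # Hypothetical example: a one-second 24 fps grey ramp retimed to 30 fps
                     clip = [np.full((1080, 1920, 3), v, dtype=np.uint8) for v in range(0, 240, 10)]
                     retimed = blend_retime(clip, 24, 30)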

                   MRQ (maximum render quality) can be useful for some effects, but with only two dissolves it can best be forgotten. It increases your render time significantly.

                   

                  If you turn the options off, your render time will probably decrease to less than 3 hours. Also export using H.264 with the YouTube preset, not WMV.

                  • 6. Re: Encoding taking up to 35 hours, am I doing something wrong?
                    Solarflairs1 Level 1

                    Hey thanks a ton!
                    Learned a lot when I browsed through some of the settings, and now I know what they do as well!

                     

                    Edit: Still stuck at 0% for the first few minutes, but I'll post the total ETA in a moment.

                    • 7. Re: Encoding taking up to 35 hours, am I doing something wrong?
                      Solarflairs1 Level 1

                      3 hours and 30 minutes, you da man!

                      • 8. Re: Encoding taking up to 35 hours, am I doing something wrong?
                        Harm Millaard Level 7

                        Glad to have been of help.

                         

                        For the future, and for other readers as well: if you have hardware MPE turned on, the renderer effectively gives you MRQ-grade quality in almost all cases anyway, so turning the checkbox on only increases your export times considerably without really improving the final result. If you are using software MPE, however, turning on MRQ will improve the quality to be almost equal to hardware MPE without MRQ, although the export/render times may increase by a factor of around 10.

                         

                        Turning on MRQ when using hardware MPE is almost useless.

                        Turning on MRQ when using software MPE is almost a necessity for master quality.

                         

                        It gets very technical to explain what goes on under the hood, but the two lines above are the essence.

                        • 9. Re: Encoding taking up to 35 hours, am I doing something wrong?
                          Fuzzy Barsik Level 4
                          Maximum bit depth is only useful if you start with 10 bit material and export using a third party card at 10 bits.

                          That's incorrect: see this blog post on The Video Road. The detailed explanation is given at the end of the article.

                          • 10. Re: Encoding taking up to 35 hours, am I doing something wrong?
                            Harm Millaard Level 7

                            Here are the details:

                             

                            1. A HD clip in a SD timeline with scale-to-framesize and accelerated 3-way effect
                             
                              CPU: Gaussian low-pass sampled with bilinear
                              CPU MRQ: Variable-radius bicubic
                              GPU: Lanczos 2 low-pass sampled with bicubic
                              GPU MRQ: Lanczos 2 low-pass sampled with bicubic
                             
                              In this case the MRQ setting will only affect CPU based renders.
                             
                              2. A HD clip in a SD timeline with scale-to-framesize and a non-accelerated twirl effect
                             
                              CPU: Gaussian low-pass sampled with bilinear
                              CPU MRQ: Variable-radius bicubic
                              GPU: Gaussian low-pass sampled with bilinear
                              GPU MRQ: Variable-radius bicubic
                             
                            Because scale-to-framesize happens before effects and there is a non-accelerated effect, here the scaling will be done in software during a GPU render. During a GPU render the MRQ setting will thus apply to any software rendering done as part of the GPU render, determined on a segment-by-segment basis.
                             
                            Software's variable radius bicubic and GPU's Lanczos2 + bicubic will be pretty comparable. For more detail on the scaling used see here:
                             
                              http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html

                             

                            This is taken from the horse's mouth, Steve Hoeg.
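
                            If you want to see the practical difference between those scalers for yourself, here is a quick sketch using Pillow, downscaling the same HD frame with a bilinear and a Lanczos filter. Pillow's resamplers are not identical to Premiere's Gaussian/Lanczos 2 implementations and the file names are just placeholders, so treat it as a rough comparison only.

                              # Rough comparison of bilinear vs. Lanczos downscaling using Pillow.
                              from PIL import Image

                              src = Image.open("hd_frame.png")                 # placeholder name for a 1920x1080 frame grab
                              sd_size = (720, 480)                             # SD target, as in the HD-in-an-SD-timeline case

                              bilinear = src.resize(sd_size, Image.BILINEAR)   # roughly the non-MRQ software path
                              lanczos = src.resize(sd_size, Image.LANCZOS)     # closer to the GPU / MRQ-style scalers

                              bilinear.save("sd_bilinear.png")
                              lanczos.save("sd_lanczos.png")
                              # On detailed footage the Lanczos result keeps more fine detail and shows
                              # less aliasing than the plain bilinear downscale.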

                             

                            You are also missing the fact that in the article you linked to, 32 bit means 8 bits per color channel. It is only when your raw material is 10 bit that you may - in certain circumstances - end up with 10 bits per channel. For the OP that does not apply. If you expand 32 bit to 40 bit, all you are doing is adding some zeroes to the signal. It does not add signal that was not there in the first place; it only makes the signal more stable in later processing (chroma keying, for instance). You cannot add something that was not there to start with. It seems to me you have misread the info in the article you linked to.

                             

                            You can't start with apple juice, expand it, and expect to end up with apples.

                            • 11. Re: Encoding taking up to 35 hours, am I doing something wrong?
                              Fuzzy Barsik Level 4
                              You are also missing the fact that in the article you linked to, 32 bit means 8 bits per color channel. It is only when your raw material is 10 bit that you may - in certain circumstances - end up with 10 bits per channel. For the OP that does not apply. If you expand 32 bit to 40 bit, all you are doing is adding some zeroes to the signal. It does not add signal that was not there in the first place; it only makes the signal more stable in later processing (chroma keying, for instance). You cannot add something that was not there to start with. It seems to me you have misread the info in the article you linked to.

                              No Harm, I'm not missing anything. Neither I nor Karl Soule in his blog post was talking about improving the quality of the original source footage. At the end of the article Steve Hoeg gives a detailed explanation of how PrPro handles colour precision (which is what I specifically pointed to in my comment):

                              1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.

                              2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example.

                              3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.

                              4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.

                              5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.

                              6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.

                              7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
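
                              As a rough numpy illustration of the same point (two made-up adjustments rather than real Premiere effects): apply a darken and then a brighten either entirely in 8-bit or with a high-precision float intermediate, and count how many distinct levels survive to the final 8-bit output.

                                # Illustration of why a higher-precision intermediate matters (not Premiere's actual code).
                                import numpy as np

                                ramp = np.arange(256, dtype=np.uint8)                               # simple 8-bit gradient, 0..255

                                # 8-bit pipeline: every intermediate result is rounded back to integers
                                step1_8bit = np.clip(np.round(ramp * 0.2), 0, 255).astype(np.uint8)
                                out_8bit = np.clip(np.round(step1_8bit * 5.0), 0, 255).astype(np.uint8)

                                # High-precision pipeline: keep float values until the final 8-bit write
                                step1_f32 = ramp.astype(np.float32) * 0.2
                                out_f32 = np.clip(np.round(step1_f32 * 5.0), 0, 255).astype(np.uint8)

                                print(len(np.unique(out_8bit)), "distinct levels survive the 8-bit pipeline")
                                print(len(np.unique(out_f32)), "distinct levels survive the float pipeline")
                                # The 8-bit version collapses many input levels onto the same output value (banding),
                                # while the float version preserves nearly all of them even though both end as 8-bit.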

                               

                              And I didn't say a single word about 'Maximum Render Quality', hence I don't completely understand your point there...