- Create one panorama for each of the exposure values shot.
- Create an HDR image using each of the panoramas.
The White Point Preview has no effect on the actual 32bit/channel HDR data or how it appears in LR. How are you creating your panoramas?
Or you can use stitching software, like Autopano Giga (Kolor), that understands how to stitch HDR panoramas from layers of stacked bracketed images.
I had assumed the white point setting only affected the preview, but I see very large exposure differences between the 32-bit HDR stacks back in Lightroom.
I've been using Hugin for stitching.
If I did a pano for each exposure bracket and then processed them with HDR Pro, I bet I would get visible seam lines where the EQR wraps around. Photoshop doesn't know how to keep EQRs seamless without jumping through a lot of hoops; care must be taken not to use certain functions after stitching (e.g. clarity or other local contrast adjustments) or that happens. Also:
- sometimes the pano includes a hand-held nadir shot and I pick the best one rather than processing the nadir as HDR, so that would interfere with that workflow.
- editing in a nadir logo or title would also be complicated by stitching the exposure brackets separately first.
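On the wraparound-seam point: any local filter run on a flat equirectangular image treats the left and right edges as image borders, and that border is exactly where the seam shows up once the pano is wrapped into a sphere. If you ever script your own post-processing, the fix is to pad the image horizontally in "wrap" mode before filtering. A minimal numpy sketch of the idea (illustrative only; this is not what Photoshop does internally, and the function name is mine):

```python
import numpy as np

def wrap_aware_blur(equirect, radius):
    """Horizontal box-blur of an equirectangular image that stays seamless
    at the 360-degree boundary, by padding each row in 'wrap' mode."""
    # Borrow `radius` pixels from the opposite side of the pano on each edge.
    padded = np.pad(equirect, ((0, 0), (radius, radius)), mode="wrap")
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    # 1-D horizontal blur for brevity; a real filter would be 2-D.
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, padded)
    # Crop the padding back off; the kept region saw full kernel support.
    return blurred[:, radius:-radius]
```

Because of the wrap padding, rotating the pano horizontally and then blurring gives the same result as blurring and then rotating, i.e. there is no privileged seam column.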
I've not gone with PTgui or Autopano since I've already spent a fair amount of $ on photo processing and haven't worked myself up to that yet.
It doesn't seem too much to ask that Photoshop process 32-bit HDR stacks identically.
You are right, the white point adjustment does not change how it looks in LR. I did a test with one adjusted bright and one dark; they differed by a very small, insignificant amount in LR. So I guess I don't know why my identically shot brackets come back from HDR Pro with different apparent exposures.
I'm not sure I understand what you're doing. Is this the workflow you are using:
1-Create one panorama for each of the exposure values shot.
a. All -2 EV images in one Hugin stitched panorama.
b. All -1 EV images in one Hugin stitched panorama.
c. All 0 EV images in one Hugin stitched panorama.
2-Create one 32-bit/channel HDR Pro image from each of the Hugin-created panoramas (a, b, c, etc.).
3-Process the resulting 32-bit/channel HDR Pro TIFF inside LR.
If so please explain what you mean by, "I don't know why my identically shot brackets come back with different apparent exposures from HDR Pro."
>I've been using Hugin for stitching.
If you use hugin, there is no point in creating HDR images first. hugin can do that itself much better than Photoshop. Just output all your raws as 16-bit tiffs and stitch those in hugin. Also, you are best off using the "exposure fused from any arrangement" output option. This stitches panoramas at each exposure first and then blends them using enfuse. If you don't want a blend but want a HDR output file that you can manipulate in Lightroom, choose "High Dynamic Range" as output and tiff as the format. Lightroom should read those just fine.
Oh, and all the blending and HDR tools inside hugin are completely aware of the wraparound stitch.
> editing in a nadir logo or title would also be complicated by stitching the exposure brackets separately first.
I do this by taking the final, completely worked-up equirectangular output, converting it to a cubic panorama (i.e. six square tiles) using the "erect2cubic" and "nona" command-line tools from the panotools perl script collection (some instructions here: Vinay's Hacks: November 2010; on Mac OS X you can also install the scripts using MacPorts), and editing the bottom square. Then I convert it back to an equirectangular pano using "cubic2erect". This works perfectly. An example here: Views - Google Maps.
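Under the hood, the erect2cubic/nona step is just a resampling from the equirectangular projection into six cube-face projections. As a rough illustration, here is a minimal nearest-neighbour sketch of extracting the bottom (nadir) face in Python; the axis conventions and function name are my own for illustration, not necessarily panotools' exact ones, and real tools use proper interpolation:

```python
import numpy as np

def extract_bottom_face(equirect, face_size):
    """Resample the nadir (bottom) cube face from an equirectangular image.
    Nearest-neighbour sampling for brevity."""
    h, w = equirect.shape[:2]
    # Face pixel grid spanning [-1, 1] x [-1, 1] on the z = -1 cube face.
    u, v = np.meshgrid(np.linspace(-1.0, 1.0, face_size),
                       np.linspace(-1.0, 1.0, face_size))
    r = np.sqrt(u ** 2 + v ** 2 + 1.0)   # length of each ray (u, v, -1)
    lon = np.arctan2(v, u)               # azimuth of the ray
    lat = np.arcsin(-1.0 / r)            # elevation; always below the horizon
    # Map lon/lat to equirect pixel coordinates; longitude wraps around.
    col = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
    row = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    return equirect[row, col]
```

The centre of the face looks straight down at the bottom row of the equirect, which is why the nadir, badly stretched in the equirectangular view, becomes an ordinary square image you can edit and then resample back.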
| I'm not sure I understand what you're doing–Is this workflow you are using:...
That's what you suggested above and I see how that could work, but it is not what I have been doing.
Here's what I do:
- shoot bracketed stacks for each image of the pano with identical settings.
- correct CA on the raws in LR
- send each image stack from LR to "photoshop merge to HDR Pro"
- adjust white point preview, which I guess does nothing in LR
- fix flaws in each HDR image in Photoshop (flares, ghosts, etc.).
- add logo to nadir shot in photoshop.
- In Lightroom adjust picture parameters to optimize the main view image.
- copy and paste to the other images.
- export to 16bit tiffs. (Hugin doesn't seem to accept the HDR 32bit tiffs from Photoshop)
- Do other work on individual images as needed, such as replacing a drab sky with OnOne Perfect Mask.
- stitch the panorama in Hugin.
- careful final adjustments of the EQR in LR and PS as needed.
| If so please explain what you mean by, "I don't know why my identically shot brackets come back with different apparent exposures from HDR Pro."
It's after the merge to HDR Pro and back in Lightroom that I find exposures of the images do not match. Not stitched into a panorama yet.
I'm aware Hugin has HDR capabilities, and at least it treats EQRs intelligently. What I'm not clear about is how to really use these features of Hugin. I also really like doing as many adjustments as I can in Lightroom. Thanks for the pointers to the Hugin instructions; I'll find some time to go through those and see if I can figure it out.
Also thanks for the cube face translation links. I searched for tools to do that recently but only found some old apparently unsupported stuff that didn't seem that straightforward to use.
I just wish LR and PS would add a feature to treat EQRs properly (and also add reversible rotation tools for zenith/nadir editing). It seems like it should be easy to add.
I'll tell you my workflow for panoramas such as this. Starting from bracketed stacks of images.
1. I select all images, go into Develop and turn on auto-sync. Turn on "Remove Chromatic Aberration". Do not turn on "Enable Profile Corrections". Set a custom white balance to make sure all images are WB'ed the same and optimize sharpening and noise reduction. Basically you want all images to be developed exactly the same and fairly conservative at that.
2. Export all images to 16-bit tiff in prophotoRGB or adobeRGB (depending on how colorful they are)
3. At this point one can add a logo to the nadir shot or do it at the end using the workflow I described above.
4. Load all images into hugin. Generate control points. Optimize positions - first just positions, then positions and view. Level the panorama in the GL preview and make sure the projection is set to equirectangular and the view to 360x180, then optimize view, barrel and positions and finally "everything without translation". During all these steps remove errant control points by going into the control point list and deleting the ones with the largest deviation, which are usually bad control points on clouds or other moving things. Also make sure the pano stays correctly leveled.
5. Mask the tripod out of the images that have it in them using the mask tools in hugin. Mask feet and stuff out of the nadir shot.
6. Do a photometric optimization with high dynamic range, fixed exposure.
7. Go to stitcher, set the optimum canvas size, uncheck "exposure corrected, low dynamic range", Check A. "exposure fused from any arrangement", B. "High dynamic range->tiff", or C. "blended layers of similar exposure, without exposure correction" depending on what you want to do.
If you did A, you will get a tone-mapped image that you can work up further in Lightroom or Photoshop. That image will be tone mapped respecting the wraparound on all sides, so it won't have weird seams in a spherical projection. If you did B, you get a 32-bit tiff that will work great in Lightroom and, as long as you don't use highlights, shadows, clarity and the like, will remain seamless. If you did C, you can load the multiple panorama layers you get into HDR Pro in Photoshop, and load the result into Lightroom.
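For option A, the "exposure fused" output that enfuse produces is based on Mertens-style exposure fusion: each input exposure gets a per-pixel weight according to how well exposed it is there, and the weighted results are blended. A heavily simplified sketch of the core idea, assuming images normalized to [0, 1] (real enfuse adds contrast and saturation weights plus multiresolution blending):

```python
import numpy as np

def exposure_fuse(images, sigma=0.2):
    """Toy Mertens-style exposure fusion of aligned grayscale exposures."""
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    # Well-exposedness: pixels near mid-grey (0.5) get the highest weight.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    # Normalize the weights across exposures at every pixel.
    weights /= weights.sum(axis=0, keepdims=True)
    # Weighted average across the exposure stack.
    return (weights * stack).sum(axis=0)
```

The result leans toward whichever exposure is best exposed at each pixel, which is why the fused pano needs no 32-bit intermediate at all.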
Thanks Jao, great input! I use Hugin and own a Canon 8-15mm FE, but haven't tried making 360x180 panos yet. What viewer/converter do you use, such as for the one posted on Google Maps?
It turns out you don't need converter software. Google makes all the components available, but they are hidden and not explicitly documented anywhere. What I do is run my output jpeg through Contribute – Street View – Google Maps and download the resulting file with the appropriate metadata added. I discovered that a while ago; it works really well. Then I upload that file to google+ photos, where it will automatically show up as an immersive spherical pano (google calls those photospheres). From http://maps.google.com/views you can then add the pano using the big + button.
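For reference, the "appropriate metadata" that round trip adds is Google's Photo Sphere (GPano) XMP block embedded in the jpeg. For a full, uncropped 360x180 equirectangular image the relevant tags look roughly like this (the pixel dimensions here are made up for illustration):

```xml
<rdf:Description rdf:about=""
    xmlns:GPano="http://ns.google.com/photos/1.0/panorama/">
  <GPano:ProjectionType>equirectangular</GPano:ProjectionType>
  <GPano:UsePanoramaViewer>True</GPano:UsePanoramaViewer>
  <!-- Full-pano and cropped-area sizes match for an uncropped 360x180 EQR -->
  <GPano:FullPanoWidthPixels>8000</GPano:FullPanoWidthPixels>
  <GPano:FullPanoHeightPixels>4000</GPano:FullPanoHeightPixels>
  <GPano:CroppedAreaImageWidthPixels>8000</GPano:CroppedAreaImageWidthPixels>
  <GPano:CroppedAreaImageHeightPixels>4000</GPano:CroppedAreaImageHeightPixels>
  <GPano:CroppedAreaLeftPixels>0</GPano:CroppedAreaLeftPixels>
  <GPano:CroppedAreaTopPixels>0</GPano:CroppedAreaTopPixels>
</rdf:Description>
```

Tools like exiftool can write these XMP-GPano tags directly if you'd rather not round-trip through the Street View upload page.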
Thanks again I'm going to give it a try.
Concerning the OP's original question I found this old post on the Photoshop Family Forum:
You can post a new feature request here under 'Share an idea' with more detailed information on the issue and it may get Adobe's attention.
Thanks, that link's an excellent discussion of the problem I'm having.
If you do create a new request I suggest adding a link to this post for "further information." There's lots of good information should Adobe consider adding more capability to HDR Pro. Also put a link to your Photoshop Family forum post here for others with this issue to go add their +1 vote and comments.
Now, two years later, Adobe still has not implemented the option to create an equirectangular panorama on the fly. As of two days ago we can make 360° panoramas from bracketed shots (HDR), but there is still no way to get an equirectangular result. What professionals need is not fancy Boundary Warp - just an option to create a simple, unaltered 360° equirectangular panorama for building virtual tours. PTGui has done this for years.