8, 16, 32 bit depth?
Candid Arts
Registered Users Posts: 1,685 Major grins
So I was speaking with a professional photographer, and he told me he uses 32-bit depth when he is editing photos. I commented that when you export to JPEG it down-converts to 8 bits anyway, so what's the point? He said that when you're editing in 32 vs. 16 vs. 8, you have more info to work with, and thus more info gets transferred when making the 8-bit file.
So, using LR2, how do I set my working window to 16 or 32 bit depth? Do I really need to do this? He has some amazing work, and this is how he says he edits (well...a small part of how he edits at least)...
Thank you for your help.
Candid Arts Photography | Portland Oregon | Fine Art
OneTwoFiftieth | Portland, Oregon | Modern Portraiture
My Equipment:
Bodies: Canon 50D, Canon EOS 1
Lenses: Canon 10-22mm f/3.5-4.5, Canon 24-105mm f/4L IS, Canon 50mm f/1.4, Canon 100mm f/2.8 Macro, Canon MP-E 65mm f/2.8
Lighting: Canon 580EXII, Canon 420 EX, 12" Reflector, Pocket Wizard Plus II (3), AB800 (3), Large Softbox
Stability: Manfrotto 190CXPRO3 Tripod, Manfrotto 488RC4 Ball Head, Manfrotto 679B Monopod
Comments
I don't use LR, so I can't tell you how to set it to 32 bit mode, but I'm sure someone else will jump in with the answer. Try it and see. I will be astounded if you can perceive any difference. If your friend is creating amazing images it's because he is a great photographer, not because of the bit-depth he uses in LR. Some day, it might matter, but for now 16 bits will do just fine.
One must convert rendered/gamma-corrected 1.8 or 2.2 gamma images to this gamma 1.0 32bpc mode, which can lead to problems with some 8bpc images, and not even 16bpc images are always safe. Underexposed and/or noisy low-key images can really suffer from the mode change alone, without any "edits" (the mode change is itself an edit involving a huge gamma curve). Shadow areas of some images may also suffer.
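The shadow damage from that gamma-to-linear mode change can be sketched numerically. This is my own illustration, not Stephen's workflow: if the linear stage is quantized to 8 bits per channel, most of the dark codes collapse together.

```python
def to_linear_8bit(v):
    """Gamma-2.2 code value (0-255) -> linear, rounded back to 8-bit codes."""
    return round(((v / 255.0) ** 2.2) * 255)

def to_gamma_8bit(v):
    """Linear code value (0-255) -> gamma-2.2, rounded to 8-bit codes."""
    return round(((v / 255.0) ** (1 / 2.2)) * 255)

shadows = range(64)  # darkest quarter of the tonal range
roundtrip = {to_gamma_8bit(to_linear_8bit(v)) for v in shadows}
print(f"{len(set(shadows))} shadow levels in -> {len(roundtrip)} distinct levels out")
# Many dark codes land on the same linear value, so the round trip posterizes
# the shadows; a 16-bit intermediate has 256x more codes, so the same trip is
# visually lossless.
```

That collapse is exactly why a noisy, underexposed image can fall apart from the mode change alone.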
If one can render the original image in a linear gamma from the camera/raw processor, one may be better off, since the original raw data is linear. That said, Adobe software only allows rendering into a 1.8 or 2.2 gamma space, with no option to keep the file linear AFAIK (while other processors do not limit the output profile to one of four "blessed" profiles).
Stephen Marsh
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
After I read this, I went back and reread his email. Just as you said: he uses 16 bit for most everything, and 32 bit for his HDRs. So then, having my LR set at 16 bit when I export to PS, how do I set it to 16 bit, or know if I already have it set that way?
Do you use PS for all your photo editing? I put my time into LR as I was told most photographers use that on a day to day basis more than PS...
Thank you for your help.
Oh, and P.S.: I installed a $20k pair of speakers in a customer's house a few months back, WELL worth the money. And I would spend $10/ft on Kimber Cable speaker wire if I weren't broke...they are better. (Yes, I work in a custom high-end audio/video store... :-) )
When I do HDRs, I open the RAW files directly into Photoshop from Bridge, and PS produces a 32-bit combined image. I use the Local Adaptation method to convert to 16 bits because its tone curve option gives me better control over the results.
Just my luck to make a cheap joke about audiophiles to a pro.
Is this what PS is doing when you open RAW files directly into its HDR processor from Bridge? From the user's view, it seems to skip part of the ACR conversion, but I have no idea of what it's doing internally. For HDRs, I have never understood whether it is better to open the RAW files directly (which is what I do) or to process them first in ACR and combine the TIFFs.
When the image gets to Photoshop, look in the Image > Mode menu. It will say either 8 or 16 Bits/Channel, because Lightroom, as far as I know, won't edit in 32 and so can't export 32. 16 is Lightroom's native bit depth, and there is no place to change it; the only place to change it in LR is on export, where you can leave it at 16 or downsample to 8.
Photoshop can do 32 because it has 32-bit editing features, and I'm not sure, but I think they're only in the Extended version. Lightroom doesn't have any 32-bit features. 32 bits is a pointless mode for most people. The true HDR range is so wide that you can't show the entire range on a monitor, nor can you print it. There are special viewing controls in the Photoshop HDR mode for trying to visualize a 32-bit range on an 8-12 bit monitor. By the time you see an HDR printed or on the web, it's been compressed down to 16, 12, or 8 bits for printers and monitors.
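The compression step can be illustrated with a toy global tone-mapping operator. This is my own sketch, not Photoshop's actual conversion method: a Reinhard-style curve squeezes an unbounded scene-referred range into what a display can show.

```python
def reinhard(lum):
    """Map scene luminance [0, inf) into the display range [0, 1)."""
    return lum / (1.0 + lum)

# Scene values spanning many stops all land inside the displayable range.
for lum in (0.01, 0.1, 1.0, 10.0, 100.0, 10000.0):
    print(f"scene {lum:10.2f} -> display {reinhard(lum):.4f}")
```

However wide the scene range is, the output never reaches 1.0, which is the whole trick: the 32-bit data survives intact, and only the viewing/output copy is compressed.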
From what I have been able to figure out from trial and error, Photoshop's 32bpc mode uses the ICC source profile as input and creates a linear 32bpc working version of the input; 32-bit moves such as exposure are applied, and then when one renders the image out of 32bpc mode, the image is converted to the working RGB profile (which may be a different colour profile than the input image profile). So if you start with ProPhoto 8/16 bpc images and then create an HDR, when you save out to a lower bit depth than 32 and your RGB working space is, say, Adobe RGB, the final file will be Adobe RGB and not ProPhoto. Apparently Photoshop has no way to remember what the input profile was, so it always uses the working RGB to render the completed HDR.
Clear as mud?
Stephen Marsh
Hmm...so it sounds like by using the RAW files as input, you would be working with unconverted linear data during the HDR processing, whereas two conversions would have occurred before the HDR phase by using TIFF. Or am I still not getting it?
Well, the point is, there's a lot out there besides JPEG, and a lot of data is being tossed that you'll never get back and could use (Epson and Canon printers, for example, can send all 16-bit data to the print driver).
This may help:
http://www.digitalphotopro.com/gear/imaging-tech/the-bit-depth-decision.html
As for the comment about Photoshop tools: well, nearly all the user tools that affect the quality of color and tone can be applied to 16-bit (I prefer to call it high-bit) data, even if it's really 12 or 14 bits (why confuse users with each? It's more than 8 bits, that's what counts). 32-bit is used primarily for HDR work where, surprise again, the additional data is quite useful for working in this format.
LR and ACR do all their work in high bit (even when you feed them 8-bit data; there's no free lunch there, really, but no speed loss either). Good, considering the LR and ACR engines work in a very wide-gamut color space where, again, you want that added data precision, or watch out for banding.
You can set LR for export in high bit in the Export Presets or the Edit in Photoshop preferences.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Sure, but you're missing the point, and by your logic 24-bit CD remasters shouldn't make any sense either, yet they are pretty much industry standard these days.
The point of having greater bit depth for editing is that each time you do something to the image, the result has to be quantized to the bit depth you're working in. For example, take this fairly ordinary sequence of processing for a new RAW file:
1. Decode RAW file and demosaic
2. Apply correction for lens distortion
3. Adjust exposure
4. White balance
5. Adjust saturation
6. Adjust tone curve
7. Rotate to get the horizon perfectly level
8. Crop
9. Save to disk in whatever format you like
Steps 1-7 all involve interpolating colors and quantizing the result to the working bit depth, and each step takes as its starting point the result of the previous step. So even in this fairly basic sequence, we have quantization errors accumulating through seven stages. The point of working with greater depth is that the quantization error stays down in the least-significant bits and will simply be discarded if you save to a format with fewer bits. Working with 16 bits is fine if you only intend to produce JPEGs, but if you keep working copies in a 16-bit format, then an editor that works in 24 bits or more is a good thing; and if you save to a 24-bit "true color" format, then by the same reasoning, you benefit from a 32-bit editor.
Got bored with digital and went back to film.
In Photoshop, no question. But at least in terms of Adobe Raw processing engines, this pretty much happens with one convolution in the best processing (not user) applied order from the individual metadata instructions for rendering. I think WB may be a separate (first) pass, I’ll have to check some emails from the engineers. I believe there’s no more than two processing steps prior to demosaicing.
Author "Color Management for Photographers"
http://www.digitaldog.net/
I believe 24-bit true color is achieved with 8 bits for each of the RGB channels and is fully encoded in an "8-bit" JPEG. I understand your point about quantization error, but I think we have a long way to go before problems become apparent from working in 16 bits. By that I mean that beyond a certain point, more bits are useless unless they can be propagated from capture to display to perception. I'm guessing your eyes are younger than mine, but I would be willing to bet you cannot distinguish any difference in the least significant bit of a 16-bit image, and maybe not even the last two.
Yes, there are ways to optimize the number of operations performed. But your reference to "no more than two processing steps prior to demosaicing" puzzles me a little because the Raw converters whose internals I know anything about all demosaic as the first step after decoding a raw file, while you seem to be implying that Adobe demosaics last (else the number of steps PRIOR to demosaicing would not be nearly as interesting as the number of steps AFTER it). Is this correct? I suppose you could do it either way, though arbitrarily rescaling a Bayer matrix seems like an odd idea. The interpolation would result in a "Bayer matrix" that didn't actually have pure RGB colors in each position, though I suppose it doesn't really matter.
Got bored with digital and went back to film.
Yes but the issue with 24 bit images is, when do they go south and show the results of the data loss? We just don’t know. How many edits, to what device, with what initial and output color space gamut? If 99 times out of 100, 24 bit files are indistinguishable from high bit, what about that 1 that disappoints upon output (today or in 10 years with that future technology?) and what do you do considering the data is long gone?
The fact that so many capture (scanner and cameras) from the get go provide more than 24 bit color, as do our image editors, and storage space is cheap (good data isn’t), it is surprising anyone would consider a 24 bit master archive adequate.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Sorry, your order of course makes far more sense and yes, I believe that there may be as few as two actual steps, WB then subsequent processing of the render instructions in of course, high bit, linear wide gamut space.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Right, but when we talk about "16-bit" editors, I think we are also talking about the per-channel bits, not per-pixel. A "16-bit" editor actually has 48 bits per pixel (16 x 3).
I don't know how old you are; I'm 44.
I might not be able to see the difference between two RGB colors that differ only in the least-significant bit of each channel, but a gradient may seem more or less smooth depending on how much quantization error has accumulated in the editing process.
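That gradient effect is easy to make concrete. The sketch below (my own illustration) applies a common levels-style stretch to a narrow tonal range while rounding to 8-bit codes; the gaps between the surviving output codes are the visible bands:

```python
def stretch_8bit(v, lo=100, hi=140):
    """Map input codes [lo, hi] onto [0, 255], rounding to 8-bit output codes."""
    return round((v - lo) / (hi - lo) * 255)

codes = sorted({stretch_8bit(v) for v in range(100, 141)})
print(f"{len(codes)} distinct codes span 0..255; gap between bands = {codes[1] - codes[0]}")
```

Only 41 codes remain to cover the whole 0..255 range, so neighboring pixels jump in steps of about 6 levels instead of 1; done at 16 bits, the same stretch would leave thousands of intermediate codes and the gradient would stay smooth.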
Got bored with digital and went back to film.
The attached diagram is only for ACR/ALR, other raw processors would use their own processing order and steps.
Stephen Marsh
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
I archive RAW files. That's the maximum information my camera produces, currently 14 bits/channel. I can always reprocess ten years from now with a 128 bit editor on a 1024-bit operating system. But you know what? There will still only be 14 bits per channel to work with. The gradients will be awesome, though.
I suspect most if not all of us archive our Raws.
I sure hope so, as I have 10 year old Raw files I can’t reprocess (but that’s another story). There are no guarantees.
Author "Color Management for Photographers"
http://www.digitaldog.net/
That is the whole point: who does get it? I don't know what happens; the process is a black box as far as I am concerned. The Photoshop HDR functions may work with rendered, gamma-corrected data, or they may work with linear raw data. Or not. I don't know! Perhaps somebody who has the ear of the Adobe tech guys knows the answer.
If a gun were held to my head and I were forced to guess, I would take a punt and say that the HDR function only works with rendered, gamma-corrected data. Remember that ACR can only output to one of the four gamma-corrected ICC profiles that are hard-wired; one can't select any available ICC profile on the system (that is a design flaw IMHO, I prefer freedom over simplicity). That would mean that if the input files were raw camera files, then behind the scenes they would be rendered to a gamma-corrected space using some sort of default, last-used, or other raw processor settings in order to present the HDR software with its input. The HDR software would then change the files from gamma-corrected to linear versions of the input ICC profile and do its stuff.
Stephen Marsh
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
For a long time at my previous employer they kept an old computer system alive just to process the RAW/TIFF files from a 1995 Kodak DCS 460, 6 MPix hybrid SLR/digital back. The camera would shoot to 260 MB hard drive cards (PCMCIA) or could be tethered directly to a computer using custom Kodak capture software. (There was a TWAIN driver that, as I recall, could not be used with much software after the year 2000.) The tether was via SCSI, and not one of the newer small SCSI cords but the old, bulky monster cords and connectors, and it couldn't be very long or you would get errors.
Since the RAW files used a TIFF wrapper I don't think that they "could" be opened in anything other than the Kodak software. The normal workflow was to do sample conversions until acceptable conversion settings were developed for a session, then the batch could be processed. It encouraged standard setups and written settings to incorporate quickly into the process. The output was to 16 bit "standard" RGB TIFFs, stored in an output directory.
Saving the original captures was probably not much use and we did just abandon those files at one point. Only the final presentation files were ultimately kept.
Moderator of the Cameras and Accessories forums
We know that in a 32-bit display format there are bits for alpha, Z, etc. What will happen to them when the image is seen on a 16-bit monitor?
The issue with high-bit display systems is that the full display path is still 8-bit in some areas. The application, OS, and so forth all have to support the extra data. So, for example, even if the OS and hardware are all there, Photoshop has to support the data too, which currently, depending on the OS, it may not.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Natural selection is responsible for every living thing that exists.
D3s, D500, D5300, and way more glass than the wife knows about.
It's not really that complicated, IMO.
Complex edits, color space conversions, etc., when performed on 8-bit images, can lead to significant and visible degradation.
In the vast majority of situations, one can avoid those problems by shooting RAW and editing in 16-bit. I generally archive the almost-final version as a 16-bit TIFF or PSD. Then, I apply appropriate sharpening and save as JPEG.
Generally speaking, 32-bit is something of a dog. First, it doubles your memory usage. Worse still, it makes editing extremely difficult because most editing tools don't work with 32-bit images. Even if you're using CS5, which does support 32-bit depth, you'll find that many, if not most, of your tools and plugins are disabled in 32-bit mode!