overexposure vs. lower ISO
I read somewhere that if you expose to the right (overexpose) when shooting at high ISO and then correct the exposure during RAW conversion, you will get less noise than shooting at high ISO with the "right" exposure.
Say you overexpose by 1 full stop when shooting ISO 800 and then correct the exposure during RAW conversion; you will get much less noise than just shooting ISO 800 without exposure compensation.
My question is: isn't overexposing by 1 full stop at ISO 800 the same, in terms of shutter/aperture requirements, as shooting ISO 400 without exposure compensation? If so, why not just shoot ISO 400?
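To make the shutter/aperture equivalence in the question concrete, here is a tiny sketch with made-up meter numbers (1/200 s at f/4 for ISO 800 is purely illustrative):

```python
# Hypothetical metering: the ISO 800 "correct" exposure is 1/200 s at f/4.
# Overexposing ISO 800 by +1 EV (aperture fixed) doubles the shutter time,
# which is exactly the shutter a "correct" ISO 400 exposure would need.
def shutter_for(iso, base_iso=800, base_shutter=1 / 200):
    """Shutter time (seconds) at a fixed aperture for a given ISO."""
    return base_shutter * (base_iso / iso)

iso800_plus_1ev = 2 * shutter_for(800)   # +1 EV via shutter at ISO 800
iso400_correct = shutter_for(400)        # normal exposure at ISO 400
# Both come out to 1/100 s: the sensor receives the same light either way,
# and only the amplification (ISO gain) differs.
```

So in terms of light reaching the sensor, the two shots are identical; the thread below is about which gain setting handles that light with less noise.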
Comments
Proper exposure is still important to avoid blowing out the highlights.
In a scene with compressed exposure values, mostly middle tones with few pure whites and pure blacks, you can nudge the exposure settings up or down a bit and still have an "acceptable exposure". The exposure nudged up a bit will have less noise.
In a scene with expanded exposure values, white wedding gown and black tuxedo for instance, proper exposure is critical, even in RAW.
As far as the second part of your question, an overexposure at ISO 800 is not the same as a normal exposure at ISO 400. Try it in full manual mode and see the differences for yourself in both exposure values and resulting images.
ziggy53
Moderator of the Cameras and Accessories forums
If you shoot 400 at "correct" exposure, the tonal distribution is traditionally correct.
If you shoot 800 1 stop over, the tonal distribution is shifted to the higher bits. According to ETTR theory, that should be higher quality because higher bits are both denser and less noisy.
The question, which I don't know the answer to, is whether shifting to the higher bits reduces noise more than the increased noise you get from boosting your ISO. It probably depends on the camera, because some chipsets boost ISO with less noise than others. I'm guessing that shooting 1 stop over at 800 might work better on a camera with Canon's DIGIC II processor than on one with a lower-quality sensor. For example, on my aging point-and-shoot, increasing the ISO produces so much noise that I don't bother setting it above 100, so ETTR will not overcome the additional noise on that particular camera. But I'll bet it would work better on my XT.
Many of the proponents of Expose to the Right on the web take great pains to emphasize that ETTR should not be called overexposure, because they feel the term overexposure has a negative connotation. In digital, they say, if you are using ETTR you are placing tones in an optimal position (similar to how the Zone System is about placing tones in an optimal position), but you aren't overexposed until you clip your high end. ETTR doesn't advocate clipping anything that shouldn't be clipped.
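The "higher bits are denser" claim from ETTR theory can be checked with a few lines of arithmetic; this sketch assumes a 12-bit linear RAW file, which is typical but not universal:

```python
# In a linear 12-bit RAW file, each stop down halves the available levels,
# so the brightest stop alone holds half of all 4096 raw values.
levels_per_stop = []
top = 2 ** 12              # 4096 levels in a 12-bit file
for _ in range(6):         # walk down through the top six stops
    lower = top // 2
    levels_per_stop.append(top - lower)
    top = lower
print(levels_per_stop)     # [2048, 1024, 512, 256, 128, 64]
```

The top stop gets 2048 levels while the sixth stop down gets only 64, which is why shifting tones rightward keeps them in the finely quantized region.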
There is a great article here by Michael Reichmann about a discussion he had with Thomas Knoll (the original author of Photoshop - watch for his name on the splash screen whenever you load PSCS2) - http://www.luminous-landscape.com/tutorials/expose-right.shtm
But there is no discussion of one ISO versus another - indeed, shooting "to the right" is essentially LOWERING the ISO you are using - kind of like shooting 400 ASA Tri-X at 200 ASA. You are pushing the histogram to the right, BUT NOT overexposing. You are, in effect, increasing exposure across all the tonalities, but most especially in the shadow areas, where there is dramatically less data collected in the pixel wells.
This really works ONLY if you are shooting RAW and bringing the image into PSCS as a 16-bit file in a wide color space like ProPhoto or Adobe RGB. Shooting JPEGs is asking for big trouble.
Moderator of the Technique Forum and Finishing School on Dgrin
Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
+ Exposure Compensation, of course, can help exposing "to the right"
I would just call it exposing properly in the first place myself.
It comes back to what the proper exposure is, and many in-camera meters are not terribly accurate compared to an incident meter or even Sunny 16.
I know this because I have compared the meter readings on my cameras with the Sunny 16 settings and with a Sekonic incident meter. Guess which is more accurate and tends to push the histogram to the right? Not the meter in my camera.
I suspect the meters in cameras are slightly biased toward underexposure - think how well that worked with slide and negative film materials.
It keeps highlights from being blown with silicon sensors too, at the slight expense of shadow detail perhaps.
So any means that shifts the lump counts, even though we both agree that changing ISO isn't the best way to go about it.
The way I understand it is more specific. Any means that shifts the lump at the time the image hits the sensor counts, though we're not exactly sure which ones count more.
That's just to distinguish from means that shift the lump in post-processing, which may actually hurt the image if it means you're scooping tones from the noisy, coarse shadow bits and shoving them up into the midtones where we can all see the noise.
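A quick simulation (illustrative photon counts, shot noise only) shows why pushing shadows in post doesn't help the way extra capture-time exposure does:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# A dim patch: ~25 photons per pixel on average, so shot noise dominates.
dim = rng.poisson(25, n).astype(float)
snr_capture = dim.mean() / dim.std()        # roughly sqrt(25) = 5
# Brightening in post scales signal AND noise together: SNR is unchanged.
pushed = dim * 4
snr_pushed = pushed.mean() / pushed.std()   # still roughly 5
# Capturing 4x the light at the sensor genuinely improves SNR.
bright = rng.poisson(100, n).astype(float)
snr_ettr = bright.mean() / bright.std()     # roughly sqrt(100) = 10
```

Scooping tones up in post just magnifies the noise along with the signal, while shifting the lump at capture time actually collects more photons.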
Isn't sRGB also a 16-bit space? I've always believed it to be, and CS2 indicates it is (or at least I think it does).
Any clarification you can offer here would be greatly appreciated. I have a workflow and I like it, but I will change it if I'm not getting all the data I can from the camera. BTW - I shoot RAW and convert to sRGB in ACR - if that helps any.
Thanks
My Photos
Thoughts on photographing a wedding, How to post a picture, AF Microadjustments?, Light Scoop
Equipment List - Check my profile
Scott,
I think the point was that JPEGs are not 16 bit. If you use 16-bit TIFFs in sRGB, you still have a pretty large gamut, but not as large as some of the other color spaces.
If your intent is only digital printing or Internet capable files, sRGB is plenty good for images. (SmugMug only uses sRGB, for instance.)
If you have/use one of the high-end inkjet printers (CMYK) or do book publishing with traditional makeready, you might consider one of the larger Gamut color spaces.
Whichever way you go, if you stay in 16 bit for processing, you will retain most of the benefit, regardless of the colorspace used. The difference between 16 bit and 8 bit for processing is huge, less so for delivery and presentation. (Your camera records in 12 bits, so preserving as much as possible, as long as possible, is to your benefit.)
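The cost of working in 8 bit can be demonstrated with a toy round trip (darken two stops, then brighten back); NumPy stands in for an image editor here:

```python
import numpy as np

x = np.arange(256, dtype=np.uint8)      # every 8-bit tone exactly once
# Darken by two stops while staying in 8 bits: integer division throws data away.
dark8 = x // 4
restored8 = np.clip(dark8.astype(int) * 4, 0, 255)
# The same round trip in high precision loses nothing.
restored_hp = np.round(x.astype(np.float64) / 4 * 4).astype(int)
print(len(np.unique(restored8)), len(np.unique(restored_hp)))   # 64 256
```

After the 8-bit round trip, only 64 of the original 256 tones survive (visible as banding), while the high-precision version recovers all 256.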
Interestingly, if you save from RAW in Adobe RGB, process in Adobe RGB, and then save in sRGB for printing or Web, you can actually be worse off than if you originally saved sRGB from RAW. Excess conversion can work against you. Single conversion is usually best.
If you save the important RAW files, you can change your mind at any time, even in the future, even if more advanced color spaces emerge. (Unless that original RAW filetype becomes obsolete - that's the reason for an open RAW format specification.)
http://www.smugmug.com/help/srgb-versus-adobe-rgb-1998
http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm
http://www.cambridgeincolour.com/tutorials/color-spaces.htm
http://en.wikipedia.org/wiki/RGB_color_space
http://www.shootsmarter.com/monitorcentral.html
(Note that what we should all be using is "Extended ISO RGB" with unlimited Gamut. WooHoo )
ziggy53
Unlimited gamut gets us into all sorts of other problems, doesn't it? Like impossible colors, and colors that are even harder (if not impossible) to reproduce in a printed medium, etc.
Oh sure, burst my bubble. Next, you're going to tell me there's no Santa?
ziggy53 (Saddened by this recent dose of reality.:cry)
I'm better now!
I think Ziggy's answer is great. I prefer to avoid multiple conversions back and forth between color spaces, but I do convert to ProPhoto on the way to PSCS from RAW, and frequently take a pass through LAB on the way to an 8-bit sRGB file as my final JPEG.
I tend not to save TIFFs, as the storage is just so large and these are not commercial products for me. If I planned on selling the images and making my living that way, I would save them either as 16-bit TIFFs or as uncompressed layered .psd files, but they would then be 50-200 MB files, I suspect, and I can't save just one or two files per CD-R.
I bring my images from RAW into ProPhoto RGB in PSCS2 - a very large color space, in 16 bit - for my initial editing: setting black and white points and curves. Eventually the image will be saved as an 8-bit sRGB-tagged JPEG to be uploaded to SmugMug.
I do it this way because my computer allows me to without groaning too much, but if you process very large images this way, it makes heavy demands on your processing power and memory and may slow your computer a lot more than 8-bit files do. Smart Sharpening a large 16-bit file is much slower than an 8-bit sRGB file - almost instantaneous for 8 bit versus 30-60 seconds for 16 bit on my desktop machine.
On a laptop, I tend to just use 8-bit files, as my laptop is not as fast as my desktop and I get impatient. I never said I was consistent or right.
What does matter a lot for converting from raw with a lot of detail in the highlights is the "curve" (if you are using ACR). The default curve is an S which tends to lose detail in both highlights and shadows, in effect putting the steps of the ladder further apart at the top and bottom and closer together in the middle. The linear curve keeps the steps evenly spaced and is often what you want.
Similarly, beware of the "contrast" adjustment. If you intend to do your own post and use curves, it's often better to just set to 0.
Now I do what Pathfinder suggests and use 16 bit processing until the very end in most situations. But I do it because my computer is fast enough that it almost never matters and I consider it something of a superstition. In fact there are a few rare cases where even Dan accepts that 8 bit processing has limitations, but they are very rare. Still, it would be a pain to get bitten.
In any case, this isn't one of those cases. Try converting with the linear curve and 0 contrast adjustment. Then apply curves or whatever.
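The ladder-rung picture above can be illustrated numerically. Here smoothstep stands in for a generic S-shaped contrast curve (it is not ACR's actual default curve):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)       # normalized tones, shadows to highlights
s = x * x * (3 - 2 * x)             # smoothstep: a simple S-shaped curve
rungs_linear = np.diff(x)           # evenly spaced steps, all 0.1 apart
rungs_s = np.diff(s)                # squeezed at both ends, stretched mid-tones
# The S-curve's first and last steps are smaller than the linear ones:
# shadow and highlight detail gets compressed, mid-tone contrast increases.
```

With the linear curve every rung stays evenly spaced, which is why it is often the better starting point when you plan to apply your own curves later.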
I do not believe unlimited gamut is actually possible (computers are finite, after all). But as for impossible colors, the Lab color space in Photoshop is more than capable of producing truly imaginary colors. And Adobe RGB, even sRGB, let alone ProPhoto, can likewise represent colors that some printers and/or some papers cannot reproduce.
A former sports shooter
Follow me at: https://www.flickr.com/photos/bjurasz/
My Etsy store: https://www.etsy.com/shop/mercphoto?ref=hdr_shop_menu
The answer is that your eyes are not spectrometers. They don't see absolute colors. So when photographs are prepared for printing, a lot of black magic is applied. You gotta have the highest respect for the people who are good at this.
If you have any lurking belief that your eyes might be spectrometers, take a few minutes to ponder one of my all time favorite images:
Translated to colorspace definitions, this means a colorspace whose Gamut is defined by its usage, with no predetermined definition. Each image would have a definite Gamut, but it is defined when used, making the system infinite in variability and scope. This is called, "Unrendered RGB Color Space". At some point, this system would have to be converted into a "Rendered RGB Color Space", which could include Adobe RGB, sRGB, etc. RAW files are probably close in concept to the "Extended ISO RGB" specification.
ziggy53
John,
I use the Curve setting in the RAW converter in Linear mode (unless the original image was very flat) - you taught me well, and I remember:
The two gray squares in rutt's image above - one in the light and one in the shaded area - both read 107,107,107. They are not just close, but EXACTLY the same shade of gray!!
Eyes are not spectrophotometers!!
Canon 40D
Canon EF-S 17-85 IS
http://www.flickr.com/trevaftw
Because the human visual system is highly adaptable to context. This is a feature, but it means you can't rely on your eyes for objective reading of color or tone. It's why every single software+visual method of calibrating your monitor is markedly inferior to actually buying a hardware calibrator.
Anyway, this is a deep issue and it's gotten a lot of attention from some very smart people. In particular, Edward Adelson, the MIT professor who created the checkerboard illusion.
I see this as the same issue as targeting sRGB for Web graphics. Do we target sRGB because everybody out there uses sRGB devices? No. Almost no one (statistically speaking) has a properly calibrated monitor (outside of very aware creative pros) or a true native sRGB device. We target sRGB because it's the most reasonable target available. Or put another way, we don't do it because perfection is achievable, but because it's the way we can realistically minimize the degree of inherent imperfection.
Same with calibrating the monitor despite what our eyes will do. We don't calibrate because it will fix our eyes. We calibrate because at least we will have one leg to stand on, whereas without a calibrated monitor we have no legs to stand on. Again, we do it because it's the way we can realistically minimize the degree of inherent imperfection.
I think it strongly depends on your camera and how much data is being stored.
The problem with overexposure is clipping: the sensor has a hard maximum. You can imagine this like a bucket of water (charge = water, CCD = bucket). Once the bucket is full, water spills over.
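The bucket analogy in code, with a made-up 12-bit full-well ceiling:

```python
import numpy as np

full_well = 4095                                     # 12-bit ADC ceiling (illustrative)
incoming = np.array([500, 2000, 4000, 6000, 9000])   # "water" arriving per pixel
recorded = np.minimum(incoming, full_well)           # overflow simply spills away
clipped = incoming > full_well                       # these highlights are gone for good
print(recorded.tolist())   # [500, 2000, 4000, 4095, 4095]
```

Note that the two clipped pixels come out identical even though one received 50% more light: once the bucket overflows, the difference is unrecoverable.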
However, most cameras have a comfortable safety margin. My 20D can go close to +1 EV and I can still bring it back in the RAW converter. If I go higher, the RAW converter "invents" data in the blue channel.
In order to understand why we get a little less noise, let's talk about how different ISO levels are generated:
The data read out from the CCD is amplified and then sent through an (in most cases 12-bit) analog-to-digital converter (ADC). If you record less light, you simply increase the amplification factor (via the Programmable Gain Amplifier, or PGA). Unfortunately, the CCD creates noise, and so does the PGA. If you increase the amplification, more noise shows up at the output (if the noise floor is constant, amplifying by 2 means 2 times more noise).
So in effect, if we don't have enough light but still want to use the full dynamic range of the ADC, we need to amplify -> more noise.
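A sketch of that amplification step, assuming a constant noise floor ahead of the PGA (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 5.0            # fixed noise floor before the PGA (made up)
signal = 100.0
for gain in (1, 2, 4):      # e.g. doubling ISO twice
    out = gain * (signal + rng.normal(0.0, read_noise, 200_000))
    # With a constant floor, doubling the gain doubles the measured noise:
    # the standard deviation of `out` is approximately gain * read_noise.
    print(gain, float(out.std()))
```

The signal scales with the gain too, but any noise injected before the amplifier is scaled right along with it, which is the core of the high-ISO noise penalty.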
Noise is also a function of time. The longer you wait, the more noise accumulates (basically the CCD is an integrator).
So it makes no sense to overexpose at ISO 200 and hope to beat the results of ISO 100. But because my 20D can overexpose by about 1 stop without losing data, I can overexpose the image at ISO 100 and then reduce the exposure (basically reducing the gain) and essentially get the same result as if my camera had an ISO 50 setting (which it doesn't). I never do it, though; I consider the overexposure capability my safety margin for corrections.
By the same token, I can underexpose an ISO 3200 shot (if there is not enough light but I need to shoot handheld). When I then amplify the signal digitally, I pretty much have an ISO 6400 shot (unusable for most applications).
I did that exactly once:
http://www.aguntherphotography.com/galleries/south_america/peru/cusco/xmas.html
but not to the full extent of 1 stop (maybe 1/3). I had no tripod and it was night. I think it came out OK.
Anyway, this is how I understand the process. I could be wrong.
California Photo Scout
Travel Guides
The thought about pushing then pulling does work, and it's a good rule of thumb, but that's all it is. Like any other rule of thumb, you can't expect it to apply to all cases.
Yes, it's true that more usable bits are available in the highlights. Yes, it's true this is not only because of the linear response of the sensor but also because noise fills in the low bits first (the SNR is lower there).
However, you have to weigh that against a couple of factors:
1. Longer shutter speeds can reduce your chance of getting a sharp picture. Sometimes a sharp noisy picture is a lot better than a blurry "clean" picture.
2. Scenes with wide dynamic ranges will be harder to capture without blowing the highlights. Again, a noisy image with good tonal distribution may be tons better than a "clean" image with clipped highlights.
So, consider the subject and the final product when making the shot. Bracket exposures if you can and shoot RAW to maximize flexibility in post.
Erich