overexposure vs. lower ISO

RedSox Registered Users Posts: 92 Big grins
edited November 9, 2006 in Cameras
I read somewhere that if you expose to the right (overexpose) when shooting at high ISO and then correct the exposure during RAW conversion, you will get less noise than shooting at high ISO with the right exposure.

Say you overexpose by 1 full stop when shooting at ISO 800 and then correct the exposure during RAW conversion: you will get much less noise than just shooting ISO 800 without exposure compensation.

My question is: isn't overexposing by 1 full stop at ISO 800 the same, in terms of the shutter/aperture required, as shooting ISO 400 without exposure compensation? If so, why not just shoot ISO 400?
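A rough sketch of the arithmetic behind the question, assuming the standard EV relationship (the scene brightness below is an invented number, purely for illustration): dialing in +1 stop of compensation at ISO 800 asks the camera for the same shutter/aperture as a metered exposure at ISO 400.

    import math

    def shutter_for(scene_ev100, iso, f_number, compensation_ev=0.0):
        """Shutter time (seconds) the meter would pick, using EV = log2(N^2 / t)."""
        # Raising ISO raises the working EV; positive compensation asks for more
        # light, i.e. a lower working EV.
        ev = scene_ev100 + math.log2(iso / 100.0) - compensation_ev
        return f_number ** 2 / 2 ** ev

    scene = 12.0   # hypothetical metered scene brightness, as EV at ISO 100
    f = 2.8

    t_400 = shutter_for(scene, iso=400, f_number=f)                        # "correct" at ISO 400
    t_800 = shutter_for(scene, iso=800, f_number=f, compensation_ev=1.0)   # +1 stop at ISO 800

    print(f"ISO 400, no compensation: 1/{1 / t_400:.0f} s at f/{f}")
    print(f"ISO 800, +1 stop        : 1/{1 / t_800:.0f} s at f/{f}")
    # Both lines print the same shutter speed: the sensor collects the same light,
    # and only the analog gain applied afterwards differs.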

Comments

  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited November 6, 2006
    I'm not sure that "expose to the right" means over-exposure; it just means "avoid under-exposure" to reduce noise, and it certainly works well for that purpose.

    Proper exposure is still important to avoid blowing out the highlights.

    In a scene with compressed exposure values, mostly middle tones with few pure whites and pure blacks, you can nudge the exposure settings up or down a bit and still have an "acceptable exposure". The exposure nudged up a bit will have less noise.

    In a scene with expanded exposure values, white wedding gown and black tuxedo for instance, proper exposure is critical, even in RAW.

    As far as the second part of your question, an overexposure at ISO 800 is not the same as a normal exposure at ISO 400. Try it in full manual mode and see the differences for yourself in both exposure values and resulting images.

    ziggy53
    Moderator of the Cameras and Accessories forums
  • colourbox Registered Users Posts: 2,095 Major grins
    edited November 6, 2006
    This is theoretical because I haven't had the time to test it, but...

    If you shoot ISO 400 at the "correct" exposure, the tonal distribution lands where it traditionally should.

    If you shoot ISO 800 at 1 stop over, the tonal distribution is shifted into the higher bits. According to ETTR theory, that should be higher quality because the higher bits are both denser and less noisy.

    The question, which I don't know the answer to, is whether shifting to the higher bits reduces noise more than the increase in noise you get from boosting your ISO. It probably depends on the camera you use, because some chipsets boost ISO with less noise than others. I'm guessing that shooting 1 stop over at 800 might work better on a DIGIC II Canon than on a camera with a lower-quality sensor. For example, on my aging point-and-shoot, increasing the ISO results in so much noise that I don't bother setting it above 100, so ETTR will not overcome the additional noise on that particular camera. But I'll bet it would work better on my XT.

    Many of the proponents of Expose to the Right on the web take great pains to emphasize that ETTR should not be called overexposure, because they feel the term overexposure has a negative connotation. In digital, they say, if you are using ETTR you are placing tones in an optimal position (similar to how the Zone System is about placing tones in an optimal position), but you aren't overexposed until you clip your high end. ETTR doesn't advocate clipping anything that shouldn't be clipped.
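    A quick way to see the "denser" part, assuming the 12-bit linear encoding most ETTR articles describe: each stop down from clipping gets half as many raw levels as the stop above it.

        FULL_SCALE = 4096   # 2**12 code values in a 12-bit linear RAW file
        STOPS = 6           # look at the top six stops below clipping

        for stop in range(STOPS):
            hi = FULL_SCALE // (2 ** stop)
            lo = FULL_SCALE // (2 ** (stop + 1))
            print(f"stop {stop + 1} below clipping: {hi - lo:4d} levels "
                  f"(raw values {lo}..{hi - 1})")

        # The top stop gets 2048 levels, the next 1024, and the sixth only 64,
        # which is why shifting tones toward the right keeps them where the
        # encoding is densest.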
  • pathfinder Super Moderators Posts: 14,703 moderator
    edited November 6, 2006
    Keeping the histogram "to the right" does save data bits in the shadow tones, since they have been moved "to the right" also, but not necessarily any better than a proper exposure at a lower ISO would have done.

    There is a great article by Michael Reichmann about a discussion he had with Thomas Knoll (the original author of Photoshop - watch for his name in the splash screen whenever you load PSCS2) - http://www.luminous-landscape.com/tutorials/expose-right.shtm

    But there is no discussion of one ISO versus another - indeed, shooting "to the right" is essentially LOWERING the ISO you are using - kind of like shooting 400 ASA Tri-X at 200 ASA... you are pushing the histogram to the right, BUT NOT overexposing. You are, in effect, increasing exposure all across the tonalities, but most especially in the shadow areas where there is dramatically less data collected in the pixel wells.
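    A tiny sketch of why the extra exposure survives the later pull, using only the photon shot-noise rule of thumb (SNR is roughly the square root of the photon count); the photon numbers are invented for illustration.

        import math

        shadow_photons_metered = 100   # a deep shadow at the metered exposure
        shadow_photons_ettr    = 200   # the same tone given one extra stop of light

        snr_metered = shadow_photons_metered / math.sqrt(shadow_photons_metered)
        snr_ettr    = shadow_photons_ettr    / math.sqrt(shadow_photons_ettr)

        print(f"shadow SNR at the metered exposure  : {snr_metered:.1f}")
        print(f"shadow SNR when exposed to the right: {snr_ettr:.1f}")
        # Pulling the brighter file back by a stop in the converter divides signal
        # and noise together, so the ~1.4x shadow SNR advantage is kept.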

    This really works ONLY if you are shooting RAW in 16-bit mode and bringing the image into PSCS in a 16-bit color space like ProPhoto or Adobe RGB. Shooting JPEGs is asking for big trouble.
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
  • wxwax Registered Users Posts: 15,471 Major grins
    edited November 6, 2006
    Just to note that changing ISO isn't the only way to "expose to the right" of the histogram. I prefer changing shutter speeds or aperture, since we're only talking one or two stops.
    Sid.
    Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
    http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
  • pathfinder Super Moderators Posts: 14,703 moderator
    edited November 6, 2006
    Actually, Sid, changing ISO has nothing to do with exposing "to the right". And I do know that you know this too!!

    Plus, exposure compensation, of course, can help with exposing "to the right".

    I would just call it exposing properly in the first place myself.

    It comes back to what the proper exposure is, and many in-camera meters are not terribly accurate compared to an incident meter or even Sunny 16.

    I know this because I have compared the meter readings on my cameras with the Sunny 16 settings and with a Sekonic incident meter. Guess which is more accurate, and tends to push the histogram to the right? Not the meter in my camera.

    I suspect the meters in cameras are slightly biased to underexposure - think how well that worked with slide and negative film materials.

    It keeps highlights from being blown with silicon sensors too, at the slight expense of the shadow details perhaps.
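    For anyone who wants to run the same comparison, a minimal sketch of the Sunny 16 rule being used as the reference (full sun, frontlit subject: shutter roughly 1/ISO at f/16, scaled for other apertures):

        def sunny_16(iso, f_number=16.0):
            """Approximate shutter time (s) for a frontlit subject in full sun."""
            base = 1.0 / iso                       # shutter speed at f/16
            return base * (f_number / 16.0) ** 2   # same exposure at other apertures

        for iso in (100, 200, 400):
            print(f"ISO {iso}: ~1/{round(1 / sunny_16(iso))} s at f/16, "
                  f"~1/{round(1 / sunny_16(iso, f_number=8.0))} s at f/8")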
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
  • Cameron Registered Users Posts: 745 Major grins
    edited November 6, 2006
    There's a great article at Luminous Landscape that talks about why exposing to the right gives you more image data to work with. As long as you're not blowing your highlights, you'll theoretically get a better image - especially with regards to shadow detail.
  • wxwax Registered Users Posts: 15,471 Major grins
    edited November 6, 2006
    Correct me if I'm wrong, PF, but exposing to the right means shifting the lump in the histogram to the right. It's a description of the ends, not the means, right?

    So any means that shifts the lump counts, even tho we both agree that changing ISO isn't the best way to go about it.
    Sid.
    Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
    http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
  • colourbox Registered Users Posts: 2,095 Major grins
    edited November 6, 2006
    wxwax wrote:
    Correct me if I'm wrong, PF, but exposing to the right means shifting the lump in the histogram to the right. It's a description of the ends, not the means, right?

    So any means that shifts the lump counts, even tho we both agree that changing ISO isn't the best way to go about it.

    The way I understand it is more specific. Any means that shifts the lump at the time the image hits the sensor counts, though we're not exactly sure which ones count more.

    That's just to distinguish from means that shift the lump in post-processing, which may actually hurt the image if it means you're scooping tones from the noisy, coarse shadow bits and shoving them up into the midtones where we can all see the noise.
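    A toy simulation of that difference, using read noise only and made-up numbers: the same shadow tone either captured two stops brighter in camera, or captured dark and pushed two stops in post.

        import random
        import statistics

        random.seed(1)

        def capture(mean_signal, read_noise=4.0, frames=10_000):
            """Simulate many readings of one tone with Gaussian read noise."""
            return [random.gauss(mean_signal, read_noise) for _ in range(frames)]

        dark_capture   = capture(mean_signal=10)         # shadow as metered
        bright_capture = capture(mean_signal=40)         # same tone, +2 EV at capture time

        pushed_in_post = [4 * v for v in dark_capture]   # +2 EV applied afterwards

        print("noise after pushing in post  :", round(statistics.stdev(pushed_in_post), 1))
        print("noise when exposed at capture:", round(statistics.stdev(bright_capture), 1))
        # Pushing in post multiplies the read noise along with the signal, so the
        # post-processed version ends up roughly 4x noisier for the same midtone value.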
  • wxwax Registered Users Posts: 15,471 Major grins
    edited November 7, 2006
    colourbox wrote:
    The way I understand it is more specific. Any means that shifts the lump at the time the image hits the sensor counts, though we're not exactly sure which ones count more.

    That's just to distinguish from means that shift the lump in post-processing, which may actually hurt the image if it means you're scooping tones from the noisy, coarse shadow bits and shoving them up into the midtones where we can all see the noise.
    Right, agreed, in camera.
    Sid.
    Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
    http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
  • Scott_Quier Registered Users Posts: 6,524 Major grins
    edited November 7, 2006
    pathfinder wrote:
    This really works ONLY if you are shooting RAW in 16-bit mode and bringing the image into PSCS in a 16-bit color space like ProPhoto or Adobe RGB. Shooting JPEGs is asking for big trouble.
    Pathfinder,

    Is not sRGB also a 16-bit space? I've always believed it to be so and CS2 indicates it is (or at least I think it does).

    Any clarification you can offer here would be greatly appreciated. I have a workflow and I like it, but I will change it if I'm not getting all the data I can from the camera. BTW - I shoot RAW and convert to sRGB in ACR - if that helps any.

    Thanks
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited November 7, 2006
    Pathfinder,

    Is not sRGB also a 16-bit space? I've always believed it to be so and CS2 indicates it is (or at least I think it does).

    Any clarification you can offer here would be greatly appreciated. I have a workflow and I like it, but I will change it if I'm not getting all the data I can from the camera. BTW - I shoot RAW and convert to sRGB in ACR - if that helps any.

    Thanks

    Scott,

    I think the point was that "JPGs" are not 16 bit. If you use 16-bit TIFFs in sRGB, you still have a pretty large Gamut, but not as large as some of the other color spaces.

    If your intent is only digital printing or Internet capable files, sRGB is plenty good for images. (SmugMug only uses sRGB, for instance.)

    If you have/use one of the high-end inkjet printers (CMYK) or do book publishing with traditional makeready, you might consider one of the larger Gamut color spaces.

    Whichever way you go, if you stay in 16 bit for processing, you will retain most of the benefit, regardless of the colorspace used. The difference between 16 bit and 8 bit for processing is huge, less so for delivery and presentation. (Your camera records in 12 bits, so preserving as much as possible, as long as possible, is to your benefit.)

    Interestingly, if you save from RAW in Adobe RGB, process in Adobe RGB, and then save in sRGB for printing or Web, you can actually be worse off than if you originally saved sRGB from RAW. Excess conversion can work against you. Single conversion is usually best.

    If you save the important RAW files, you can change your mind at any time, even in the future, even if more advanced color spaces emerge (unless that original RAW filetype becomes obsolete - that's the reason for an open RAW format specification).

    http://www.smugmug.com/help/srgb-versus-adobe-rgb-1998
    http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm
    http://www.cambridgeincolour.com/tutorials/color-spaces.htm
    http://en.wikipedia.org/wiki/RGB_color_space
    http://www.shootsmarter.com/monitorcentral.html

    (Note that what we should all be using is "Extended ISO RGB" with unlimited Gamut. WooHoo!)
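    A toy illustration of the 16-bit versus 8-bit point, using a gamma encode/decode as a stand-in for a color-space conversion: round-trip all 256 8-bit levels once, rounding to 8 bits at each step, and count how many distinct levels survive.

        def roundtrip_8bit(gamma=2.2):
            """Encode and then exactly invert a gamma curve, rounding to 8 bits each time."""
            values = list(range(256))
            encoded = [round(((v / 255) ** (1 / gamma)) * 255) for v in values]
            decoded = [round(((v / 255) ** gamma) * 255) for v in encoded]
            return len(set(decoded))

        print("distinct 8-bit levels left after one round trip:", roundtrip_8bit())
        # Some neighboring levels merge and never come back; doing the same math in
        # 16 bits keeps the rounding error far below one 8-bit step, so nothing merges.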


    ziggy53
    Moderator of the Cameras and Accessories forums
  • Scott_Quier Registered Users Posts: 6,524 Major grins
    edited November 7, 2006
    ziggy53 wrote:
    Scott,

    I think the point was that "JPGs" are not 16 bit. If you use 16-bit TIFFs in sRGB, you still have a pretty large Gamut, but not as large as some of the other color spaces.

    If your intent is only digital printing or Internet capable files, sRGB is plenty good for images. (SmugMug only uses sRGB, for instance.)

    If you have/use one of the high-end inkjet printers (CMYK) or do book publishing with traditional makeready, you might consider one of the larger Gamut color spaces.

    Whichever way you go, if you stay in 16 bit for processing, you will retain most of the benefit, regardless of the colorspace used. The difference between 16 bit and 8 bit for processing is huge, less so for delivery and presentation. (Your camera records in 12 bits, so preserving as much as possible, as long as possible, is to your benefit.)

    Interestingly, if you save from RAW in Adobe RGB, process in Adobe RGB, and then save in sRGB for printing or Web, you can actually be worse off than if you originally saved sRGB from RAW. Excess conversion can work against you. Single conversion is usually best.

    If you save the important RAW files, you can change your mind at any time, even in the future, even if more advanced color spaces emerge (unless that original RAW filetype becomes obsolete - that's the reason for an open RAW format specification).

    http://www.smugmug.com/help/srgb-versus-adobe-rgb-1998
    http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm
    http://www.cambridgeincolour.com/tutorials/color-spaces.htm
    http://en.wikipedia.org/wiki/RGB_color_space
    http://www.shootsmarter.com/monitorcentral.html

    (Note that what we should all be using is "Extended ISO RGB" with unlimited Gamut. WooHoo!)


    ziggy53
    This confirms everything I have read here (and elsewhere). I know from my 9-5 that excess conversion is not good. In many instances, it is impossible to get back to the starting point if you convert from one format to another and then back.

    Unlimited Gamut gets us into all sorts of other problems, doesn't it? Like impossible colors, and colors that are even harder (if not impossible) to reproduce in a printed medium, etc.
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited November 7, 2006
    ...

    Unlimited Gamut gets us into all sorts of other problems, doesn't it? Like impossible colors, and colors that are even harder (if not impossible) to reproduce in a printed medium, etc.

    Oh sure, burst my bubble. Next, you're going to tell me there's no Santa?

    ziggy53 (Saddened by this recent dose of reality.)

    I'm better now! :D
    ziggy53
    Moderator of the Cameras and Accessories forums
  • pathfinder Super Moderators Posts: 14,703 moderator
    edited November 7, 2006
    Scott,

    I think Ziggy's answer is great. I prefer to avoid multiple conversions back and forth between color spaces, but I do convert to ProPhoto on the way to PSCS from RAW, and frequently take a pass through LAB on the way to an 8-bit sRGB file as my final JPEG.

    I tend not to save TIFFs, as the storage is just so large and these are not commercial products for me. If I planned on selling the images and making my living that way, I would save them either as 16-bit TIFFs or as uncompressed layered .psd files, but they would then be 50-200 MB files, I suspect, and I don't want to be saving just one or two files per CD-R.

    I bring my images from RAW into ProPhoto RGB in PSCS2 - a very large color space, in 16 bit - for my initial editing: setting black and white points and curves. Eventually the image is saved as an 8-bit sRGB-tagged JPEG to be uploaded to Smuggy.

    I do it this way because my computer allows me to without groaning too much, but if you decide to process very large images this way, it makes very large demands on your processing power and memory and may slow your computer a lot more than 8-bit files do. Smart Sharpening a large 16-bit file is much slower than an 8-bit sRGB file - almost instantaneous for the 8-bit file versus 30-60 seconds for the 16-bit file on my desktop machine.

    On a laptop, I tend to just use 8-bit files, as my laptop is not as fast as my desktop and I get impatient. I never said I was consistent or right.
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
  • rutt Registered Users Posts: 6,511 Major grins
    edited November 7, 2006
    You guys are going to just love the "Resolution" chapter of Dan Margulis' Professional Photoshop, 5th Edition. He's a real curmudgeon about the need for 16 bits, as you may already know. Whatever you think about this argument, you need to be careful, because he is right about a lot of basics. First, although it is true that 8 bits give you fewer possible levels, that doesn't imply that they can't represent a deeper shadow or a brighter highlight. In other words, the ladder has fewer steps, but it can still be just as high.

    What does matter a lot for converting from raw with a lot of detail in the highlights is the "curve" (if you are using ACR). The default curve is an S which tends to lose detail in both highlights and shadows, in effect putting the steps of the ladder further apart at the top and bottom and closer together in the middle. The linear curve keeps the steps evenly spaced and is often what you want.

    Similarly, beware of the "contrast" adjustment. If you intend to do your own post and use curves, it's often better to just set to 0.

    Now I do what Pathfinder suggests and use 16 bit processing until the very end in most situations. But I do it because my computer is fast enough that it almost never matters and I consider it something of a superstition. In fact there are a few rare cases where even Dan accepts that 8 bit processing has limitations, but they are very rare. Still, it would be a pain to get bitten.

    In any case, this isn't one of those cases. Try converting with the linear curve and 0 contrast adjustment. Then apply curves or whatever.
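    A small sketch of the curve point, using a generic smoothstep as the S-curve rather than ACR's actual default: where a curve's slope is shallow, neighboring input tones land on nearly the same output value, which is the highlight and shadow detail loss described above.

        def linear(x):
            return x

        def s_curve(x):
            return x * x * (3 - 2 * x)   # smoothstep on [0, 1]; stand-in for an S-curve

        def output_step(curve, x, dx=1 / 255):
            """How far the output moves for one 8-bit step of input at position x."""
            return curve(x + dx) - curve(x)

        for region, x in (("shadows", 0.05), ("midtones", 0.50), ("highlights", 0.94)):
            print(f"{region:10s} linear step {output_step(linear, x):.4f}   "
                  f"S-curve step {output_step(s_curve, x):.4f}")
        # The S-curve's steps are smaller than linear in the shadows and highlights
        # (tones squeezed together) and larger in the midtones (tones spread apart).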
    If not now, when?
  • mercphoto Registered Users Posts: 4,550 Major grins
    edited November 7, 2006
    This confirms everything I have read here (and elsewhere). I know from my 9-5 that excess conversion is not good. In many instances, it is impossible to get back to the starting point if you convert from one format to another and then back.
    Correct. Excessive conversions are not good, and conversions are not exactly reversible either. It is not guaranteed, for example, that converting from color space A to B, and then immediately back to A again will give you back the same bits. (Assuming you really are doing a conversion that second time, and not an undo.)
    Unlimited Gamut gets us into all sorts of other problems, doesn't it? Like impossible colors, and colors that are even harder (if not impossible) to reproduce in a printed medium, etc.
    I do not believe unlimited gamut is actually possible (computers are finite, after all). But as for impossible colors, the Lab color space in Photoshop is more than capable of producing truly imaginary colors. And Adobe RGB, even sRGB, let alone ProPhoto, can represent colors that some printers and/or some papers cannot reproduce.
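    A concrete example of a conversion that doesn't come back exactly, using the JPEG-style RGB/YCbCr transform as a stand-in for any color conversion done in 8-bit integers:

        # Round-trip one color through RGB -> YCbCr -> RGB (BT.601 coefficients),
        # rounding to whole 8-bit values at each stage.
        def rgb_to_ycbcr(r, g, b):
            y  = round( 0.299    * r + 0.587    * g + 0.114    * b)
            cb = round(-0.168736 * r - 0.331264 * g + 0.5      * b + 128)
            cr = round( 0.5      * r - 0.418688 * g - 0.081312 * b + 128)
            return y, cb, cr

        def ycbcr_to_rgb(y, cb, cr):
            r = round(y + 1.402    * (cr - 128))
            g = round(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128))
            b = round(y + 1.772    * (cb - 128))
            return r, g, b

        original = (41, 207, 96)   # arbitrary color
        print("original  :", original)
        print("round trip:", ycbcr_to_rgb(*rgb_to_ycbcr(*original)))
        # For this color the blue channel comes back one level off (96 -> 97):
        # the bits you get back are not always the bits you started with.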
    Bill Jurasz - Mercury Photography - Cedar Park, TX
    A former sports shooter
    Follow me at: https://www.flickr.com/photos/bjurasz/
    My Etsy store: https://www.etsy.com/shop/mercphoto?ref=hdr_shop_menu
  • rutt Registered Users Posts: 6,511 Major grins
    edited November 7, 2006
    I think the most amazing thing is that the gamut of art books is actually quite a bit narrower than the gamut of a decent monitor and also narrower than that of a good inkjet with expensive paper. Yet we have all seen really beautiful reproductions in such books, ones we'd be proud of coming from our own printers. What exactly is the magic behind this?

    The answer is that your eyes are not spectrometers. They don't see absolute colors. So when photographs are prepared for printing, a lot of black magic is applied. You gotta have the highest respect for the people who are good at this.

    If you have any lurking belief that your eyes might be spectrometers, take a few minutes to ponder one of my all time favorite images:

    [image: checker shadow illusion (2790910-M.jpg)]
    If not now, when?
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited November 7, 2006
    mercphoto wrote:
    ...
    I do not believe unlimited gamut is actually possible (computers are finite, after all). But as for impossible colors, the Lab color space in Photoshop is more than capable of producing truly imaginary colors. And Adobe RGB, even sRGB, let alone ProPhoto, can represent colors that some printers and/or some papers cannot reproduce.
    If I understand it correctly (no guarantees), imagine a "ruler" with definite length but no scale, able to slide along a scale of infinite length. The "ruler" only has a scale when it comes to rest at some point on the infinite scale.

    Translated to colorspace definitions, this means a colorspace whose Gamut is defined by its usage, with no predetermined definition. Each image would have a definite Gamut, but it is defined when used, making the system infinite in variability and scope. This is called an "Unrendered RGB Color Space". At some point, this system would have to be converted into a "Rendered RGB Color Space", which could include Adobe RGB, sRGB, etc. RAW files are probably close in concept to the "Extended ISO RGB" specification.

    ziggy53
    Moderator of the Cameras and Accessories forums
  • pathfinder Super Moderators Posts: 14,703 moderator
    edited November 7, 2006
    rutt wrote:
    You guys are going to just love the "Resolution" chapter of Dan Margulis' Professional Photoshop, 5th Edition. He's a real curmudgeon about the need for 16 bits, as you may already know. Whatever you think about this argument, you need to be careful, because he is right about a lot of basics. First, although it is true that 8 bits give you fewer possible levels, that doesn't imply that they can't represent a deeper shadow or a brighter highlight. In other words, the ladder has fewer steps, but it can still be just as high.

    What does matter a lot for converting from raw with a lot of detail in the highlights is the "curve" (if you are using ACR). The default curve is an S which tends to lose detail in both highlights and shadows, in effect putting the steps of the ladder further apart at the top and bottom and closer together in the middle. The linear curve keeps the steps evenly spaced and is often what you want.

    Similarly, beware of the "contrast" adjustment. If you intend to do your own post and use curves, it's often better to just set to 0.

    Now I do what Pathfinder suggests and use 16 bit processing until the very end in most situations. But I do it because my computer is fast enough that it almost never matters and I consider it something of a superstition. In fact there are a few rare cases where even Dan accepts that 8 bit processing has limitations, but they are very rare. Still, it would be a pain to get bitten.

    In any case, this isn't one of those cases. Try converting with the linear curve and 0 contrast adjustment. Then apply curves or whatever.

    John,

    I use the Curve setting in the RAW converter in Linear mode (unless the original image was very flat) - you taught me well, and I remember!


    [image: checker shadow illusion (2790910-S.jpg)]
    The two gray squares in rutt's image above - one in the light and one in the shaded area - both read 107,107,107. They are not just close, but EXACTLY the same shade of gray!!

    Eyes are not spectrophotometers!!
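    If you want to check the squares yourself, a minimal sketch using Pillow; the filename and pixel coordinates are placeholders for your own copy of the image.

        from PIL import Image

        img = Image.open("checker_shadow.jpg").convert("RGB")   # placeholder filename

        # Placeholder coordinates: pick one pixel inside each of the two squares.
        square_in_light  = img.getpixel((120, 180))
        square_in_shadow = img.getpixel((240, 320))

        print("square in the light :", square_in_light)
        print("square in the shadow:", square_in_shadow)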
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
  • Duckys54 Registered Users Posts: 273 Major grins
    edited November 8, 2006
    How can they be the same yet different?
    I am Trevor and I have upgraded:
    Canon 40D
    Canon EF-S 17-85 IS

    http://www.flickr.com/trevaftw
  • colourbox Registered Users Posts: 2,095 Major grins
    edited November 8, 2006
    Duckys54 wrote:
    How can they be the same yet different?

    Because the human visual system is highly adaptable to context. This is a feature, but it means you can't rely on your eyes for objective reading of color or tone. It's why every single software+visual method of calibrating your monitor is markedly inferior to actually buying a hardware calibrator.
  • Scott_Quier Registered Users Posts: 6,524 Major grins
    edited November 8, 2006
    pathfinder wrote:
    John,

    I use the Curve setting in the RAW converter in Linear mode (unless the original image was very flat) - you taught me well, and I remember!


    The two gray squares in rutt's image above - one in the light and one in the shaded area - both read 107,107,107. They are not just close, but EXACTLY the same shade of gray!!

    Eyes are not spectrophotometers!!
    I did the same thing - popped this into PS and came up with the same results. WOW!
  • rutt Registered Users Posts: 6,511 Major grins
    edited November 8, 2006
    I draw a very different conclusion from this. What good is a calibrated monitor when your eyes aren't spectrometers? What actually matters is how people perceive color in context, and that can change radically as an image is reproduced on different output devices.

    Anyway, this is a deep issue and it's gotten a lot of attention from some very smart people - in particular Edward Adelson, the MIT professor who created the checker shadow illusion.
    colourbox wrote:
    Because the human visual system is highly adaptable to context. This is a feature, but it means you can't rely on your eyes for objective reading of color or tone. It's why every single software+visual method of calibrating your monitor is markedly inferior to actually buying a hardware calibrator.
    If not now, when?
  • colourbox Registered Users Posts: 2,095 Major grins
    edited November 8, 2006
    rutt wrote:
    I draw a very different conclusion from this. What good is a calibrated monitor when your eyes aren't spectrometers?

    I see this as the same issue as targeting sRGB for Web graphics. Do we target sRGB because everybody out there uses sRGB devices? No. Almost no one (statistically speaking) has a properly calibrated monitor (outside of very aware creative pros) or a true native sRGB device. We target sRGB because it's the most reasonable target available. Or put another way, we don't do it because perfection is achievable, but because it's the way we can realistically minimize the degree of inherent imperfection.

    Same with calibrating the monitor despite what our eyes will do. We don't calibrate because it will fix our eyes. We calibrate because at least we will have one leg to stand on, whereas without a calibrated monitor we have no legs to stand on. Again, we do it because it's the way we can realistically minimize the degree of inherent imperfection.
  • agunther Registered Users Posts: 242 Major grins
    edited November 8, 2006
    RedSox wrote:
    I read somewhere that if you expose to the right (overexpose) when shooting at high ISO and then correct the exposure during RAW conversion, you will get less noise than shooting at high ISO with the right exposure.

    Say you overexpose by 1 full stop when shooting at ISO 800 and then correct the exposure during RAW conversion: you will get much less noise than just shooting ISO 800 without exposure compensation.

    My question is: isn't overexposing by 1 full stop at ISO 800 the same, in terms of the shutter/aperture required, as shooting ISO 400 without exposure compensation? If so, why not just shoot ISO 400?
    Here is how I understand the process (correct me if I am wrong).
    I think it strongly depends on your camera and how much data is being stored.
    The problem of overexposure is clipping. This means the sensor has a hard maximum. You can imagine this like a bucket of water (charge=water, ccd=bucket). Once the bucket is full, water spills over.

    However, most cameras have a comfortable safety margin. My 20D can go close to +1 EV over and I can still bring it back in the RAW converter. If I go higher, the RAW converter starts "inventing" data in the blue channel.

    In order to understand why we get a little less noise, let's talk about how the different ISO levels are generated:
    The data read out from the CCD is amplified and then sent through an (in most cases 12-bit) analog-to-digital converter (ADC). If you recorded less light, you simply increase the amplification factor (via the Programmable Gain Amplifier, or PGA). Unfortunately the CCD creates noise and so does the PGA. If you increase the amplification, more noise shows up at the output (if the noise floor is constant, amplifying by 2 means 2 times more noise).

    So in effect if we don't have enough light, but we still want to use the full dynamic range of the ADC, we need to amplify -> more noise

    Noise is also a function of time. The longer you wait, the more noise accumulates (basically the CCD is an integrator).

    So it makes no sense to overexpose at ISO 200 and hope to beat the results of ISO 100, but because my 20D can overexpose by about 1 stop without losing data, I can overexpose the image at ISO 100 and then reduce exposure (basically reducing the gain) and essentially have the same result as if my camera had an ISO 50 setting (which it doesn't). I never do it, though; I consider the overexposure capability my safety margin for corrections.

    By the same token, I can underexpose an ISO 3200 shot (if there is not enough light but I need to shoot handheld). When I then amplify the signal digitally, I pretty much have an ISO 6400 shot (unusable for most applications).

    I did that exactly once:
    http://www.aguntherphotography.com/galleries/south_america/peru/cusco/xmas.html
    but not to the full extent of 1 stop (maybe 1/3). I had no tripod and it was night. I think it came out OK.

    Anyways, this is how I understand the process. I could be wrong.
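    A toy version of that signal chain (all numbers invented; shot noise, noise ahead of the amplifier, and noise after it), comparing a metered ISO 400 shot against the same shutter/aperture at ISO 800 pulled back one stop in the converter:

        import random
        import statistics

        random.seed(0)

        def shoot(gain, photons=25, sensor_noise=2.0, downstream_noise=20.0, frames=20_000):
            """output = gain * (photons + pre-amp noise) + post-amp (ADC/readout) noise."""
            frames_out = []
            for _ in range(frames):
                signal = random.gauss(photons, photons ** 0.5)    # photon shot noise
                signal += random.gauss(0, sensor_noise)           # noise before the PGA
                frames_out.append(gain * signal + random.gauss(0, downstream_noise))
            return frames_out

        iso400        = shoot(gain=4)                    # metered exposure at ISO 400
        iso800_pulled = [v / 2 for v in shoot(gain=8)]   # same shutter/aperture, -1 EV pull

        print("shadow noise, ISO 400 metered      :", round(statistics.stdev(iso400), 1))
        print("shadow noise, ISO 800 +1 EV, pulled:", round(statistics.stdev(iso800_pulled), 1))
        # When the noise added after the amplifier dominates (deep shadows), the
        # higher-gain frame comes out a bit cleaner after the pull; when photon shot
        # noise dominates, the two are essentially identical.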
  • erich6 Registered Users Posts: 1,638 Major grins
    edited November 9, 2006
    Folks,

    The idea of pushing the exposure and then pulling it back does work, and it's a good rule of thumb, but that's all it is. Like any other rule of thumb, you can't expect it to apply in all cases.

    Yes, it's true that more usable bits are available in the highlights. Yes, it's true this is not only because of the linear response of the sensor but also because noise fills in the low bits first (the SNR is lower there).

    However, you have to weigh that against a couple of factors:
    1. Longer shutter speeds can reduce your chance of getting a sharp picture. Sometimes a sharp noisy picture is a lot better than a blurry "clean" picture.
    2. Scenes with wide dynamic ranges will be harder to capture without blowing the highlights. Again, a noisy image with good tonal distribution may be tons better than a "clean" image with clipped highlights.

    So, consider the subject and the final product when making the shot. Bracket exposures if you can and shoot RAW to maximize flexibility in post.

    Erich