RAW exposure vs. High ISO
SloYerRoll
Registered Users Posts: 2,788 Major grins
I've noticed that when increasing exposure in any raw converter I've used, the corresponding noise increases. (no duh)
Questions are:
- Does setting your camera to a higher ISO give your picture more or less grain than increasing your exposure in raw conversion?
- I'm pretty sure there isn't, but is there a better raw converter application than ACR or Nikon Capture NX in regards to keeping the noise down? I normally use Noise Ninja via PS plugin when shooting at a high ISO. I just want to make sure that I'm giving this plugin the best possible file to work w/.
-Jon
Comments
2. IMHO Bibble, but that's largely because it has NN built-in.
http://www.chrislaudermilkphoto.com/
Underexposing by 2 stops and then pushing a RAW file +2EV in a quality RAW processor will have a similar effect on noise as raising the ISO a similar amount and then using the meter-indicated exposure. In both cases, you would be using the same aperture and shutter speed and thus the same amount of light would be hitting the sensor. This will generally lead to the same signal-to-noise ratio in your image.
Now, which technique is actually better has been the subject of some very technical discussion on dpreview involving the smart, but somewhat controversial, author of Capture One, and I would say the answer is not clear.
A purely theoretical analysis says that setting the ISO higher should result in a better result because the camera can do some analog amplification before the signal is digitized which should theoretically result in some finer tonal capture (e.g. less posterization) than purely doing digital amplification in a RAW processor. But, there are some experimental results that suggest that a really good RAW processor might actually do better than the ISO amplification in a camera.
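To make the posterization argument concrete, here's a minimal sketch (not any real camera's pipeline; the 12-bit depth and 4x gain are just assumptions) comparing analog gain applied before a 12-bit ADC with a digital push applied after it:

```python
import numpy as np

# Toy model only: a continuous signal that fills just the bottom quarter
# (2 stops down) of a 12-bit ADC's input range.
rng = np.random.default_rng(0)
analog_signal = rng.uniform(0.0, 0.25, size=100_000)   # normalized 0..1

def adc(x, bits=12):
    """Quantize a normalized 0..1 signal to integer codes."""
    return np.clip(np.round(x * (2**bits - 1)), 0, 2**bits - 1).astype(int)

# Path A: 4x analog gain before the ADC (roughly what a 2-stop ISO bump does).
path_a = adc(np.clip(analog_signal * 4, 0.0, 1.0))

# Path B: quantize at base ISO, then push +2 EV (multiply by 4) in software.
path_b = np.clip(adc(analog_signal) * 4, 0, 2**12 - 1)

print("distinct codes with analog gain first:", np.unique(path_a).size)   # ~4096
print("distinct codes with digital push after:", np.unique(path_b).size)  # ~1024
```

Whether that extra tonal granularity matters in practice depends on how much noise is already in the capture, which is part of why the experimental results are murky.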
I've read these discussions carefully and have not been able to reach a clear conclusion for myself. I still raise the ISO in my camera as I've always done.
If you find these discussions in dpreview (sorry I don't have a link handy), one surprising conclusion is that Adobe Camera RAW is NOT good at pushing the exposure multiple stops (something to do with the way they implement the exposure slider) - other RAW processors are much better at it.
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
If I want to maximize noise, on the other hand, I underexpose at high ISO and then increase the exposure setting in ACR. For instance, if I intend to convert the shot to B&W I may want additional noise.
That depends on the camera you're using. I have no reservations bumping the ISO of my Canon since I know that I can get better results than from increasing the exposure slider in Adobe ACR.
You didn't specify what type of shooting you do but if you want to get the best working files possible before processing, you can obtain much better results shooting to the right (of the histogram) and then nudging the exposure slider down to decrease the noise levels.
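If it helps, here's the rough arithmetic behind that advice; the photon counts are invented, and the only real assumption is that shot noise scales as the square root of the light actually collected:

```python
import math

photons_metered = 1000                 # photons a mid-tone cell collects at the metered exposure
photons_ettr = photons_metered * 2     # one stop more actual light (slower shutter / wider aperture)

snr_metered = photons_metered / math.sqrt(photons_metered)   # shot-noise-limited SNR ~ sqrt(N)
snr_ettr = photons_ettr / math.sqrt(photons_ettr)

print(f"metered exposure SNR: {snr_metered:.1f}")   # ~31.6
print(f"+1 EV (ETTR) SNR:     {snr_ettr:.1f}")      # ~44.7
# Pulling the exposure slider back down in the converter scales signal and
# noise together, so the improved ratio survives the brightness correction.
```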
Nikos
This gives me a starting point for some personal tests.
Nikos, I still shoot w/ a lowly D50. At low ISO it takes fantastic shots! High ISOs are a whole other beast, though.
Thanks for the Bibble suggestion, Chris. I tried it on a Mac and the UI was pretty clunky. Maybe I'm just too used to Bridge now.
MW, thanks for the info. I'm gonna start playing around w/ noise & b&w now. So much to learn....
I looked up the DPReview threads on ISO vs. pushing exposure, John. Great stuff!
I'm still up for any other comments from the tons of experts out there in regard to ISOs.
-Jon
Wow, I've got to read this, because I can't believe a RAW processor could truly do better than ISO (amplifier gain). It's almost like saying that unsharp mask is better than more megapixels and sharper lenses. It just doesn't make sense from a theoretical point of view. But I'm not close-minded... if there is a good argument I'd like to read it.
Unfortunately dpreview only lets registered users use the forum search, and they seem to have hidden their forums from Google. If you do come across this link, would you mind posting it?
Absolutely! With linear encoded data, where the first stop of highlights holds half of all the tones, anything but a correct exposure (just shy of highlight clipping, as you point out) robs you of real data in that last stop of shadows where all the noise lives. If you have a 12-bit file with six stops of range, the first stop of highlights has 2048 levels if properly exposed, the last stop only 64. Each stop has half the levels of the one above it. If you underexpose, that last stop now has far fewer than 64 levels of good data. More noise.
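To see the halving concretely, here's a quick sketch assuming a linear 12-bit file and six stops of range, as in the example above:

```python
total_levels = 2 ** 12      # 4096 values in a 12-bit linear file
levels_per_stop = []
remaining = total_levels
for stop in range(6):       # six stops of range, brightest first
    remaining //= 2         # each stop down holds half the remaining values
    levels_per_stop.append(remaining)

print(levels_per_stop)      # [2048, 1024, 512, 256, 128, 64]
# Underexpose and the scene's darkest tones slide further down this ladder,
# landing where only a few dozen levels (and most of the noise) remain.
```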
Author "Color Management for Photographers"
http://www.digitaldog.net/
That has not been my experience. It makes logical sense, but in practice I've gotten cleaner images by going to ISO 1600 rather than pushing the ISO 400 shot.
I usually take a bag of salt with me on my rare forays into the DPR troll den.
SloYerRoll, yeah, the Bibble UI isn't the prettiest. That's a side effect of being available on Win, Linux, and Mac (how many converters can make that claim?). They are promising improvements in v5; not sure when it's coming out, but my gut feeling is a timeframe of several months--they aren't saying. I prefer an ugly UI with good workflow and output over a pretty UI with a crummy workflow and output.
http://www.chrislaudermilkphoto.com/
I think that is right for some RAW processors, and apparently others can push as effectively as, or better than, raising the ISO (if you believe the test results). Which RAW processor are you using?
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
The converter can only do so much with the data it's fed. Pushing isn't providing any more data to work with. In fact, I don't really know what everyone is referring to with pushing. Altering exposure and other rendering tools isn't pushing the data; it's altering the rendering, like all the other tools, based on the data you feed it.
If getting the shot means you have to use a high ISO or end up with a blurry image due to, say, vibration from handheld shooting, guess which is the better of the two options?
Author "Color Management for Photographers"
http://www.digitaldog.net/
"Pushing" just means amplifying the data recorded from the original sensor to make it appear brighter than it was actually recorded.
This was a bit radical to me when I first encountered it, but think about it from a system design perspective.
Compare two shots:
ISO 1600, 1/125th, f/2.8, then developed normally in a RAW converter
ISO 100, 1/125th, f/2.8, then pushed 4 stops in a RAW converter
Because both shots are at the same shutter speed, same aperture, and same sensor, the same amount of signal is actually recorded on the sensor. You will have the same signal-to-noise ratio for both shots because you've recorded the same number of light photons on the same sensor. In fact, you will have identical data captured on the sensor in both shots, even though our traditional thinking (and the camera's meter) says that the ISO 100 shot is 4 stops underexposed.
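Here's a toy simulation of that point; the photon count is made up, and it models shot noise only (real captures add read noise at different points in the chain, which is exactly where the two approaches can diverge):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 400   # what 1/125s at f/2.8 delivers to one cell in this made-up scene

capture = rng.poisson(mean_photons, size=100_000).astype(float)   # identical for both shots

iso1600 = capture * 16            # in-camera gain (4 stops)
iso100_pushed = capture * 16      # the same x16, applied later in the raw converter

for name, img in (("ISO 1600", iso1600), ("ISO 100 pushed +4 EV", iso100_pushed)):
    print(name, "SNR:", round(img.mean() / img.std(), 2))
# Both print the same value: multiplying signal and noise by the same factor
# leaves their ratio untouched.
```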
In the case of the ISO 1600 shot, the camera has amplified the signal before putting it into the RAW format (i.e., it's not really a RAW capture as it's clearly been modified). Depending upon the camera design, some of the amplification may be analog before digitization and some may be digital amplification (just shifting the numbers). There also may be some sort of signal conditioning that occurs even for RAW captures. The camera makers don't seem to volunteer how they do this and it's likely that they each do it somewhat differently.
In the case of the ISO 100 shot, the camera just dutifully puts the unamplified signal into the RAW file because you didn't push the ISO setting. When you push the image 4-stops in a good RAW converter to brighten it up, you are just doing digital amplification of the signal, similar to (but not the same as) what the camera does when you set it to ISO 1600.
So, which technique gives the better result is really a competition between the two different ways that the signal is amplified and conditioned. One is in-camera. One is in a RAW converter.
ACR is apparently not good at this kind of "pushing" because (according to one of the authors of a competitor, Capture One) it has an algorithm for the exposure slider that doesn't work the way you'd like it work for these kinds of extreme manipulations. Capture One is apparently pretty good at this type of pushing.
As I started reading about this issue and seeing some of the sample images produced both ways, I started with a strong bias that the camera must have an ability to amplify the analog data before digitization and that must give it a true advantage over just pushing the digital data in a RAW converter. But some of the sample images I saw that were shot many stops underexposed and then pushed in Capture One blew me away. They still obviously looked like a high ISO shot (as one would expect), but they looked as good as or better than ones shot with an equivalent number of photons captured but a higher in-camera ISO, and not pushed in the RAW converter.
Further, when shooting at the reduced ISO and pushing in the RAW converter, you never have to worry about blowing highlights like you do when you turn up the ISO, because you get to control exactly how much amplification is applied to the RAW data, whereas the raised ISO has pre-determined the amplification and that can blow highlights.
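A tiny sketch of that headroom point, with made-up numbers (one bright cell and a literal x16 gain, ignoring any real camera's highlight handling):

```python
ADC_MAX = 4095      # 12-bit converter
highlight = 300     # the code a bright speckle would produce at base ISO

# In-camera ISO 1600: the x16 gain is applied before digitization, so the
# stored value clips and the detail is gone for good.
stored_iso1600 = min(highlight * 16, ADC_MAX)   # 4095, clipped

# Base-ISO capture: the file still holds 300, and the raw converter can apply
# a gentler curve to the top stop instead of a literal x16.
stored_iso100 = highlight                        # 300, intact

print(stored_iso1600, stored_iso100)
```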
Now, I haven't yet gotten myself to the point where I don't raise the ISO and instead "push" in my RAW converter (partly because I use ACR which is lousy at pushing), but I am very intrigued by the technical discussion and the possibilities. I wish I knew more about how the camera does the in-camera amplification for a raised ISO.
If you want to get your mind tweaked by the discussion, feel free to go read one of the discussions in dpreview here.
To summarize, I'm not yet arguing that one should leave the ISO at its lowest setting in your camera when you have low light and just underexpose, but I do now understand that there is some merit to that argument and perhaps some merit to a competing way to "push" underexposed shots. Raising the digital ISO is just the camera's method of pushing an underexposed shot. There can be other ways to do it too.
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
There are actually two separate issues: dynamic range and noise.
Look at it this way: at ISO 400 my 5D has a dynamic range of just over 8 stops. If I underexpose by 2 stops and boost the exposure during the raw conversion, I am not increasing the noise, but I am wasting 2 stops of my dynamic range. Now if my scene only has 6 stops of dynamic range to begin with, that isn't going to be a big issue. But if I have a lot of shadow detail I want to recover, then I am much better off shooting at ISO 1600. While I get more noise from the higher ISO, I am capturing a full 8 stops of contrast and I will have much better shadow detail. If the shadow noise turns out to be too big of an issue, I can always get rid of it by ramping up the black point or some careful noise reduction.
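Putting rough numbers on that trade-off (the 8-stop figure is my estimate for the 5D from above, and this simple comparison ignores whatever range the higher ISO itself costs at the top end):

```python
sensor_dr_stops = 8     # usable range between clipping and the noise floor (rough estimate)
underexposure = 2       # stops below the metered exposure at ISO 400

print("Underexpose 2 stops at ISO 400, then push:",
      sensor_dr_stops - underexposure, "usable stops")
print("Raise to ISO 1600 and expose normally:    ",
      sensor_dr_stops, "usable stops (per the argument above)")
```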
You are right. There is very little written about how the camera does its RAW capture, signal conditioning, or high-ISO amplification. That information is probably considered proprietary.
Dynamic range is definitely an important characteristic and I'm intrigued with your example. I started with your exact position, but the more I've considered the physics and electronics of what is going on in the camera, the more I'm not quite so sure.
If you compare two shots from your example:
ISO 400, 1/125th, f/2.8 - developed normally in a RAW converter
ISO 100, 1/125th, f/2.8 - then pushed two stops in the RAW converter
Both actually start out with exactly the same data on the sensor because the ISO has nothing to do with the actual light photons that get to the sensor. Both are at 1/125th and f/2.8 so they capture the exact same signal on the sensor.
Given that everything else that influences noise, contrast, dynamic range, etc... is identical in the two captures, we should be starting out with exactly the same sensor data.
If that's really the case, then the only difference between the eventual result with the two shots above should be caused by the post processing that happens after the RAW data is captured on the sensor. In-camera processing vs. RAW processor post processing.
Both start with some sort of analog-like value out of each sensor cell that is proportional to the number of photons that hit that cell. In the ISO 100 example, the signal is conditioned (yes, even base-ISO RAW captures are conditioned by some or most cameras) and then digitized and stored as the RAW data. The RAW post processor then amplifies the signal digitally to bring it to the desired brightness.
In the ISO 400 example, that signal is conditioned, amplified (by a factor of 4x because of the ISO 400 setting) and digitized. We don't know what order these operations are executed in or how the algorithms work as the camera manufacturers don't seem to share that info. Lastly, it's stored as RAW data. At this point, you can see that it isn't really very RAW as it's been processed already. It's more RAW than a JPEG, but not really RAW data off the sensor since it's been amplified and conditioned.
Now, about that dynamic range. If both shots start out with exactly the same data on the sensor (same number of light photons hit each sensor cell), it's less obvious to me how one has to have better dynamic range than the other. In both cases, we're just amplifying the signal that was originally captured by a factor of 4x. In the ISO 100 case, we're amplifying it digitally in post processing software. In the ISO 400 case, we're amplifying it in the camera by an unspecified algorithm in the camera.
Could these two different methods lead to a different end-result for dynamic range? Yes, they could. Do we know for sure which one is better? I don't. Since it seems like it depends entirely upon the in-camera amplification process and the RAW processor amplification process, I think you could only determine the answer for your particular equipment, software and shooting conditions by testing your equipment and software.
I know this thinking is highly controversial and runs against the default expectations. I'm a techno-geek that likes to try to understand the physics and math and I've been convinced that the default thinking (that the ISO 400 shot is always better) isn't necessarily always true and that there is an opportunity for the ISO 100 shot to be as good or better depending upon a bunch of system variables (in-camera algorithms and RAW processor abilities). What I'm waiting for now is experimental results under controlled conditions that measure the performance of specific cameras and specific software so we could get real experimental results of the different cases. Maybe sometime I'll have the time to run that on my configuration.
What gets us into logic trouble when we think about this is that we tend to think that turning the ISO up to 400 makes our sensor more sensitive and makes it behave like a sensor that has a base ISO of 400. I wish that were the case, but as we've described above, it's not. If it were the case, that sensor would produce much higher dynamic range and lower noise at ISO 400, 1/125th, f/2.8 vs. the ISO 100 sensor, 1/125th and f/2.8 because the ISO 400 sensor wouldn't require amplification of the base signal and the ISO 100 sensor would require 4x amplification. But, this isn't the case. Changing the ISO doesn't change the sensitivity of the sensor at all. It just instructs the camera to do in-camera amplification.
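To make the "it depends" point concrete, here's a minimal noise model (every number is invented) showing how the answer swings on where the noise enters relative to the gain:

```python
import numpy as np

rng = np.random.default_rng(2)
photons = rng.poisson(50, 100_000).astype(float)        # dim signal on the sensor
pre_gain_noise = rng.normal(0, 2, photons.size)         # noise added before the ISO amplifier
post_gain_noise = rng.normal(0, 8, photons.size)        # noise added after it (readout/ADC chain)

def snr(x):
    return x.mean() / x.std()

gain = 4   # two stops
iso_raised = (photons + pre_gain_noise) * gain + post_gain_noise       # amplify, then digitize
pushed_later = ((photons + pre_gain_noise) + post_gain_noise) * gain   # record at base ISO, push in software

print("raise ISO in camera:", round(snr(iso_raised), 2))    # higher, with these made-up numbers
print("push in converter:  ", round(snr(pushed_later), 2))
# Shrink post_gain_noise toward zero and the two converge, which is why the
# winner can differ from one camera (and raw converter) to the next.
```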
Anyway, I find this a very fascinating technical area that needs more experimentation and perhaps a new generation of tools to further enhance high ISO shooting.
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
Good point! Just because they appear to be the same, there could be a lot going on here that affects the data once we get hold of it.
The discussion about this may be on Luminous Landscape, not DP Review (for those that didn't find it). But this should get you to LL:
http://luminous-landscape.com/forum/index.php?showtopic=17706&st=0
Author "Color Management for Photographers"
http://www.digitaldog.net/
So then, where does the noise come from? There are three possible sources for noise: noise in the photon->charge conversion, noise in the amplification, and noise in the A/D conversion. Let's start with the A/D conversion because that is the easiest.
When I say my 5D has about 8 stops of dynamic range, that is actually a statement of the dynamic range of the A/D converter. With the exception of the 1DmIII, I believe every DSLR Canon ships these days has essentially the same 12-bit converter. My guess is that the S/N of that converter is between 55dB and 60dB; the extra couple of bits have likely been thrown in to limit quantization noise. In practice, when one looks to see the darkest resolvable features, they are a little over 8 stops down (I think dpreview came up with 8.3). One way or another, it is the noise in the A/D converter that determines the dynamic range of the camera.
So then, what about high ISO noise? The A/D converter can't realistically be the source of it, because if it were, the dynamic range of the sensor would drop by a stop every time you bumped the ISO by a stop. That is clearly not true. And in fact, it can't really be the photosite charge either, for exactly the same reason. The actual relationship between ISO and noise floor (as measured both in my experience and by dpreview) is nonlinear. The only candidate in the system to explain that behavior is the amplifier. When you crank the ISO, you crank the gain on that amplifier and it gets noisy.
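For anyone following the units, the dB/bits/stops conversions I'm leaning on are just these (textbook relationships, nothing camera-specific):

```python
import math

def db_to_stops(db):
    return db / (20 * math.log10(2))     # one stop of amplitude ~ 6.02 dB

def bits_to_db(bits):
    return 20 * math.log10(2 ** bits)    # ideal, quantization-limited range

print(f"12-bit ideal range: {bits_to_db(12):.0f} dB (~{db_to_stops(bits_to_db(12)):.0f} stops)")
print(f"55 dB converter:    {db_to_stops(55):.1f} stops")
print(f"8.3 measured stops: {8.3 * 20 * math.log10(2):.0f} dB")
```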
This is a concept I'm interested in. Is there anything that any sensor does, or can do, to process the analog signal before it goes digital, and is this at all related to why Canon is "better" at high ISO speed settings?
There are a number of things that contribute to noise in digital images, and there are many design parameters that determine how well a given camera performs in this regard. I don't claim to understand all of these, but I'm dangerous enough to use Google and read a lot. The kinds of contributors to noise I've read about are:
- Dark Current Noise
- Photon or Shot Noise
- Amplifier Noise
- Pixel non-uniformity
- On-chip Read-Out Noise
- Off-chip circuitry Noise
- Electronic interference
- Amount of exposure
Here's a picture from one article that helped me visualize some of these noise sources. Other interesting articles related to noise:
- How digital cameras work by Thom Hogan
- Understanding CCD Read Noise
- What is noise
- Shot Noise
It does appear that Canon is doing a better overall system design than Nikon right now with respect to noise. I suspect it's not just one aspect of the system, but rather a bunch of different improvements all added together. My layman's understanding of some of the noise contributors is as follows:
- There's some variable dark current that flows in photo-sensitive semiconductors; it is related to temperature and is independent of how many photons strike the sensor. This leads to inaccurate measurements. Astronomical cameras built for really long exposures use cooled sensors to reduce dark current errors.
- There are some physical and electronic tolerances from one sensor cell to another that make it so each cell is not equivalent to the others.
- Thermal imbalances across the sensor can cause different characteristics in different spots in the sensor.
- Photon arrival is a statistical (quantum) process, so at small numbers of photons hitting a given sensor cell (what you get in low light), the statistical differences between cells that should be receiving the same amount of light start to become relevant. This is called Shot Noise or Photon Noise. It is a property of light, not of the camera, and it becomes less of a problem at larger signal levels.
- When it comes time to read the charge from a given sensor cell, the amplifier reading the charge has both a possible electronic tolerance and a variation from one read to the next that causes a variation in the output result even if the input is the same each time. This is sometimes referred to as read noise or amplifier noise.
In all cases, noise becomes more visible when you amplify a small signal to try to make it more visible. So, if you take a small amount of light and amplify it (an ISO 1600 shot taken in dark conditions) to make it look normal, you are amplifying the noise along with the desired signal and the noise becomes a lot more visible. If you suddenly turn on a bright light and take the same shot at ISO 100, you see a lot less noise because you don't have to amplify the signal from the sensor nearly as much, and thus the noise isn't amplified either.
This is referred to as the signal-to-noise ratio. For situations with a relatively fixed amount of noise, the higher the signal you can capture, the lower the noise will appear to be in relation to the signal. If your signal is small, the noise will appear larger relative to the signal.
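A few lines of shot-noise-only arithmetic make the same point (illustrative photon counts; real sensors add the other noise sources listed above):

```python
import math

for photons in (25, 100, 400, 1600, 6400):
    snr = photons / math.sqrt(photons)   # shot-noise-limited SNR = sqrt(N)
    print(f"{photons:5d} photons -> SNR {snr:6.1f}")
# Amplification (high ISO or a big push) scales signal and noise together, so
# it cannot recover the ratio lost by collecting less light in the first place.
```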
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
I did a little research on the topic. Here is an article on HDTV cameras that discusses the difference between CMOS and CCD sensors. This article may be a bit speculative, so don't take it as gospel.
http://hdtv.videotechnology.com/HDTV-CMOSvsCCD.htm
Here is the gist of what they have to say:
For television cameras and many uses of DSLRs (long night exposures being an exception), the dominant source of noise is called read noise (there are other types of noise in a CCD including dark current noise and shot noise). The gist of the article is a somewhat speculative observation of a fundamental limitation of CCDs.
CCDs are read one pixel at a time; the pixels are clocked off the chip by row and column into a single amplifier and A/D. So to read a 10MP sensor in 1/5 of a second means clocking the data out at 50MP/second. Their observation of real CCDs in production is that the read noise increases with the pixel clock rate (which is not overly surprising). More than that, their observation is that for a fixed clock rate the noise performance of the best available CCDs has not been improving over time, implying that perhaps the best we are making today is as good as it is going to get. So, given the best quality electronics, there is a trade-off: if you increase the resolution or increase the frame rate, you also increase the noise.
It appears that CMOS technology allows a level of parallel processing which means that its noise does not increase with resolution and frame rate. Unlike a CCD, which funnels all the pixels to a single external processor, CMOS sensors process the pixels locally on chip, which dramatically lowers the read rate and correspondingly the read noise. So, when building cameras simultaneously capable of high resolution and high frame rate, it appears that CMOS may have a fundamental advantage over CCD. While there may never be a CMOS sensor as low noise as the one in Hubble, there may never be a CCD capable of 10 MP at 10 fps and ISO 1600.
Last time I checked, 1's and 0's came out as 1's and 0's no matter how fast you send them through electronic media.
The rest of your dialog does make perfect sense though.
A CCD is fundamentally an analog device--its output is not 1s and 0s but rather a continuous voltage (actually charge) which corresponds to the brightness of the pixel. If you treated the CCD as digital, then each pixel would be either on or off; you'd get no shades of grey. In a camera, where you want greys, the conversion of the voltage to numbers which represent brightness is done later, in an analog-to-digital converter.
Thanks L.A.
Thanks for that. I asked about analog because I figured that was the one stage where you couldn't replicate the result in a raw converter, and it was mentioned in the earlier post that CMOS had a transistor at each photosite that could amplify, and in another thread I had used analog audio amplifiers as an analogy (cheap amp, OK sound; expensive amp, much cleaner sound), so a light went on in my head about the possibility that analog pre-processing could be part of the difference. Now I'm learning here that CMOS design could be part of that, and Canon certainly went with CMOS while Nikon uses CCD (except the D2X). All food for thought.
Canon makes their own sensors and has a large research organization designing them. Somebody in Canon's research arm decided quite some time ago that CMOS might be the way to go and invested a lot of work in developing it. Similar to USM and IS lenses, Canon made a long term bet on a technology that is paying off now. My guess is that many of the critical leaps in their CMOS design are patented and as a result not available to any other manufacturer.
Nikon does not have their own sensor fab so they have to partner with another company to design and build sensors. It is not at all clear that using CCDs is a choice for Nikon; rather it is what is available to them.
While I certainly don't have any inside information or insights as to what they are doing, I have strong suspicions that it's the processing of that amplified data from the CMOS chip going through the DIGIC chip that ends up with the cleaner high-ISO data vs. just pushing low-ISO data in a converter. BTW, in answer to the question a couple pages ago, I primarily use Bibble for conversions, but have used ACR 2.4 (as far as I can go on CS); I've also played with a few others. I haven't seen any that jumped out at me with equal shadow detail quality when pushing an image vs. developing at about the indicated exposure.
http://www.chrislaudermilkphoto.com/