JPEG vs RAW: Which is better? Who shoots what?
Tom Foster
I've made a quick post to hopefully help you decide if you're new to photography and haven't already made up your mind! Let me know if you shoot JPEG or RAW! I've included an image to show what a difference RAW shooting makes to editing.
JPEG vs RAW: What should you be shooting?
Feel free to share with anyone who might be interested
-Tom
My Scottish landscape photography site.
My Photography Blog.
My Popular Photos
- Photos of Edinburgh, Scottish Highlands and Islands, Fife.
Comments
Most people who have been shooting RAW for a period of time know the advantages
of RAW and wouldn't go back to .jpg
...unless shooting conditions are such that .jpg does offer advantages not offset
by the ability to wring more out of a RAW file.
For example, a day's shooting at a road race on continuous may fill up cards so
rapidly that switching to .jpg does offer a practical advantage. If the conditions
(light) don't change much during the day, with the right settings there is less
need for a RAW file to work with.
As we see SD and CF cards with larger capacity, and cameras that will accept
these larger capacity cards, the problem becomes less urgent as long as the
higher cost of the cards isn't a factor to the shooter.
For the low volume shooter, or the shooter of static scenes, RAW just has to
be the choice, though.
http://tonycooper.smugmug.com/
+1 for Tony's comments.
I shoot sports (1750 shots last Saturday ... 9am - 5pm). Kept 750.
Had to be uploaded to my website, league website and newspapers on Sunday.
So I shoot jpeg. If I can't get crop/exposure/contrast/noise etc right within 40secs I delete it.
Usually spend 20-30 seconds per image.
I simply don't have the time to "work" images that need significant processing.
For all my other low-volume shooting it's definitely RAW ... just because you have more information and you can do more with it.
www.acecootephotography.com
These days it's all personal and family photography with an Olympus m4/3 camera. I shoot in-camera "vivid" JPG and I'm quite happy with those results.
A former sports shooter
Follow me at: https://www.flickr.com/photos/bjurasz/
My Etsy store: https://www.etsy.com/shop/mercphoto?ref=hdr_shop_menu
Raw is about rendering the print.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Recently I've been shooting Babe Ruth baseball games each week, averaging
150 and above shots in each game. Most of the games have started at 6:30 PM
so I get three innings or so of good light. While I can get some OK shots as I
keep moving the ISO up, the best shots are early in the game.
I do shoot RAW, but 150-200 RAW images don't use up a card in my Nikon D300,
and I only use an 8 GB card. I do carry a second card in case I go crazy, but
haven't had to use it yet.
It's the baseball shots that forced me into doing my post in Lightroom. As a long-time
user of Photoshop, I'm adept enough at doing my post an image at a time, but doing
150 to 200 images is too time consuming in PS.
LR allows me to develop and use synched pre-sets that pretty much obviate the need for
further tweaking. Post is much simpler now. I don't think I spend over that 20 or
30 seconds an image.
BTW, I'm looking for the 30 or so best images of the 150 to 200 to put up on
the league's website. The rest are kept, but not used.
http://tonycooper.smugmug.com/
Without a doubt Lightroom and Aperture both made shooting RAW with a large number of images MUCH easier to do! Before Aperture I only shot JPG when I was doing the racing. No way I'd shoot 1500 RAW images w/o a tool like LR or Aperture.
A former sports shooter
Follow me at: https://www.flickr.com/photos/bjurasz/
My Etsy store: https://www.etsy.com/shop/mercphoto?ref=hdr_shop_menu
You mention the difference in data bit depth without explaining really how relevant that is in both under/over exposed photos and recovery.
I also get the truth behind the comments on disk space, but with today's technology, how much of an issue is that really? You had it at the top of the list of disadvantages.
And the comment that raw has "Less contrast, less sharpness." is just plain wrong. A JPG and a RAW image have exactly the same innate sharpness and contrast: both depend on how the conversion process is set up, whether in-camera or in-computer. Indeed, you generally have more options and better tools on your computer, but there is zero difference if the same settings are used, because both begin with the exact same pixels.
The comparison of images looking at sharpness is, sorry, just silly in terms of raw vs jpg in general; all you are comparing in those images are the post processing (whether post in the camera, or post on your desktop).
A much more telling and educational comparison is how you can recover highlights and shadows, or adjust bad exposures.
And as mentioned above, the need to post process and the time that takes is, to me, the biggest deciding factor. I also shoot sports, but usually with pretty relaxed deadlines -- people turning out shots over a 10 minute halftime pretty much have time only to cull and label; post processing is just not a possibility.
I apologize if this isn't considered very constructive but I mean it to be -- I think your article could be adjusted to do more justice to the real issues, but as written would leave beginners with some wrong ideas.
Author "Color Management for Photographers"
http://www.digitaldog.net/
RAW+JPEG now that I basically shoot landscapes only. When traveling and shooting more in cities/street I'll primarily shoot JPEG and switch to RAW when I feel the need depending on the scene.
There's some opinion in there so fair comment, everyone's opinion will be polarised to some extent or another!
I intended that to illustrate that the RAW has several times more information, and later on I tried to show how this makes a practical difference in recovering highlights in the sky of the image at the end.
The list of disadvantages wasn't intended to be ranked in any way. I did say 'File size is important. If you only have small memory cards and a small hard disk then you might prefer not having dozens and dozens of large RAW files.' which I would argue is true; it is important to some people. I also later clarified with 'Thankfully memory cards and hard disk space is cheap enough nowadays that space isn't a problem' which I suppose is opinion and is up for debate; I know some photographers who just have 2x8GB SD cards, for instance. Seems crazy to me, but I guess it's not my right to judge!
Apologies if that was confusing, I meant less contrast, less sharpness in the image you get out of camera. You need to apply sharpness and contrast settings yourself on the computer to get the same look.
I wasn't comparing sharpness; if you look at the image, I was comparing highlight recovery in an over-exposed sky. I was saying there was a lot more detail in the highlights in the RAW image: the clouds were less pixelated, there was no banding in the sky, etc.
That's what I was intending on doing (in terms of highlights at least)!
Thanks for the comments though, will look at it and see if I can clear up some of the confusion!
Edit: Made a few changes to hopefully clear some of that up.
My Photography Blog.
My Popular Photos
- Photos of Edinburgh, Scottish Highlands and Islands, Fife.
I don't think we understand each other at all. Raw's deeper bit depth is significantly responsible for the ability to raise shadows and recover highlights, or affect overall exposures. Even mediocre sensors will give you 1-2 stops of correction of poor exposure without significant detriment in RAW, and better sensors 3 or more stops. Try doing that with an 8-bit JPG; there's just not enough data there. It's why clipped highlights or blacks on the LCD of a DSLR (which come from the in-camera JPG, at least on Nikon) are not necessarily really clipped in the raw.
I'm not sure what you mean by exposing differently for raw vs jpg, other than you better be dead on for quality jpg's SOOC, whereas you have flexibility to change later in raw. While some aspects (notably white balance) are baked into the de-mosaic process and are a significant advantage of raw itself, the dynamic range you can recover is fundamentally in the 12 to 14 bits that most cameras provide as opposed to the 8 bits in jpg. While compression and linear vs. stretched coding space affect this comparison, it is not really wrong to think of each bit as a stop of available shading (since it can encode twice the number of shades). That's a huge difference.
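(A quick back-of-envelope illustration of the "each bit roughly doubles the available shades" point above; plain Python, no camera specifics assumed.)

for bits in (8, 12, 14):
    levels = 2 ** bits                    # each extra bit doubles the number of code values
    print(f"{bits}-bit: {levels} discrete levels per channel")

# 8-bit : 256 levels
# 12-bit: 4096 levels, 16x more than 8-bit
# 14-bit: 16384 levels, 64x more than 8-bit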
Bit depth is simply dividing up the numbers differently (finer), it doesn't make more or less actual data per se or affect exposure.
Correction compared to the baked JPEG yes, that's not due to bit depth. The camera simply did a poor job of rendering the image FROM the raw data and you didn't. The exposure is always affecting the raw whether you ask for a JPEG, a raw or both.
http://digitaldog.net/files/ExposeForRaw.pdf
If you expose for the JPEG, you under expose for raw, simple as that. Try it. ETTR for the raw for optimal exposure and you've blown out the JPEG because of how the camera processed that data.
Author "Color Management for Photographers"
http://www.digitaldog.net/
http://www.rags-int-inc.com/PhotoTechStuff/TonesnZones/
Author "Color Management for Photographers"
http://www.digitaldog.net/
Let me take a step back and see if we agree.
And let's ignore the fact that JPG's are typically compressed in a lossy fashion, that's not the issue I am addressing.
If we are comparing editing a JPG as opposed to a RAW image to recover highlights or shadows (basically to correct exposure), or to correct white balance, the primary issue is that this involves compressing some areas of the histogram and expanding others, AND the original JPG also involved doing so.
To make up an example: in the original raw image, maybe a stop's range of highlights had 1024 discrete values. When the JPG was created, it got compressed into 6 different values (in this contrived example).
Now you decide to edit the JPG and raise the exposure, and the new histogram requires 30 different values for that same stop's range. The program has to interpolate the 4 tones in between each one you have in the JPG.
If you are correcting it from the RAW, you have all 1024, so to get 30 you have plenty of detail, nothing (relative to what "fits" in the JPG) is lost. So the resulting new JPG from RAW will have more detail in that stop's range than the one edited from the JPG.
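(To make the contrived 1024/6/30 example concrete, here is a minimal numpy sketch using those same made-up numbers: quantize a smooth one-stop ramp the way the hypothetical JPEG did, then spread both versions over the 30 steps the edit needs.)

import numpy as np

raw_stop = np.linspace(0.0, 1.0, 1024)        # raw: 1024 distinct values in that stop
jpeg_stop = np.round(raw_stop * 5) / 5        # JPEG: collapsed to just 6 distinct values

target = 30                                   # steps the edited histogram needs
from_raw = np.round(raw_stop * (target - 1)) / (target - 1)
from_jpeg = np.round(jpeg_stop * (target - 1)) / (target - 1)

print(len(np.unique(from_raw)))               # 30 -- plenty of real detail to draw on
print(len(np.unique(from_jpeg)))              # still 6 -- the gaps have to be interpolated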
Now one could argue it is the opportunity to re-do the tone map (from linear to log) that allows you to recover the detail, and not additional bit depth. I'd agree with that. But the linear to log mapping (because of the end that is stretched) HAS to have more bits in RAW to keep from losing detail in the low end, of necessity. If you had an 8 bit RAW image, then EVERY conversion to JPG would lose data except at the very highest highlights.
So raw is going to have more bits than JPG simply to be usable, and having more bits in all (or almost all) areas of the JPG's histogram, means that if you need to shift the histogram for exposure you preserve detail better by going back to the RAW than editing in JPG.
This really doesn't have anything to do with the fact one COULD represent the same number of stops in JPG, but rather that to be a viewable, usable image, the JPG has compressed and/or expanded areas and thrown away data as it did so.
Now if instead of a JPG you had a 16 or 32 bit TIF, even in the gamma mapped version you would still have plenty of stops in all tonal ranges to edit and have no need to go back to the RAW (at least if you didn't do repetitive compress/expand in a destructive fashion). But JPG's have 8 bits, and it just isn't possible to preserve that detail there.
A good discussion of this is in (the late) Bruce Fraser's Real World Adobe Camera Raw (I think CS3 was the latest but not sure), as he has example histograms showing the gaps created.
Look, it is real simple. You seem to be suggesting that bit depth and exposure are somehow related. I don't agree. I've asked you to provide some outside reference to back this up and you haven't. I have provided outside references that specifically suggest they don't have a relationship. I'll ignore it as it has nothing to do with this debate. The data in the raw capture is what it is. There are no highlights to recover; they are either blown out or they are not blown out. If you expose the raw such that they are blown out, there's nothing to bring them back. The JPEG is a totally different animal. The same raw data that has no blown highlights can produce a JPEG that does have blown highlights due to how the camera rendered the raw.
You can take a raw with no blown highlights and mimic this exactly and easily! Just move the exposure slider in, say, ACR/LR and you can do exactly what the camera did to produce the JPEG, even though the raw didn't have blown-out highlights. The rendering is key here. The raw data was all there, the highlight data was all there, and by simply moving a control for rendering (exposure) it looks blown out. There's no highlight recovery per se; the highlights had detail due to how you exposed in the first place. NONE of this has anything to do with bit depth! If it did, converting from high bit to lower bit would affect the rendering and exposure, and that isn't the case. Bit depth and exposure have no relationship to each other; bit depth is just the division of bits (numbers) and plays no role in terms of exposure.
There's no detail to recover per se; there are differences in rendering the data you started with from raw.
Yes it has more bits, which has nothing to do with exposure.
Bruce was a dear friend and business partner. I've got all his writings. It's your task to find something, somewhere like I did that explains your POV that somehow, bit depth and exposure are linked. I await that proof of concept.
Author "Color Management for Photographers"
http://www.digitaldog.net/
ETTR originates from the early 2000s; the first time I saw it was an Adobe white paper (by the same Bruce Fraser) that explained it in terms of the linear-to-gamma mapping of the resulting tones (it was copied endlessly then on blogs, etc.).
And it makes perfect sense.
However, I think in practice it is (or should be) used as guidance more along the lines of "if you are going to err, err to the right". And it's really not good advice for those who depend on viewing the LCD to judge their image in general, since exposing that way leaves the preview looking blown out.
Also bear in mind the advice dates to a time when a typical camera offered 6-7 stops of DR, and today we get 12-14 stops on a decent sensor.
Just to exacerbate the issue, there's a lot of people (I am one) who without much science behind it swear that on later Nikon sensors at high ISO's, it is easier to recover shadows than highlights, leading them to expose to the left. I shoot sports (generally manual) and when dialing in for, say, a soccer field that ranges from ISO 3200 to ISO 12,800 for good exposure, I pick 3200 and adjust the others in post. ETTR would say to dial in 12,800. One day I need to methodically test it, but I feel that doesn't work as well.
Note that ISO setting is not the same as exposure by shutter or aperture -- one is real light and real filling of the sensor site well; the other is either analog or digital gain in the sensor readout.
Further conflicting with ETTR is Nikon Active D-Lighting. It is a combination of exposure change (reduced exposure) and software, whose purpose is to be able to recover highlights better in the raw conversion. But it exposes to the LEFT not right to do so. Why? I'm not sure (and this is real exposure, not necessarily changing ISO, but more normally shutter or aperture). I've never seen a good explanation from Nikon of why this is done, or why it works contrary to the ETTR math (and I don't use Nikon software so have little experience with it and how well it works).
I guess my point is all these represent relatively subtle edge conditions. In general with such huge dynamic ranges today, if you expose for the middle (i.e. "right") and miss, you can usually adjust a raw image through reasonable ranges (a stop or two). Do you have a bit more room to the left than right? Maybe, but it's pretty subtle. Try it.
Now if you have an old D70...
That will not alter the dynamic range of the capture device, that's a fixed attribute. ETTL just gives you less data in the last stop and more noise, because fewer photons were recorded where very few reside in the first place.
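(A hedged toy simulation of that "fewer photons, more noise" point; the photon counts are made up, only the Poisson shot-noise behaviour matters.)

import numpy as np

rng = np.random.default_rng(0)
for label, mean_photons in (("exposed to the right", 400), ("one stop to the left", 200)):
    counts = rng.poisson(mean_photons, size=100_000)    # shot noise is Poisson
    print(f"{label}: mean {counts.mean():.0f} photons, SNR ~ {counts.mean() / counts.std():.1f}")

# Halving the light in a shadow patch costs roughly sqrt(2) in shot-noise SNR,
# which is why the deepest stop suffers first when you expose to the left.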
But you're digging into a deeper and unnecessary area, which gives me the idea you don't have any references to back up what I believe was your initial point we disagree about: bit depth having a role or relationship to exposure. Again, for the 3rd time, you obviously believe this is so yet you are unable or unwilling to provide a reference to back this up. Before we go on, can you do so? Where did you get this concept, and can you provide proof of concept?
Author "Color Management for Photographers"
http://www.digitaldog.net/
Would you be so kind first as to state what you think is incorrect, specifically, to be more clear:
- For any given JPG, if you need to change exposure, white balance, recover shadows or highlights, you can do so better by going back to the RAW, than from the JPG.
- That the above is true is permitted by having significantly more bits in the raw data than the JPG.
To make sure we do not argue past each other, what is wrong with those statements, specifically?
Are you saying you really can't recover bad exposure better from raw than a JPG?
Are you saying a raw image of 8 bits would work as well?
Neither of these are true, the math just doesn't work for either one.
But I think you know that, so there's some twist on what I am saying that you have issue with and I am missing it.
As to the CS3 book, I think I referenced it, but if you want more specificity page 7 "Using all the bits".
Yes, that was off on a tangent, but it spoke to the (separate) issue that one exposes differently for JPG vs. RAW.
The above statement is perhaps why we are not in sync, as I think you do know how this works. Bit depth isn't required to represent a range of exposure; bit depth is necessary to represent the steps or shades between the extremes of exposure. Fewer bits, fewer steps.
Those steps get distorted in the linear -> gamma mapping, and in mapping to JPG are (almost) always reduced. To have adequate bits for the low end (at least if you want any reasonable detail there) you need more bits in the raw than the JPG.
A bad exposure can be more corrected in raw than in JPG because there's data in the RAW that wasn't in the JPG.
No it's not! And you've failed to prove that idea. Again, I've provided at least one reference and can find others that state this isn't the case. Did you read Rags's article?
Your idea that the two have a relationship is like suggesting that a car with a larger gas tank gets better mileage. The two have no relationship to each other any more than bit depth and exposure are related. If you're so sure they do, prove it.
Author "Color Management for Photographers"
http://www.digitaldog.net/
One absolutely does if one is exposing for optimal raw data. You can test this yourself easily and I provided another article on this subject. Shoot raw+JPEG and use the camera's suggestion for the JPEG. Now ETTR up to say 1.5 stops. The JPEG is totally blown out. The raw isn't; you simply normalize the initial rendering by altering the Exposure slider (what you kind of incorrectly refer to as highlight recovery). IF indeed you didn't expose to the point of sensor saturation, you now see highlight detail where you didn't prior to this rendering edit. The raw wasn't over exposed and further, you've got far less noise in the shadows. So explain to us how the original exposure, which produced more noise in the raw but no blown out highlights in the JPEG, is ideal exposure when upping it 1 or 1.5 stops didn't blow out the highlights in the raw AND provided less noise? All while blowing the highlights out of the JPEG.
Again, if your goal is optimal exposure for raw, you don't expose as if it were a JPEG. That's the entire idea behind ETTR. And none of this has anything to do with bit depth! The over exposed JPEG from the optimal raw exposure is the same when you shoot raw+JPEG.
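(For the curious, here is a toy numeric sketch of that raw+JPEG experiment. The rendering model is an assumption for illustration only: a plain gamma-2.2 JPEG curve whose white point sits one stop below raw clipping.)

import numpy as np

def to_jpeg(linear, white=0.5):
    # crude in-camera rendering: scale to the JPEG white point, gamma-encode, clip to 8 bits
    return float(np.clip(255 * np.clip(linear / white, 0, 1) ** (1 / 2.2), 0, 255).round())

highlight = 0.30                    # linear sensor value when metered "for the JPEG"
ettr_gain = 2 ** 1.5                # open up 1.5 stops for the raw

raw_ettr = highlight * ettr_gain    # ~0.85, still below raw clipping at 1.0

print(to_jpeg(highlight))              # ~202: the camera-suggested JPEG looks fine
print(to_jpeg(raw_ettr))               # 255: the ETTR JPEG is blown out
print(round(raw_ettr, 3))              # 0.849: the raw is not clipped
print(round(raw_ettr / ettr_gain, 3))  # 0.3: normalized back down in the converter, with less noise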
Author "Color Management for Photographers"
http://www.digitaldog.net/
If the raw image had an identical number of bits, and still went through a log scale compression/expansion, then clearly some tonal steps are discarded (at the high end), and some have to be created in the shadows (either interpolated, giving fake steps that weren't present, or not interpolated, giving banding). That's almost a quote from Fraser, same section.
COULD you expand the shadow area into the gamma corrected scale by just interpolating? Sure. In fact you could start with a raw image of 6 bits instead of 8, and still render an image.
But to retain the gradations of shadows as they are stretched in the conversion the linear capture must have more bits than the final gamma corrected image, i.e. more than 8.
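(The arithmetic behind that claim, under idealized assumptions: a noiseless linear sensor and a plain power-law gamma of 2.2. It counts how many code values land in each successive stop below clipping.)

for stop in range(6):
    lin12 = int(4096 / 2 ** stop - 4096 / 2 ** (stop + 1))   # 12-bit linear capture
    lin8 = int(256 / 2 ** stop - 256 / 2 ** (stop + 1))      # hypothetical 8-bit linear capture
    hi, lo = 1 / 2 ** stop, 1 / 2 ** (stop + 1)
    g8 = round(255 * hi ** (1 / 2.2)) - round(255 * lo ** (1 / 2.2))  # 8-bit gamma file
    print(f"stop {stop + 1} down: 12-bit linear {lin12}, 8-bit linear {lin8}, 8-bit gamma uses ~{g8}")

# Five or six stops down, an 8-bit linear capture has only 8, then 4, values left to feed
# the ~19 and ~14 gamma-encoded levels covering those stops, hence banding; the 12-bit
# linear capture still has 128 and 64.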
If you tell me that's not true, I just don't know how to continue the discussion.
Explain to me why Nikon's Active D-Lighting, which is for its own raw converter, does the opposite?
(Honestly, I don't know -- I really would like someone to explain it -- I know it to be true, but not why).
You own the book? I do. What page does Bruce equate bit depth to exposure? The gaps have nothing to do with exposure.
You originally asked the OP to explain bit depth's relevance to exposure recovery, yet you've yet to do so yourself (explain bit depth's relationship to exposure). You've also ignored my request to explain Rags's article; is he wrong?
I'm going to bed. You have all night to find a reference in Bruce's book or elsewhere that explains what you asked the OP to explain, which he didn't, because neither of us believes there's any relationship and you think there is.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Not relevant and another distraction on your part. It's really simple, let's stay on track if you can. Please provide a reference that backs up your claim of a relationship between exposure and bit depth. Bruce or otherwise.
Author "Color Management for Photographers"
http://www.digitaldog.net/
And I never said bit depth is related to exposure, but it is related to exposure recovery in badly exposed images. If you continue to misunderstand what I am saying, there's no hope of me explaining.
In fact the article you referenced supports this -- it shows in several places the nature of shadow representation. In his last chart, 1.1% of the data is represented in the illumination column, yet it covers 10% of the tonal range and 25.5 (of 255) of the values. That's about a 10:1 expansion. To retain the level of detail represented in the original data in that zone, you then need roughly 10 times the tonal steps, or about 3-4 more bits than the 8 bit target. And hey -- 8 + 4 = 12. Happy coincidence?
Here's what I am saying -- as you RECOVER from bad exposure in the raw conversion, or simply emphasize shadow recovery, this expansion, which is acceptable in a proper exposure, now shifts upward. So what was zone I in his chart may need to expand to cover both zones I and II.
Without adequate bit depth in the capture, it means in this example the number of tonal steps that was adequate for zone I, now must cover both zones I and II, resulting in banding or interpolation and associated data loss unless you have more data than represented in the JPG to go back to.
With adequate bit depth in the capture, as you shift exposure to the left in conversion, you still have adequate tonal steps to represent the new zone position (using his tie to zones).
The bit depth issue is about tonal (exposure) steps, not tone. You can represent any range of values you want mapped to any number of bits (well, at least one). What you need bit depth for is to represent the steps in between. Without those steps, detail recovery in bad exposures (using this article's definition, where the mid-point misses the middle zone) has inadequate data.
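(The same zone arithmetic as plain numbers, taking the 1.1% and 10% figures from the chart as quoted here and treating them as illustrative rather than exact.)

import math

fraction_of_raw_values = 0.011     # ~1.1% of the linear code values sit in that zone
fraction_of_output_range = 0.10    # ...but must cover ~10% of the output tonal range

expansion = fraction_of_output_range / fraction_of_raw_values
extra_bits = math.log2(expansion)
print(f"~{expansion:.0f}:1 expansion, so ~{extra_bits:.1f} extra bits are needed to keep "
      f"distinct steps through the stretch (roughly the 8 + 4 = 12 above)")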
As to Fraser, sorry when I said:
>>As to the CS3 book, I think I referenced it, but if you want more
>>specificity page 7 "Using all the bits".
it was in a dark room; it's actually page 9, the two paragraphs at the bottom. I'm too lazy to type it all in, but in part it says:
>> More bits translates directly into editing headroom, but the JPG format is limited
>>to 8 bits per channel per pixel.
"Headroom" in part means ability to shift exposure, recover shadows and highlights.
The article you quoted appears, as much as anything else, to take a coincidence that computers use binary and lighting's visual response is log 2, and make something significant of it. I have no complaint with what he says at all, I just think it's an explanation of a controversy that doesn't exist. And honestly I think this thread is the same -- I doubt we have any real disagreement of how any of this works.
If you're saying more bits aid in less banding or data loss after applying edits to the data, I'd agree of course, there's nothing new here.
If you're saying that altering an 8-bit per color JPEG or even TIFF due to exposure issues will produce more rounding errors and data loss, or combing in a histogram, compared to doing this with raw: no argument, nothing new here. Agreed. But this is true for any and all kinds of edits. Any edit of existing pixel values produces rounding errors; it's less an issue with high bit data. Again, this isn't anything new at all:
http://digitaldog.net/files/TheHighBitdepthDebate.pdf
Agreed, no argument; it's outlined in the URL I just posted too. But this has nothing to do with exposure!
Maybe we're going around in circles, and some of your text is somewhat ambiguous and unclear (to me), but what you initially wrote to the OP clearly read that way to me. Maybe I'm reading it incorrectly, maybe it wasn't written as clearly as it could be. As I tried to explain, bit depth has no direct relationship to exposure, but if your exposure or any other attribute of the data needs editing, that editing will introduce data loss, and more bits reduce the possibility of banding of the data due to the edits. More bits doesn't translate directly into anything to do with exposure; it does translate directly into fixing any problems that require the values to be changed by the user.
Author "Color Management for Photographers"
http://www.digitaldog.net/
>>> You mention the difference in data bit depth without explaining really how relevant that is in both under/over exposed photos and recovery THEREOF.
You said several times "nothing new here". I never intended to be saying anything new. I felt the original article did not do justice to the ability of RAW shots to be corrected for exposure and white balance in a way JPGs cannot be. Part of that ability (especially WB) is baked into the transform, so clearly rerunning the conversion is better able to effect changes. Part of that ability is the bit depth that, on re-conversion with a different target exposure, provides adequate tonal steps.
Nothing new here -- we agree. I didn't post to say anything new, but rather to comment on what I thought might improve the posted document. Sorry I ever brought it up.
I think you're lumping two different areas into one, which doesn't aid those who are not up to speed on this topic. One area is bit depth and its effect on data loss and banding, as you point out with Bruce's combed histogram. This is true for a raw or a TIFF or any other file format that supports high bit data.
The other is the major difference between raw and rendered data ( in this context, camera JPEG). The raw data could be 8-bits per color and the ability of raw data to be corrected for exposure and white balance, (whereas JPG's cannot be to the same degree) wouldn't change at all, bit depth isn't a factor in these differences.
This goes back to our discussion of exposure and raw data. There's only one exposure and there's only one raw file, whether you ask the camera to provide a JPEG, a raw, or a raw plus a JPEG. The reason we can use an ETTR technique and not blow out highlight data (where the JPEG would) is the same reason we can ignore in-camera WB settings with raw but can't with JPEG. It's un-rendered raw data, and bit depth has no role here in terms of the flexibility we have with raw but not a JPEG. JPEG is baked, and often not well. Raw is unbaked and provides all the data necessary to produce a better JPEG, which is bit depth agnostic. It's great that raw data is high bit. But that has nothing to do with exposure per se, or our ability to control the rendering of raw, even when properly exposing the raw in a way that would totally over expose the JPEG. The camera processing isn't very smart; it isn't handling the JPEG in a way that gives optimal exposure for the raw. Again, bit depth has no role in this disconnect.
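(A last toy sketch of the "raw is unbaked" point: white balance on raw data is just a different set of per-channel gains applied to the same linear values. Every number and gain here is made up, and real conversion also involves demosaicing and camera color matrices.)

import numpy as np

raw_rgb = np.array([0.20, 0.40, 0.28])             # linear sensor response for one gray patch

white_balances = {
    "daylight": np.array([2.00, 1.00, 1.43]),       # hypothetical per-channel gains
    "tungsten": np.array([1.30, 1.00, 2.20]),
}

for name, gains in white_balances.items():          # raw: just re-scale the same data
    print(name, np.clip(raw_rgb * gains, 0, 1).round(2))

# daylight [0.4 0.4 0.4] renders the patch neutral; tungsten [0.26 0.4 0.62] renders it blue.
# A camera JPEG has already committed to one such rendering, gamma-encoded it and quantized
# it to 8 bits, so "changing" WB later means bending cooked values, not re-scaling sensor data.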
Author "Color Management for Photographers"
http://www.digitaldog.net/