CS3 Smart Sharpening vs USM? Ideas, Advice or Help?
CAFields
Registered Users Posts: 25 Big grins
Hello,
I've read this very informative thread:
http://www.dgrin.com/showthread.php?p=726673#post726673
with great interest as proper sharpening has been a technique that I constantly strive to improve my skills on.
Thanks to all who have contributed to that thread, and I replied with this post and it was recommended to me that I duplicate it here as a new thread (thx Pathfinder).
I'm using PS CS3 and have been using USM sharpening from back in the CS days and am trying to utilize CS3's Smart Sharpening capabilities to perhaps achieve better output if possible.
Most of the information about Smart Sharpening on the web seems very generic and basic, and since the USM info in this thread is so very detailed and useful, I thought I'd see if anyone has similar expertise with CS3's (or CS2) Smart Sharpening.
Does anyone have any suggestions on how to apply the information in this thread using Smart Sharpening instead of USM since the options in the dialog boxes are somewhat different? Or is USM still the preferred method of sharpening to most everyone here?
If Smart Sharpening is not a good thing compared to standard USM sharpening, why not? (Please be specific as to why not, which would be very helpful.)
Also, does anyone who is using Smart Sharpening have any favorite preferences in the Amount & Radius settings, etc.?
My primary "genre" is Landscapes, so that's what I usually apply my sharpening to as opposed to portraits, people, etc. and I've found that an Amount of around 50-75%, Radius of 1 and Remove Lens Blur sharpens things up, but I can't really come up with what I think should be the "perfect" numbers. I'm looking for the elusive "Holy Grail" and thought someone here might have some suggestions, even though the "Holy Grail" will probably never be realistically obtainable, but if I can get closer than where I'm at now, then I'll be happy. (Photos that I'm sharpening are 3872x2592).
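For anyone who wants to poke at these controls outside Photoshop, the Amount/Radius/Threshold semantics of USM can be sketched in Python. The box blur here is a crude stand-in for the Gaussian blur Photoshop actually uses, so treat it as an illustration only:

```python
def box_blur(row, radius):
    # crude stand-in for the Gaussian blur a real USM uses internally
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(row, amount=75, radius=1, threshold=0):
    # Amount is a percentage, as in Photoshop's USM dialog
    blurred = box_blur(row, radius)
    result = []
    for orig, blur in zip(row, blurred):
        delta = orig - blur
        if abs(delta) < threshold:        # Threshold suppresses small differences
            result.append(orig)
        else:
            result.append(max(0, min(255, round(orig + amount / 100 * delta))))
    return result

edge = [100] * 5 + [180] * 5              # a simple step edge
print(unsharp_mask(edge, amount=75, radius=1))
```

On the step edge, only the pixels adjacent to the edge change (the halo); the flat areas are untouched, which is what Threshold is meant to protect in real images.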
Thanks for your help.
Comments
Thanks for reposting this here. I will be interested to hear if anyone uses Smart Sharpen routinely.
I have used it off and on ever since it was released, trying to see if it would offer me more than USM in its various modes does. I hoped it would help with camera movement via the Motion Blur setting, on those images we know we should deep-six but can't bear to part with for other reasons. But none of Motion Blur, Gaussian Blur, or Lens Blur has ever really caught on with me. I use a 2.5 GHz dual G5, so the Smart Sharpen filter can take several minutes with a 16 Mpx file from a 1DsMkII. Maybe a new quad-core Mac will make me look on it more favorably.
I find I prefer to sharpen with USM on an adjustment layer, mask out where I don't want sharpening, and finish by adjusting the Opacity slider as needed. Most of my capture sharpening is now done in Adobe Camera Raw 4.3. I find the sliders and masking available in ACR 4.3 extremely valuable and use them routinely, along with its correction for chromatic aberration.
Capture sharpening is global editing, but sharpening for output in Photoshop frequently is local and needs to be done selectively - Smart Sharpen can be done on a layer for that purpose also of course.
Moderator of the Technique Forum and Finishing School on Dgrin
I started experimenting with Smart Sharpen after I upgraded to CS3 because I figured that since Adobe must have spent a lot of R&D time, effort, and dollars creating it, it must have something good to offer over USM, and I'm not "getting my money's worth" by not using it... So I'm trying to give it one heck of a chance to prove itself useful.
I have found that with my Windows XP P4 2.4 GHz, 1.5 GB RAM computer, Smart Sharpen takes just a little longer to process a 10 Mpx file than does an action converting to LAB, applying USM, then converting back to RGB, so speed isn't too much of an issue.
Hopefully we'll hear from others with some ideas on the issue.
http://cafields.com
I haven't tried any Hiraloam techniques with Smart Sharpen, so I can't comment on that. Also, I haven't experimented much with the blur types in Smart Sharpen; I've stuck with Lens Blur. I think Gaussian Blur gives basically the same effect as USM, while Lens Blur takes a different approach.
I wish I could be more help, but I just haven't systematically examined the differences.
Duffy
Hopefully you're doing this in 16-bit, otherwise, the 2nd option is causing a lot more data loss than the first!
Author "Color Management for Photographers"
http://www.digitaldog.net/
Is it an issue with the rgb-LAB-rgb conversion?
Natural selection is responsible for every living thing that exists.
D3s, D500, D5300, and way more glass than the wife knows about.
In 16-bit, not an issue. In 8-bit, big issue! Forgive the copy and paste:
Editing in LAB: I have nothing against the LAB color model. However, there is a group of people who feel that editing in LAB is the only way to accomplish specific corrections, making it sound like a macho editing space. It is true, there are a few correction techniques that rely on a document being in the LAB color space. The question becomes whether it's worth taking the time or, worse, producing image degradation to convert from a working space to LAB and back. Every time a conversion to LAB is produced, the rounding errors and severe gamut mismatch between the two spaces can account for data loss, known as quantization errors. The amount of data loss depends on the original gamut size and gamma of the working space. For example, if the working space is Adobe RGB, which has 256 values available, converting to 8-bit LAB reduces the data down to 234 values. The net result is a loss of 22 levels. Doing the same conversion from ProPhoto RGB reduces the data to only 225 values, producing a loss of 31 levels.
Bruce Lindbloom, a well-respected color geek and scientist, has a very useful Levels Calculator, which allows you to enter values to determine the actual number of levels lost to quantization (see the "Calc page" at http://www.brucelindbloom.com). If you do decide to convert into and out of LAB, do so on a high-bit (16-bit per channel) document.
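The gist of this quantization loss can be sketched in pure Python for neutral grays, where an 8-bit round trip reduces to gray → L* → gray. This uses the published sRGB and CIE L* formulas rather than Photoshop's actual ICC conversions (no chromatic adaptation, no dither), so the exact count is only illustrative:

```python
def srgb_to_linear(v):   # v in 0..1, standard sRGB decoding
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):   # standard sRGB encoding
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def f(t):                # CIE L* helper function
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def f_inv(t):
    return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)

# For neutral grays (R=G=B) the a* and b* channels are zero, so the round
# trip is just: 8-bit gray -> L* -> quantized 8-bit L -> 8-bit gray.
survivors = set()
for g in range(256):
    y = srgb_to_linear(g / 255.0)       # relative luminance Y
    L = 116 * f(y) - 16                 # CIE L*, 0..100
    L8 = round(L / 100 * 255)           # quantize to an 8-bit L channel
    y_back = f_inv((L8 / 255 * 100 + 16) / 116)
    survivors.add(round(linear_to_srgb(y_back) * 255))

print(len(survivors))   # fewer than 256 gray levels survive the round trip
```

The losses cluster in the deep shadows and near white, where the two transfer curves have different local slopes, which matches the book's point that the exact count depends on the working space's gamma.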
Another problem with LAB is that it has a huge color gamut, and if you're working in 24-bit images, you have three channels, each containing 256 possible values. With large gamut color spaces, these 256 data points are spread further apart than in a smaller gamut space, which can result in banding in certain kinds of imagery, like smooth blue skies. LAB is derived from CIE XYZ (1931), which defines human perception; it represents all the colors we can see and its gamut is huge. Not as many tools or operations in Photoshop can operate on images in LAB, and the numeric scale isn't very intuitive for users to work with. Some applications we might want to work with can't accept a LAB file, so an additional conversion is usually necessary.
One advantage of LAB is that since the colors (the a* axis and b* axis) are separate from the Lightness (L* axis), it is possible to conduct tonal corrections that do not affect color. Hue shifts are avoided when changing lightness. The other advantage of LAB is that it is self-defining, meaning we don't need an embedded profile for files in LAB, since LAB is truly a device-independent color space. Some editing techniques can be conducted on an RGB file and produce nearly identical results to using LAB by using the Luminosity blend mode in Photoshop. For example, instead of converting a file to LAB to sharpen just the L channel, apply Unsharp Mask on your RGB file, then under the Edit menu select the Fade command and set the mode pop-up menu to Luminosity. Or apply the Unsharp Mask on a layer and set its blend mode to Luminosity. This will produce the same qualities as sharpening on the L channel without having to do a color space conversion, plus the Opacity slider is useful for fine-tuning the effect.
Some users are under the impression that Photoshop does all its conversions to and from LAB, converting on the fly. This is untrue, as it would greatly slow down performance. Instead, Photoshop uses LAB as a reference when conducting many operations. Photoshop is not actually converting pixel data between color spaces unless you, the user, actually ask for this. None of these issues should be interpreted as implying that a conversion from working space to LAB is bad. Just be aware of the issues involved with this kind of conversion and, whenever possible, try to use similar techniques that can be conducted in the RGB working space.
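The Fade-to-Luminosity technique can be mimicked numerically: sharpen a luma channel, then add the luma delta back equally to R, G and B so the color differences are untouched. A toy 1-D sketch, with Rec. 601 luma standing in for L* and a crude 3-tap blur standing in for a real USM kernel:

```python
# toy 1-D "image": one row of RGB pixels with a single edge
row = [(40, 60, 90)] * 6 + [(200, 180, 150)] * 6

def luma(p):
    r, g, b = p
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma, a stand-in for L

Y = [luma(p) for p in row]

# unsharp mask on luma only: 3-tap box blur, amount = 0.6
blur = [(Y[max(i - 1, 0)] + Y[i] + Y[min(i + 1, len(Y) - 1)]) / 3
        for i in range(len(Y))]
amount = 0.6
Y_sharp = [y + amount * (y - b) for y, b in zip(Y, blur)]

# add the luma delta equally to R, G, B -- this approximates a Luminosity-mode
# blend, changing lightness while leaving the channel differences (color) alone
out = []
for p, y0, y1 in zip(row, Y, Y_sharp):
    d = y1 - y0
    out.append(tuple(max(0, min(255, round(c + d))) for c in p))
```

Flat regions are untouched (their luma delta is zero); only the pixels at the edge gain a halo, and the R−G and G−B differences of each pixel are preserved, which is why no hue shift appears.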
Author "Color Management for Photographers"
http://www.digitaldog.net/
For fun, take a regular picture in AdobeRGB or sRGB and perform a series of well defined edits on it in 8 bit space. Then do the same series of edits on another copy of the photo, but after every edit, convert the photo to LAB and then back to the original color space. Compare the end products in a print, or at 100% view. If you see any serious loss in the one with all the LAB conversions, then by all means, be afraid of converting to LAB. I've done this, and I haven't been able to see any difference at all.
If you are working in ProPhoto, which may be an excellent idea, then you should be editing in 16 bits anyways.
BTW, this is an argument that has been going on for a long time, and I'm pretty sure that Andrew is going to disagree with me. All I can say is that I've done the repeated conversion tests, and I can't see any harm in doing the conversions. Maybe in some pictures there is some. And I don't intend to get caught in this argument again. If you are interested, try the experiment yourself and draw your own conclusions.
Duffy
As you say Duffy, it is pretty rare to see any visible change on the monitor, but as we know, there are colours outside of the monitor gamut so the visual test is not always best. It is rare, but not impossible that one does see a minor visual difference between the RGB and Lab image.
The other issue being discussed is the "invisible" degradation to the file. There are tonal levels and unique colours lost in the conversion. There are various methods used to verify this, with difference blend tests and statistical analysis being the two most common general methods.
The Hue/Saturation command in Photoshop is lossy too, just not as lossy as a conversion from RGB to Photoshop's Lab mode. I presume that when the Hue/Saturation dialog box is opened, the image is converted on the fly to HSL space behind the scenes, which is not a lossless move. This degradation is cumulative each time the Hue/Saturation dialog is used (even at zero values, if one hits OK instead of Cancel, there is data loss). I have no references for this claim and have never seen it mentioned before, but my research shows that unlike all other image adjustments, Hue/Saturation is lossy even when zero values are applied (which is one reason why there is a Cancel button).
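Photoshop's internals aren't public, so this is only a toy illustration of the claim: a fixed-point HSL round trip can alter pixels even when no adjustment is applied, because the intermediate hue/lightness/saturation values get quantized. A sketch using Python's stdlib colorsys:

```python
import colorsys

def roundtrip(r, g, b):
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    # quantize the intermediate HLS representation to 8 bits per channel,
    # as a hypothetical fixed-point implementation might
    h, l, s = (round(v * 255) / 255 for v in (h, l, s))
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(v * 255) for v in (r2, g2, b2))

# sample the RGB cube on a 16x16x16 grid and count pixels that change
# with a "zero" adjustment (pure round trip)
changed = sum(
    1
    for r in range(0, 256, 17)
    for g in range(0, 256, 17)
    for b in range(0, 256, 17)
    if roundtrip(r, g, b) != (r, g, b)
)
print(changed)
```

Neutral grays survive exactly (hue and saturation are degenerate there), but saturated colors shift by a level or two, so a nonzero count comes out of a nominally "do nothing" operation.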
Stephen Marsh.
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
USM has been around longer, so there is more detailed info on it than Smart Sharpen. Even though different under the hood, final results are similar enough that one can use most of the USM techniques with SS.
Before Adobe implemented SS in Photoshop, I had high expectations of deconvolution, but the Smart Sharpen introduced in CS2 was not really what I was looking for; it is too similar to USM. I wanted something more like this:
http://www.bialith.com/Research/BARclockblur.htm
The options are somewhat different, but Amount and Radius are similar enough in general principle. I read once that the More Accurate option sharpens the image in two iterations (sharpens twice). From memory, some images accept the More Accurate option more gracefully than others, so one will likely need to see on an image-by-image basis what effect this option has.
I sometimes use it for acquisition or capture sharpening, comparing it to USM to see if either is "better" than the other... Photoshop USM is pretty damn good, even more so with all the tricks that one can use. For output, I still use USM, as I use the Threshold option (and/or masks, Blend If).
Bruce Fraser was mostly dismissive of SS in his sharpening book, although he did give it in-depth coverage. If I remember correctly, he stuck with USM and/or High Pass sharpening (and masks, Blend If, etc.).
Deconvolution should be great for working with out of focus or motion blurred images (if an accurate PSF can be obtained). This is often easier said than done.
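For the curious, Van Cittert deconvolution is the simple iteration estimate ← estimate + (observed − blur(estimate)), and it assumes the blur kernel (the PSF) is known exactly. A 1-D Python sketch with a made-up 3-tap kernel:

```python
def blur(x):
    # assumed-known point spread function: a 3-tap [0.25, 0.5, 0.25] kernel,
    # with edge pixels replicated at the boundaries
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

def van_cittert(observed, iterations=10):
    est = list(observed)                    # start from the blurred observation
    for _ in range(iterations):
        residual = [o - b for o, b in zip(observed, blur(est))]
        est = [e + r for e, r in zip(est, residual)]   # est += (observed - H*est)
    return est

sharp = [0.0] * 8 + [1.0] * 8               # an ideal step edge
observed = blur(sharp)                      # what the "camera" recorded
restored = van_cittert(observed, 10)

err = lambda a: sum(abs(x - y) for x, y in zip(a, sharp))
```

Each pass pushes the re-blurred estimate toward the observation, so the restored edge is measurably closer to the original step than the blurred input was. With a wrong PSF, or with noise, the same iteration amplifies errors instead, which is the practical difficulty mentioned above.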
It will depend on the pixel dimensions of the image, the image content, noise/grain etc.
Even knowing these details, there is no single answer or "Holy Grail" of values for every image, although there are various general rules that one can learn to apply and with experience alter the basic rule of thumb for the image and output and viewing distance at hand (if it demands such attention).
MS Windows users may like to have a look at a clunky little free application that works in 32 bpc HSI mode and performs Van Cittert deconvolution on the intensity channel only, not the chroma channels. I generally just stick to my USM tricks, but sometimes this tool is worth it:
http://members.ozemail.com.au/~binaryfx/sharpcontrol.html
Sincerely,
Stephen Marsh
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
Well, unlike Dan, I have files that illustrate the damage, which anyone willing can try. On my public iDisk is a folder called "16-bit challenge". You'll find a Raw along with XMP settings to illustrate this, OR you can simply download a small crop and apply an action found there, which works first in 8-bit and then in 16-bit. You can visually see the differences. And these edits are not extreme (and even if they were, that's not part of his so-called challenge; the rules vacillate as he sees fit*). I can only imagine what would happen with further editing. And that's the issue: you never know what edits you may apply in the future, or the quality of the output device you may send the file to in the future.
I placed this on the site because Dan first said no one had produced files that illustrate a 16-bit advantage. This was back in 2005. When he saw these examples, that's when he went on a rant about the evils of "Ultra-Wide gamut spaces" (another new term he made up) and claimed that what I illustrated was due to the use of ProPhoto RGB. You may say fair enough, but that space is required for many of us who want to encode data we can capture and can output. So instead of saying "OK, the 16-bit challenge is now restricted to non-ultra-wide-gamut working spaces" (whatever that means), he dismissed its usefulness with such spaces and tried to convince his list that such spaces are not useful, etc. More of his usual distractions.
Then I illustrated that using ANY working space but ProPhoto RGB in Camera Raw on a pretty wide gamut image would introduce more damage. That's in a folder called ProPhoto vs. Adobe RGB (1998). You render the image in Camera Raw (you can use the XMP instructions in the DNG**) and encode in Camera Raw in sRGB, Adobe RGB and then ProPhoto RGB. Now you have three identical appearing images, all in 16-bit but in three different working spaces. Now make an Adjustment Layer of Hue Saturation and up it a small amount (say +7-8). Zoom in at 100% of all three and examine the yellow flower in various areas, you'll see degradation in all the images BUT ProPhoto RGB. Due to the wide gamut nature of this capture, the only space that doesn't produce issues IS ProPhoto RGB. And as you toggle the color spaces in ACR, examine the clipping of Saturation in the Histogram. You'll see that the only encoding space that can contain the captured and rendered colors is, surprise, ProPhoto RGB.
So, here are two examples of why we need wide gamut spaces and why we need high bit editing capabilities. But of course, the engineers at Adobe don't know that this is all folly (according to Dan).
As to the saturation tweak in Photoshop and the damage produced versus just using something like Vibrance in ACR, well yes, this illustrates another fallacy of Dan's, that ACR is "Unfit for professional use" (I have the entire quote if you wish) and that anyone who knows what they are doing can correct a JPEG better and faster than someone in ACR. Of course, I disagree (and challenged him to prove this on stage with me, which he ignored). This illustrates why we should be doing such edits at the Raw conversion stage, in wide gamut, in high bit (the only way ACR and LR work). But again, Dan knows better than the engineers of these products too.
*http://www.brucelindbloom.com/index.html?DanMargulis.html
**Dan originally dismissed this as unfair because I uploaded a DNG and he said that wasn't the Raw file, until a few others on the list corrected him (he didn't know what a DNG was). After that, in typical fashion, he simply dismissed and ignored the discussion.
As for the location of the files:
My public iDisk:
thedigitaldog
Name (lower case): public
Password (lower case): public (note the first letter is NOT capitalized).
To go there via a web browser, use this URL:
http://idisk.mac.com/thedigitaldog-Public
Author "Color Management for Photographers"
http://www.digitaldog.net/
I wasn't talking about the advantages of 16 bit editing over 8 bit editing. I agree that there are advantages to 16 bit editing, especially in a wide gamut space. And I also think the only disadvantages are the slower performance on a system that is not up to speed, and the extra storage space one might need.
The question I was talking about was the destructive effect of a move from RGB to LAB and back to RGB. This is also an issue that Dan addresses, so I can understand the confusion. I've done some experiments where I did RGB adjustments to a file and printed it, then did the same adjustments but converted to LAB and back after each adjustment (about 15-20 round trips in all for some files). I did these tests in 8-bit, and could not tell the difference between the images at a 100% crop on the monitor, or from sample prints. That satisfied me that there is minimal danger, for my purposes, in doing round trips to LAB.
It may be that there are situations where the round trip to LAB matters, and I agree that in principle it is better to avoid a conversion if you can get to your endpoint as easily without making any conversions. But I don't agree that any conversion to LAB is a fearsome thing, even if you are editing in 8 bits.
And I'm not interested in turning this thread into a food fight over some of Dan's more controversial ideas. (After working with the new ACR for a while, I agree with you that it's a very useful tool, at least for me. I'm not prepared to say, however, that I can do better with ACR than Dan could do with PS. And while I agree with you about ACR, your point about Hue/Saturation doesn't disprove Dan's point, because Hue/Saturation is not a type of adjustment that he ever advocates using, at least in the three books of his that I've read.)
Duffy
Well, Dan doesn't agree with either of us. And while one can move from RGB to Lab and back in high bit without suffering the same losses as doing it in 8-bit, the question becomes: what does it bring to the party? There are some useful techniques (assuming you botched the Raw rendering and just have to "fix" this in Photoshop). But converting to Lab just to sharpen the L channel is time consuming, ineffective (when we have luminosity blending), and also causes the data loss discussed here.
That's probably true. But we don't know when, in the future, an edit takes the data over the top and does cause damage. Or what happens when we send that data to a newer device that may reveal banding (depending on image content). Look at the differences in dither between an Epson inkjet circa 2003 and 2008. Imagine what we'll have in just two years (if I told you, Epson would have me killed, but you WILL see increasingly finer dither).
I don't either, as long as users understand what the potential issues are. But the fellow who asks us to use Lab to do things we can do in a Raw converter or in the encoded color space would like you to believe there's nothing useful that high bit or wide gamut spaces do. This is clearly incorrect.
Author "Color Management for Photographers"
http://www.digitaldog.net/
So, if there's a shot I want to print on a new device in a couple of years, I fully expect that I will start over with the originals and the best tools available to me. Thus, the number of times I went back and forth to LAB sometime in the past is (at least for me) quite irrelevant.
Otherwise, I think we basically agree. I would point out that there are some special circumstances where a sharpen in RGB with Luminosity mode will give white halos that are unattractive, while the same sharpen on LAB's L channel will give lightly colored halos that are less offensive. Admittedly, the number of shots where this occurs is very small, and you could always just go back and do the sharpen in LAB when the problem arises. But the same could be said for situations where excess conversions lead one into hazardous waters.
Duffy
Doesn't seem like that long ago, does it?
I'd also like to say that I've gained much more useful information on DGrin than any other forum that I've lurked on and I appreciate the knowledge and expertise of the members here. Thanks DGrin.
I've read the very interesting in-depth discussion here and have gleaned some things that I will try to implement in my sharpening experimentations.
It looks like Smart Sharpening is not going to be the "Holy Grail" which we seek but can never find.
Andrew's statement here sounds like a good method that I'll try: My use of LAB sharpening was to reduce the halo and noise, and I'll try the suggested method as an alternative. Also, in answer to a previous reply, I sharpen in 16 bit whenever possible, except for web sharpening after reduction and compression, or unless I'm processing a .jpg file.
Regarding 16 bit vs 8 bit sharpening, is there a recommendation to convert an 8 bit .jpg to 16 bit for sharpening, or just leave it as 8 bit? Most of the time my preference is to process RAW via Lightroom then save as .PSD and finish processing with CS3, but sometimes I'll process a .jpg, which I save as an 8 bit TIFF to avoid loss.
BinaryFx, thanks for addressing each of my questions regarding Smart Sharpening separately and thanks for your referral of the SharpControl program. I'll be trying that out very soon.
I've also downloaded the TLR Sharpening Toolkit from The Light's Right Studio and am experimenting with its results. Does anyone here have any experience with it for sharpening?
Thanks again.
http://cafields.com
Leave the master in 16-bit, make a copy as a JPEG for those needs. Once you convert from 16-bit to 8-bit, there's no going back (the data loss is permanent). And going backwards buys you nothing, so if you have an 8-bit JPEG, leave it as is.
The beauty of LR is you can render out iterations so easily so you can export an 8-bit JPEG in say sRGB for the web, but a high bit wider gamut for print needs. OR do all the work on the high bit master, save a copy as a JPEG to whatever size you wish.
Author "Color Management for Photographers"
http://www.digitaldog.net/
Thanks for the thanks CAFields, as threads sometimes degenerate, I wanted to at least try to answer as many questions in the OP as I could, as early in the thread as I could!
The results of the Van Cittert deconvolution used in SharpControl appear to be very similar to traditional USM results (perhaps this is what SS uses?). The other methods in the previous link do a better job.
A couple of other similar commercial products include:
http://www.fixerlabs.com/EN/photoshop_plugins/focusfixer.htm
http://www.focusmagic.com/
Best,
Stephen Marsh.
http://members.ozemail.com.au/~binaryfx/
http://prepression.blogspot.com/
Example: RGB-to-Lab Quantization Loss
http://www.brucelindbloom.com/index.html?RGB16Million.html
Author "Color Management for Photographers"
http://www.digitaldog.net/
I'd like to see what the conversion data loss is on a 16-bit image. I never go to LAB in 8-bits. I don't consider myself a LAB junky like some, but it can be very useful for certain types of operations (particularly when you want to use blend-if on colors rather than create a complicated mask) and I always make sure I'm in 16-bits before doing so.
FYI, I took his image, loaded it into Photoshop, assigned sRGB, duplicated it, converted duplicate to LAB, converted back to sRGB, then copied into a new layer over the original and set the blend mode to difference. While you can clearly see the color loss in a changed histogram, when I set the blend mode to difference, I cannot see any visual difference, even at 1000%. If his numbers are right for 8-bit LAB conversions, I was expecting to see such a drastic (87%) color loss visually, but I haven't found a way to see it yet.
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
Here's what I'd suggest you do to see the differences:
In Photoshop, open both images.
Hold down the Option or Alt key, and go to Image > Apply Image.
Set whichever image isn't listed as the target as the source. Set the Channel as RGB. Set the Blending to Subtract, with an Opacity of 100, a Scale of 1, and an Offset of 128.
If the images were truly identical, every pixel in the image would be a solid level 128 gray. Pixels that aren't level 128 gray are different by the amount they depart from 128 gray. You can use Levels to exaggerate the difference, which makes patterns easier to see.
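Per channel, the arithmetic Apply Image performs with the Subtract blend is just result = (target − source) ÷ scale + offset, clipped to 0-255. A quick sketch of that per-pixel math:

```python
def apply_image_subtract(target, source, offset=128, scale=1):
    # Photoshop Apply Image, Subtract blending:
    #   result = (target - source) / scale + offset, clipped to 0..255
    return [max(0, min(255, (t - s) // scale + offset))
            for t, s in zip(target, source)]

a = [10, 128, 200, 255]   # one channel of the "target" image
b = [10, 130, 190, 255]   # the same channel of the "source" image
diff = apply_image_subtract(a, b)
# identical pixels land exactly on 128; differing pixels depart from 128
# by the signed amount of the difference
```

With Offset 128, differences in either direction survive (a plain Subtract at Offset 0 would clip negative differences to black), which is why the technique above asks for exactly these settings.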
Author "Color Management for Photographers"
http://www.digitaldog.net/
I am interested in this image and the quantization loss, but I am unable to duplicate the loss. I used Photoshop CS2 and color settings of Relative Colorimetric and the ACE engine.
Converting the base image to LAB space and then back to RGB and saving with TIFF/LZW compression, I am still detecting 16 million colors.
Converting the base image to sRGB, then converting to LAB and back to RGB had no additional effect (still 16M colors.)
My default color working space is sRGB if that matters.
Am I missing something? (Probably)
Moderator of the Cameras and Accessories forums
Detecting how?
Author "Color Management for Photographers"
http://www.digitaldog.net/
With 8-bit images, I can see the delta after the LAB roundtrip using this method. At 100%, it's hard to see with the naked eye, but if you either zoom in a lot or exaggerate it with levels, you can see it easily. I'm not sure if the difference would show in a print, but certainly if you were going to do more editing I would think you wouldn't want to have this color degradation.
If I do the RGB/LAB conversions in 16-bits, there is almost no difference at all. You can see a very tiny difference, but it's very, very small compared to the 8-bit example. It looks very safe to me. In fact, it's less than a roundtrip 8-bit conversion from sRGB to AdobeRGB and back to sRGB.
Homepage • Popular
JFriend's javascript customizations • Secrets for getting fast answers on Dgrin
Always include a link to your site when posting a question
For these I used IrfanView and its "Image information", ... and I am a doofus.
I just noticed that I was looking at the wrong information.
OK, looking at the "right" information I still wind up with better results than that link.
The base image has 16,777,216 unique colors.
After converting the base image to sRGB, then converting to LAB and back to RGB (sRGB?) the yield is 5,133,185 colors.
Interestingly, if I upres the image 200 percent prior to the color space conversion the loss is apparently reduced somewhat to 6,486,579 colors (unless I messed up again).
Moderator of the Cameras and Accessories forums
I see it after the round trip and using the Apply Image trick. Then go into Levels and you can see, in the middle, the areas of greatest loss. Move the outer sliders inward and you'll see all the areas that, as I mentioned, differ from 128. Zoom in at 100% (those colors are pretty small).
Note too the role of Dither! I see the damage with or without, but dither should 'smooth' out the effects a tad.
Author "Color Management for Photographers"
http://www.digitaldog.net/