Andrew—your knowledge and credentials are obvious and need no qualification, but forgive me if I say that you sound like you're all strategy and no tactics. Anyone who can acknowledge that "accurate color" will, nonetheless, be ugly color that pleases no one, yet not realize that the argument is essentially meaningless beyond that point, is mistaking theory for results.
OK let me try again to see if I can make this clear.
There's only one way to really define accurate color, just as there's only one way to define accurate distance, speed, etc.: you measure it. Anything else is someone's interpretation of accuracy, and all we can do is agree or disagree; there's no way to say "but this is the measured color of what you placed in front of the camera." If I measure the color, illuminant and dynamic range of a scene AND I send those numbers to your output device (a display), it not only doesn't look like what you saw but looks pretty ugly. That is due to what's known as the reference medium problem: your display has a fixed gamut and contrast ratio and expects output-referred, not scene-referred colorimetry. The ICC paper spells this out; it was co-written by a color scientist. If you want to take up the issue that accurate scene colorimetry doesn't look correct output-referred on the device (using this fellow's examples), you two can debate this. And you can debate that the earth is flat or the moon is made of cheese. Scientific analysis would prove you wrong.
Pleasing color is the color on some output device that you, or I, or anyone likes. We can't measure that and compare it to the scene to gauge accuracy, because the reference media are totally different, as are a host of other factors. You can say that this display or print is showing you the color exactly as you recall seeing it, but there's no way to measure or gauge whether what you believe to be true is or isn't. So thus far, the debate, if there is one, is that the term accurate color is best reserved for describing the color of the scene, and pleasing color is best reserved for describing the numbers you produce when you get the color appearance you desire. If you want to call that accurate, OK, but I think it's sloppy, and it doesn't give anyone any way to gauge whether this is the case or not. If you like huge fudge factors in describing what it is you're doing with a big pile of numbers on a computer, so be it.
We send all kinds of different numbers to devices in an attempt to produce matching color appearance and thanks to metameric matches, this is achieved every day.
I've been producing color for magazine covers, photo spreads, photographers and web sites for nearly 20 years, and doing it well. I've plied my craft in shops that spit on the notion of color management, and I've done it in shops that worship the concept like it was the salvation of Western civilization. I'm currently working in an environment that typifies most such environments: they pay lip service to the idea but really don't do it well.
And your point is what in reference to the discussion?
When we acquired a new proofing system a while back, our production wizard ordered up a proof from our old provider to compare with ours. Shock of shocks, they didn't match. So he embarked on a city-wide quest, ordering contract proofs from about a dozen different services. All were different. There's nothing more pathetic than a so-called production wizard staring at a pile of proofs that don't match and wailing "Which one is correct?" I told him "The one with the client's initials in the lower right corner, the one a printer contracts to match on press. That's your correct color." That's tactics.
Other than to say that what you describe doesn't surprise me in the least: again, what's your point in the context of the current discussion? If you want to discuss the need for output-specific standards like G7 or TR001 in the print industry, I'm ready. If you want to admit that process control and press standards are not well implemented in the US, I'm with you, and I find the above comments about your proofing experiences in line with mine.
Rutt makes a passionate case for Dan Margulis' approach, by the numbers, you make a passionate case for dismissing Rutt's case, also based on numbers... numbers, schmumbers.
No, I didn't say that, and in fact I discussed where using numbers IS important and useful! But the question becomes: what numbers, based on what, and to accomplish what?
What we have here are photographers looking for tactics, and theory be damned. Personally, I don't believe you fail to understand the point Rutt made in his last post; you're too smart. Talk all you want about this space or that profile: in the real world, when you need results, taking a look at the numbers and noting some general patterns about the relationships between the different channels is not only warranted; to refuse to do so is amateurish, and to deny its usefulness ceases to be an artistic, or scientific, argument and becomes mere politics.
Is someone attacking you? Because again, your point above doesn't make any sense in the context of the discussion.
The myth of color management rests on the notion that there exists, somewhere, a device-independent definition of color from which all others can be measured, and which can serve as a standard for translating from space to space. The fatal flaw is that, by definition, we have no access to that mythical color save through a, you guessed it, device. So it becomes an act of faith, not science, to talk about accurate color. It's all a compromise, a quest for pleasing color. There is no other. The only question is who you please, and usually that's the guy who signs the check.
First, on what basis do you call this a myth? Second, the color model Dan now loves over CMYK, LAB, IS based on human vision, and the numbers define color appearance based on human perception WHEN talking about single patches of color (NOT color in context, since there are no color appearance models at work here; if there were, the two patches in the optical illusion I illustrated would be accounted for). There do exist several device-independent, mathematically created spaces that define all the colors humans can see. You can measure the colors and produce these values in a very unambiguous way, just as you can use the speedometer in your car to gauge how fast you are moving. And I don't know who has said this is an act of faith, other than you above. So again, most of your points are lost on me, because I think you are trying to defend someone else or some other belief system, but thus far you're not doing so with anything other than emotion.
Other than apparently defending Rutt (who needs no defending) or Dan (who does), what’s your point, what’s on your mind?
No, I didn't say that, and in fact I discussed where using numbers IS important and useful! But the question becomes: what numbers, based on what, and to accomplish what?
I guess I missed that. Measuring for neutrality? Or something else?
The measurements I make are not very fine at all. I mean that I don't pay that much attention to the exact numbers, only to some gross relationships. I find LAB measurements convenient because it's easy for me to see if something is green or blue instead of magenta or yellow (for example). Neutrality stands out to me because it's 0. I can tell if flesh is more magenta than yellow at a glance. No big deal. I'm sure I could learn to do this in RGB or CMYK or whatever. It just happens that I find the LAB numbers easy to use. So that is the answer to the question of what numbers.
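That sort of at-a-glance reading is simple enough to sketch in code. What follows is only my own illustration of the habit described above; the function name and the ±3 near-neutral band are my assumptions, not anyone's published rule:

```python
def read_lab(L, a, b, tol=3):
    """Rough at-a-glance reading of a single LAB sample, as described
    above: a* leans magenta (+) or green (-), b* leans yellow (+) or
    blue (-), and neutrality stands out because both are near 0.
    `tol` is an arbitrary near-neutral band of my own choosing."""
    if abs(a) <= tol and abs(b) <= tol:
        return "near neutral"
    leans = []
    if abs(a) > tol:
        leans.append("magenta" if a > 0 else "green")
    if abs(b) > tol:
        leans.append("yellow" if b > 0 else "blue")
    return " and ".join(leans)

# Flesh "more magenta than yellow" at a glance means a* > b* > 0
print(read_lab(65, 18, 12))  # magenta and yellow
print(read_lab(50, 1, -1))   # near neutral
```

Nothing here is precise measurement; it's exactly the gross-relationship glance described above, just written down.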
As to what I want to accomplish with these numbers, it's detecting casts which, when corrected, will represent an improvement. I have some theories about when this is the case, but as I said before, this thread is about your ideas, not mine or Dan's or anyone else's (at least, that's my goal). I've learned that by doing this kind of measuring I can often detect the possibility of such an improvement when I wouldn't have done so otherwise.
So for me this is the single biggest question for you. Do you never do this and just trust your eyes and good color management all the time? If so, did you have to train yourself somehow to see what I don't seem to be able to see? Do you think I can learn it?
I guess I missed that. Measuring for neutrality? Or something else?
Neutrality, black and white clipping, saturation clipping of all colors.
I find LAB measurements convenient because it's easy for me to see if something is green or blue instead of magenta or yellow (for example). Neutrality stands out to me because it's 0.
A cousin of LAB, and a much easier scale to use, is LCH, which I wish Adobe would add to the Info palette. Chroma is where you'd look for neutrality (zero is neutral; the higher the number, the more saturated). It's the a* and b* portions of LAB that are not at all intuitive, but with LCH, color is defined using H for hue (zero to 360, which makes sense when you look at a hue wheel).
It is a lot easier to initially spot the degree of neutrality with a single value but again, in all RGB working spaces, neutral is R=G=B.
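For anyone curious, LCH is just the polar form of the same a*/b* pair, so the conversion is trivial. A minimal sketch (my own code, not an Adobe feature):

```python
import math

def lab_to_lch(L, a, b):
    """Polar reading of LAB: chroma C* is the distance from the
    neutral axis (0 means neutral; the larger the value, the more
    saturated), and hue H is the angle of the (a*, b*) pair on the
    hue wheel, expressed as 0-360 degrees."""
    C = math.hypot(a, b)
    H = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, H

# A neutral gray: a* = b* = 0, so chroma is 0 regardless of L
print(lab_to_lch(50, 0, 0))
```

With a single chroma number, "how far from neutral?" is one glance instead of two.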
I can tell if flesh is more magenta than yellow at a glance. No big deal. I'm sure I could learn to do this in RGB or CMYK or whatever. It just happens that I find the LAB numbers easy to use. So that is the answer to the question of what numbers.
The method you're using is sound; the color space isn't, really (my original point in the other forum post that started all this). CMYK is a very specific output color space, and the numbers can vastly change from one space to another. LAB makes MUCH more sense in this respect: it's based on human color vision, and it's self-defining, unlike RGB (with RGB, you can't provide meaningful numbers without an associated color space, and the same is true of CMYK).
So for me this is the single biggest question for you. Do you never do this and just trust your eyes and good color management all the time? If so, did you have to train yourself somehow to see what I don't seem to be able to see? Do you think I can learn it?
I use numbers, yes, but I use what I see first and foremost, because again, you're really just getting numeric values of one or a group of solid colors, and very often you're not getting the colors in context with the rest of the image, which is important. So, knowing the values at which I clip shadows and highlights is useful, though I don't always want to clip that data (sometimes I do). Knowing that a color is maxed out in saturation is useful. If I'm looking at skin tone, I want to see that part of the image in context with all the thousands of other similar and dissimilar colored pixels.
Ultimately the proof is in the print, but I'd like to see that color appearance on screen before I click the Print button. If a lot of my work were solid colors (pie charts), working BTN would be far more effective, but as the example of the cement and building shot at night shows, simply color correcting because you've produced a neutral isn't effective.
Try this: use the set-neutral (WB) eyedroppers in ACR, LR or Photoshop. Ever hunted around, clicking, and found you didn't like the overall color? Yet every time you clicked, you did produce a neutral. The key is having an image where the neutrals should indeed be neutral (that's not going to fly on a sunset image). So it's not just clicking on the right pixels to define a neutral; it's looking at the entire image and deciding whether the correction works for you or doesn't (again, neutral or cast, neither is accurate, but the right answer IS pleasing).
Yes, I agree, Dan uses this term in two different ways.
To describe LAB colors such as L=0, A=0, B=-128. We could discuss the semantics of the word, "impossible" as applied to these, but it's not what I mean in this instance, so it would be a digression.
To describe evidence of a cast: for example, human faces which measure negative in A and/or B, midday skies which measure strongly negative in A, wedding dresses that aren't neutral. This is what I was trying to describe. As I said, I apologize for using Dan's language, but he taught me to do this and it has proved to be a useful thing to do.
Actually, for Lab = 0, 0, -128, Dan uses the term "imaginary" instead of "impossible." For Dan, and I think for what you are talking about here, an impossible color is one that can be displayed but does not actually occur anywhere under white light: examples are blue fur, green skin, purple grass, yellow skies, red Union army uniforms, etc. In this sense, impossible is something that falls on the extreme end of implausibility.
Actually, for Lab = 0, 0, -128, Dan uses the term "imaginary" instead of "impossible." For Dan, and I think for what you are talking about here, an impossible color is one that can be displayed but does not actually occur anywhere under white light: examples are blue fur, green skin, purple grass, yellow skies, red Union army uniforms, etc. In this sense, impossible is something that falls on the extreme end of implausibility.
Duffy
Imaginary, impossible, what's he trying to say here?
Displayed can be output; the gamut of our displays is roughly sRGB in size, pretty small. And what output device is he referring to? CMYK SWOP ink on paper? Every output device on the planet? Implausible based on whose criteria? It's a silly term he made up that doesn't mean anything.
I have seen images of blue fur and green skin (how about those crazy kids at football games, or someone on stage?). The entire concept is asinine.
Imaginary, impossible, what's he trying to say here?
Displayed can be output; the gamut of our displays is roughly sRGB in size, pretty small. And what output device is he referring to? CMYK SWOP ink on paper? Every output device on the planet? Implausible based on whose criteria? It's a silly term he made up that doesn't mean anything.
I have seen images of blue fur and green skin (how about those crazy kids at football games, or someone on stage?). The entire concept is asinine.
I've been standing on the sidelines here, but this is just malarkey. The concept of colors that don't occur in nature (we're not talking about crazy kids with green paint on their faces; we're talking about natural skin color or natural fur color) is a real and useful concept to understand and know how to use. I personally find it useful; it can definitely help identify subtle color casts in many contexts and give you more specific direction for how to remove them.
You can say you don't personally find it useful, but many do, so calling the entire concept asinine is just showing how irrational you are being and these extreme positions about anything Margulis says are distracting from the times when you do have a useful point to make.
I know this concept is useful to me so I'm not going to bother to argue that with you. If you want to offer something better to replace it with that isn't just looking at the image with my eyes, I am open to new and improved ways of doing things.
The concept of colors that don't occur in nature (we're not talking about crazy kids with green paint on their faces; we're talking about natural skin color or natural fur color) is a real and useful concept to understand and know how to use.
What's asinine is making up a term like "impossible color" when, in fact, the color is real and can be captured, measured, defined numerically and output. Lots of these colors are natural. What's asinine is making up a term that doesn't mean anything and isn't at all well defined.
Think about the term: impossible color. Well, there are wavelengths of light that fall outside human vision. I can't see anything beyond 700nm or below, say, 400nm, so OK, those are invisible, impossible colors that maybe a dog or a fish can see. Again, it's real easy to make up this stuff so you sound important, but explain what you're saying here. A color I CAN capture is impossible? Prove it. And what makes the term (which really sounds totally bogus to me) important?
You can say you don't personally find it useful, but many do, so calling the entire concept asinine is just showing how irrational you are being and these extreme positions about anything Margulis says are distracting from the times when you do have a useful point to make.
Wow, let's be clear: what's asinine is the term as described to me. I didn't say the entire concept is asinine.
I know this concept is useful to me so I'm not going to bother to argue that with you.
OK, so define an impossible color that can be specified using our 24-bit systems, and explain how calling this color impossible is useful.
As Dan uses it, the term "impossible color" doesn't refer to the color itself. The term is a shorthand, and within Dan's writings it's something of a piece of jargon. It's a way of speaking when doing his kind of color correction.
You may not like the term. But many, many people both understand what he is talking about and find it useful. As such, the idea behind the term is both meaningful and useful (to many), no matter how asinine you think it sounds. (BTW, I think the term "output-referred" sounds at least as awful, but since I understand what you mean by it, I won't object too hard to your use of it, even though I would never ever use the term myself.)
I also have a hard time believing that you don't understand the idea behind the term. If someone shows you a picture of a strawberry, and it's stronger in the A channel than the B channel, that's a clue that there's something wrong. If it shows negative in the B channel, then there's something even further amiss: strawberries aren't purple. I realize that's not a scientific statement, but I think it's meaningful and something that most people can understand and use.
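Put another way, the strawberry clue reads like a two-line plausibility check. Sketched out below; the function and its zero cutoffs are my own paraphrase of the rule of thumb as stated above, not exact numbers from Dan:

```python
def strawberry_flags(a, b):
    """Apply the clue above to a LAB sample from a strawberry:
    being stronger in a* than b* is a hint something is wrong, and
    a negative b* (leaning purple) is further amiss. The cutoffs
    at zero are my simplification; no exact numbers are given."""
    flags = []
    if a > b:
        flags.append("stronger in a* than b*")
    if b < 0:
        flags.append("negative b*: strawberries aren't purple")
    return flags

print(strawberry_flags(30, 45))   # plausible reading: no flags
print(strawberry_flags(50, -10))  # both clues fire
```

Not science, as conceded above, but it shows the idea is concrete enough to write down.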
Now go to Rutt's question. What happens when many people's eyes don't spot these anomalies? That does happen -- I've seen it again and again on various forums. People think colors look just fine. Then someone plays with the pictures, finds that something was off, and fixes it based on the discovery. It's not uncommon for everyone to then agree that the picture is now better. With your approach, how would you make this sort of improvement?
Honestly, the blues to the left of center are bluer than any sky I have ever seen. I'm not sure I would say that they are impossible (in Dan's sense) but they are certainly implausible. I look at a picture like that as having highly exaggerated colors, at least in the skies. More to the point, I think people might agree or disagree with me, but it would be a rare bird who would say "exaggerated colors? I have absolutely no idea what you are talking about."
The method you're using is sound; the color space isn't, really (my original point in the other forum post that started all this). CMYK is a very specific output color space, and the numbers can vastly change from one space to another. LAB makes MUCH more sense in this respect: it's based on human color vision, and it's self-defining, unlike RGB (with RGB, you can't provide meaningful numbers without an associated color space, and the same is true of CMYK).
I must not have been clear. I use LAB not CMYK for this.
Try this: use the set-neutral (WB) eyedroppers in ACR, LR or Photoshop. Ever hunted around, clicking, and found you didn't like the overall color? Yet every time you clicked, you did produce a neutral. The key is having an image where the neutrals should indeed be neutral (that's not going to fly on a sunset image). So it's not just clicking on the right pixels to define a neutral; it's looking at the entire image and deciding whether the correction works for you or doesn't (again, neutral or cast, neither is accurate, but the right answer IS pleasing).
Yes, I do this all the time to try to find a good starting point. For the ballet shots, I can even talk to the lighting director to find out the color temperature of the lights he uses. But these days I find that I can do more with flesh tones, skies, snow, fur, water, etc. than with neutrals. When there really are neutrals, they're often the best clue, but if they lead to bad skin tones, then I won't use them.
The second ballet shot (the one with many dancers) is a good example. The dresses are white but the light is very blue. Neutralize the white dresses and the flesh tones are very wrong. Look at it carefully and you'll see the blue light on the faces. What I did was to get some places on the flesh that had A<=B in LAB and the rest looked right. You could see the blue light but also see that the faces were healthy flesh tones. The dresses ended up blue. Looked like what I saw. But I don't think that I (as opposed to someone else) could have done this without measuring and thinking about how it was all supposed to work together. Much more complicated than just setting the WB to make the known neutral dresses neutral.
Is this very different from what you would have done?
As Dan uses it, the term "impossible color" doesn't refer to the color itself. The term is a shorthand, and within Dan's writings it's something of a piece of jargon. It's a way of speaking when doing his kind of color correction.
I've decided that I too should make up terms about color!
Universal color. This is color contained within our universe. Note that you should find out the exact boundaries of our universe as being off a light year here or there doesn't cut it.
Half color. This is color you see when only one eye is open. Try alternating your open and closed eye, do the colors look different?
Vegetable color. Everyone knows what tomato red looks like, right? Or avocado green. Little reason to measure the color or use a color space and numbers; when we use a vegetable color, everyone knows what we're looking at and seeing.
You may not like the term. But many, many people both understand what he is talking about and find it useful. As such, the idea behind the term is both meaningful and useful (to many), no matter how asinine you think it sounds.
I hope you find my new terms useful too in describing color!
(BTW, I think the term "output-referred" sounds at least as awful, but since I understand what you mean by it, I won't object too hard to your use of it, even though I would never ever use the term myself.)
It doesn't have a nice ring to it, I'd agree. However, ask any color scientist or engineer working in imaging if they know the term, and they will. Please Google "output-referred" and then "impossible color" and tell me what you find, and where it's referenced. The ICC, a body of manufacturers and color scientists, uses the term all over their site when discussing the concepts I've attempted to explain here.
I'm not a big fan of the name Perceptual Rendering, but that's the name for this gamut-compression algorithm. I don't really care for Unsharp Mask either; many people are confused by the "Unsharp" part, but the term has roots in analog photography and is used as such.
I also have a hard time believing that you don't understand the idea behind the term. If someone shows you a picture of a strawberry, and it's stronger in the A channel than the B channel, that's a clue that there's something wrong.
Why don't I just look at it? And what makes this color impossible? Clearly it IS possible. Why not say it's got a color cast, or doesn't look like a strawberry?
If it shows negative in the B channel, then there's something even further amiss: strawberries aren't purple. I realize that's not a scientific statement, but I think it's meaningful and something that most people can understand and use.
No, it's not, and worse, it's got an enormous fudge factor, since it's totally undefined. So which LAB values automatically ensure the color is possible, and which make it impossible?
Now go to Rutt's question. What happens when many people's eyes don't spot these anomalies? That does happen -- I've seen it again and again on various forums.
Is this like the tree falling in the forest when no one is around?
Seriously. You're suggesting that some users see their images and don't find a problem, but others do, fix it, and the original user is amazed. OK, that's useful, I guess, if we want to edit colors by committee. But again, it doesn't tell us very much about the user. What system did they have? Was it calibrated? Did they look at the image and like it, only to find an edit done elsewhere more pleasing? Was the user happy with every edit? Did it match the original?
People think colors look just fine. Then someone plays with the pictures, finds that something was off, and fixes it based on the discovery. It's not uncommon for everyone to then agree that the picture is now better. With your approach, how would you make this sort of improvement?
Duffy
Nothing you said here is something I'd disagree with. Clearly people find they prefer the color appearance of some images after others mess with them.
I would submit that the first thing to do IS teach them numbers (highlight, shadow, neutrality if necessary and desired), along with a display that produces a reasonably correct preview of the numbers.
I actually submitted an image I shot to the Color Theory list and had others mess with it. But this wasn't a simple bride in a dress or something illuminated under standard-appearing conditions. I submitted the image because it's all about interpreting the image, much like the work of Pete Turner (by all means, lay the Info palette over his stuff). And to be honest, I didn't like any of the renderings (all done from raw) as much as mine, simply because this was a very unusual image for which there are many possible color options (if you want to see the shot, it's the fourth in this web gallery: http://digitaldog.net/ARsAmazonPicks/ ). Now, some of the renderings were quite interesting, and obviously the people making them preferred them.
There's a LOT of interpretation in this game. Numbers only work so well in so many cases, but I'm not dismissing their use. I do question using a CMYK output color space for generating the numbers. I do question making up terms for the club which make no practical sense outside it. I mean, impossible colors? How about wrong? No, it doesn't sound sexy, but it makes sense. I do question using the term accurate to describe something that can't be measured so that this so-called accuracy can be verified. You say this color is accurate; I say it isn't. How do we settle this debate? Well, we could actually measure the darn colors.
IF the color numbers you have produce the color on output you desire, OK, that's accurate to a degree. But we can't measure this, because, I can assure you, the color numbers that produced the print you like are absolutely not the same colors you had in front of your camera.
Semantics? Maybe, but good teachers attempt to define terms that have little if any wiggle room should someone question what on earth they are trying to teach. Making up terms like Vegetable color might actually get some attention my way, but it's not worth it, because the term, as I'm sure most of you would agree, is silly.
To end this (I really do want to go out and enjoy the 4th) let me leave you with an old Chinese proverb, which says: The first step towards genius is calling things by their proper name.
I must not have been clear. I use LAB not CMYK for this.
Is this very different from what you would have done?
Not really. The interesting bit is when you say you neutralized the dress and the skin was wrong. Wrong visually and/or numerically? IOW, if all is well, both back up your opinion of this rendering. It also shows that BTN (using, in this case, the neutral color of the dress) is just dead wrong. But the dress is white! But it looks wrong.
I am a bit puzzled. There are two possible goals in image processing: making an accurate reproduction and making a pleasing image. If you want an accurate reproduction, you start with a reference light source and a calibrated camera. If you didn't do that, creating an accurate reproduction is likely a fool's errand.
The premise behind trying to make a white dress shot under a blue light white by the numbers is rooted in trying to create an accurate reproduction of the dress. Is that really what you want to achieve? Trying to get an accurate reproduction of a scene shot under anything other than very simple black-body-spectrum lights is fraught with peril. The real interaction between light and materials requires modelling the full spectrum, not just the R, G and B channels you capture in a camera. As a result, the color shifts under gelled lights are unlikely to be simply represented in terms of any of the common methods for adjusting images.
In my personal experience, I find correcting by the numbers to be most useful when shooting with studio lights or in sunlight. Standard color-balancing tools work quite well there, because the lights are well enough behaved that a reasonably accurate reproduction is possible, and in those cases I generally find that it is easiest to create a pleasing image by starting with an accurate reproduction.
However, when shooting in mixed or complicated light, I often find that the best route to a pleasing image is to acknowledge the colored light and deliberately leave the cast in. A scene shot next to a small incandescent bulb can look better with an orange cast, because the viewer knows the light is colored. Similarly, the viewer knows that the lights on stage are colored, so leaving a white dress blue is completely reasonable. Beyond that, by leaving the cast on the dress you give your viewers a hint about the color of the light, which can change their expectations about the colors in the rest of the image. So, on a stage lit with blue light, the skin tone that creates the most pleasing photograph may in fact have a negative LAB B channel, even though that would be completely wrong if you were trying to create a reproduction.
The more I read what you are saying, the more it sounds like, for you, there is such a thing as "accurate" color and there is such a thing as "pleasing" color. Accuracy, as far as you are concerned, must be measurable by some instrument. What makes something pleasing, however, has a big fudge factor and is largely subjective. Because it is subjective, you think it's hopeless to give any advice about how someone can go about making a photograph more pleasing (I may be overstating, but this is the impression I get). The best you think anyone can do is say: use your eyes. And there can be no better or worse in the realm of what's pleasing, because what's better can't be nailed down exactly by some instrument.
Is this a fair summary, or have I misrepresented your views in some way, or do I fail to appreciate some subtlety? (Keep in mind that nothing I've said has anything to do with gamuts, color spaces, calibration, profiles, or anything else that is technical. It could apply as well to painting as to photography.)
One other thing: I prefer "A rose by any other name would smell as sweet" to your Chinese proverb. And I like "A rose is a rose is a rose" better than either.
Not really. The interesting bit is when you say you neutralized the dress and the skin was wrong. Wrong visually and/or numerically? IOW, if all is well, both back up your opinion of this rendering. It also shows that BTN (using, in this case, the neutral color of the dress) is just dead wrong. But the dress is white! But it looks wrong.
Neutralize the white gauze of the dresses and the flesh is way too yellow. Both BTN and to anyone's eye, I'm pretty sure. It's not at all subtle.
This is an unusual shot, even for theater light. Usually the lighting director doesn't want an obvious cast like this any more than a portrait or wedding photographer. But this is a famous ballet and the blue light has come to be a part of its presentation. Do the blue dresses actually look wrong to you, or do you just mean that they are wrong in that we know they are white and so some theory which neither of us agrees with dictates that we should neutralize the cast?
I don't believe we should always make known neutrals neutral or all flesh more yellow than magenta (as this shot shows.) But I think it's important (at least for me) to know when this isn't true and understand why not. Particularly flesh, vegetation, skies, hair, and fur need either to measure reasonable colors or I need to know why not. In my experience, it's the very rare shot which can stand to have these things wrong everywhere. (In this shot the flesh measures roughly right in at least some places and that was what I used as a guide. In the Giselle shot, everything is "wrong" BTN everywhere, but the light was very blue and very dark and the dancer is supposed to be dead.)
OK let me try again to see if I can make this clear.
Rest easy. It's quite clear. I'll reaffirm my respect for your knowledge and the world you inhabit.
The ICC paper spells this out, it was co written by a color scientist. If you want to take up the issue that accurate scene colorimetry doesn’t look correct output referred on the device (using this fellows examples), you two can debate this. And you can debate the earth is flat or the moon is made of cheese. Scientific analysis would prove you wrong.
No thank you. I have no interest in debating a color scientist about anything, possessing, as I do, a clear grasp of my knowledge and its limitations. Besides, that's not my world. It's your world and we don't live in the same place. My world is the world of deadlines and cranky clients. And my industry, my references to which you dismiss as not being to the point, has been, and remains, the place where the greatest effort has been made to translate the world of color labs and scientific papers into a workflow that will meet deadlines and satisfy cranky clients. Theoretical perfection notwithstanding, the results have been, and continue to be, checkered at best.
I've been your lab rat, Andrew. I've been there on the front lines while you guys have tested your theories and patiently explained to us "color operators" (not to be confused with color scientists, certainly), that the "theory is perfect, it's the implementation that's flawed," much like my old college professor used to say about Marxism. But however well or poorly the effort at color management is implemented, for me, the result remains the same: I'm standing at a light box studying an imperfect proof, trying to plot a strategy that will satisfy the client. That the imperfection of those proofs should be, theoretically, impossible, doesn't help me much when, despite an entire department dedicated to calibration and color management, they still come out wrong. The situation is exactly the same for a photographer studying an imperfect shot, trying to plot a strategy that will satisfy the client, even though, as is often the case with photographers, the client is themselves.
Which brings me to the real confusion that I have with your posts: why the continual, unrelenting condescension, scorn and ridicule you direct at Dan Margulis and his techniques?
Other than apparently defending Rutt (who needs no defending) or Dan (who does), what’s your point, what’s on your mind?
The point is that people read Dan's books and the quality of their images improves, they make money, clients go away happy. This is an immediate result, predictable and repeatable. When I'm plotting that strategy to please a client, I'm not going to call on a color scientist with a couple of profiles in his back pocket and a handful of calibration devices; I'm going to recall some basic techniques that I and color providers everywhere recognize as useful means to getting a desired result.
I don't doubt that there is a solid theoretical basis for considering this "beside the point." I'm sure if I study this thread again, in detail, I could begin to quote that theoretical basis. But in the world of deadlines and clients, that's all, well... beside the point.
There are two ways to slide through life: to believe everything or to doubt everything; both save us from thinking.
—Korzybski
The more I read what you are saying, the more it sounds like there is such a thing as "accurate" color and there is such a thing as "pleasing" color. Accuracy, as far as you are concerned, must be measurable by some instrument. What makes something pleasing, however, has a big fudge factor and is largely subjective.
In a nutshell yes, only because there's no measurable metric for pleasing color. There is for accurate color.
Because it is subjective, you think it's hopeless to give any advice about how someone can go about making a photograph more pleasing (I may be overstating, but this is the impression I get).
Yes, that's overstating it. Obviously there are many tools and techniques in raw converters and Photoshop to produce pleasing color, even using numbers. But numbers are not the holy grail, especially when based on an output color space like some CMYK device, UNLESS you know what those numbers produce, based on sending them to that device and examining the actual printed color. All this started in another post where I questioned using a set of CMYK values in an article for pinning skin tone and discussed how such numbers depend on a very specific printing condition.
The best you think anyone can do is say: use your eyes, and there can be no better or worse in the realm of what's pleasing because what's better can't be nailed down exactly by some instrument.
Use your eyes first, and use numbers as well when they provide useful analysis of the pixels based on the correct color space. I've said several times (and it's in print all over the web and in my book): using your eyes on a calibrated, profiled display in a color-managed application WITH the proper feedback of numbers gives you the best of both worlds.
In a nutshell yes, only because there's no measurable metric for pleasing color. There is for accurate color.
Yes, that's overstating it. Obviously there are many tools and techniques in raw converters and Photoshop to produce pleasing color, even using numbers. But numbers are not the holy grail, especially when based on an output color space like some CMYK device, UNLESS you know what those numbers produce, based on sending them to that device and examining the actual printed color. All this started in another post where I questioned using a set of CMYK values in an article for pinning skin tone and discussed how such numbers depend on a very specific printing condition.
Use your eyes first, and use numbers as well when they provide useful analysis of the pixels based on the correct color space. I've said several times (and it's in print all over the web and in my book): using your eyes on a calibrated, profiled display in a color-managed application WITH the proper feedback of numbers gives you the best of both worlds.
Andrew, many of the folks here on dgrin use the CMYK values from Baldy's blog (he's one of the co-founders/owners of SmugMug). These CMYK values are not abstract values but ones taken from prints ordered via SmugMug and printed via EZPrints. There is an ICC profile available so that one can preview the appearance of the image in Photoshop. The link for the ICC profile is about 3/4 of the way down the second link I listed above.
Use your eyes first, use numbers as well when they provide useful analysis of the pixels based on the correct color space. I've said several times (and its in print all over the web, in print and in my book), using your eyes on a calibrated profiled display in a color managed application WITH the proper feedback of numbers gives you the best of both worlds.
Is this a fair summary of your philosophy for photo improvement? Is there more? Does it inform some specific techniques we can learn to employ? Examples? (Illustrated step-by-step examples of techniques are always much appreciated here on dgrin.)
Oh, and it seems that you do think my use of LAB numbers to look for evidence of casts is appropriate ("proper" in your words.) True?
Andrew, many of the folks here on dgrin use the CMYK values from Baldy's blog (he's one of the co-founders/owners of SmugMug). These CMYK values are not abstract values but ones taken from prints ordered via SmugMug and printed via EZPrints. There is an ICC profile available so that one can preview the appearance of the image in Photoshop. The link for the ICC profile is about 3/4 of the way down the second link I listed above.
I just checked the ICC profile; it's RGB, made using the TC918 RGB target in ProfileMaker Pro (the product I use). So maybe the link is incorrect?
Why would you use an ICC RGB output profile for print, and a CMYK profile for numbers, when the two are not even close to being equal?
Is this a fair summary of your philosophy for photo improvement? Is there more? Does it inform some specific techniques we can learn to employ? Examples? (Illustrated step-by-step examples of techniques are always much appreciated here on dgrin.)
Oh, and it seems that you do think my use of LAB numbers to look for evidence of casts is appropriate ("proper" in your words.) True?
First, yes. But I'd quickly dismiss the numbers if I didn't care for the preview!
My philosophy, if you can call it that (which seems to put far too much emphasis on me), is based on work done since Photoshop 1.0 shipped, working with some pretty good photographers and experts (the top of my list would be Bruce Fraser, whom I knew from the early '90s), along with Jeff Schewe, Mac Holbert (Nash Editions), JP Caponigro, as well as regular work with the authors of Photoshop (Thomas Knoll, Mark Hamburg).
In a nutshell: capture as much data as possible (high bit, wide gamut). Edit in those conditions and send as much data as possible to an output device using good color management practices. Whenever possible, work on layers and leave the original data untouched. Do color and tone correction early on (in the scanner or raw converter), making the biggest corrections first in the order specified (ACR and LR have a recommended top-down, left-to-right processing order). Soft proof, edit if necessary based on that for the output device, and make as few proofs as possible to save time and money. Numbers are useful in some cases, early on, but color appearance is more useful since it lets you view color in context within an image.
Also, one problem I read on the color theory list is this idea that the raw module should be set to some flat default and that you should then use Photoshop to 'correct' the image. I think this is based on the mindset that if all you know how to use is a hammer, everything looks like a nail. With raw data, you're simply working with a very different kind of information than when using Photoshop on a rendered image. It's a really bad idea to think of Photoshop as a post-raw-conversion correction tool, for a lot of reasons (how the raw data is encoded; the fact that raw rendering is the only truly non-destructive means of editing, since the original data is untouched; the use of metadata instructions; the way the various tools are applied to linear-encoded data as opposed to gamma-corrected, baked color pixels). Photoshop is a fine pixel polisher. It's always been a 'one image at a time' process.
Lastly, test, test, test! Don't take anyone's word that a technique is necessarily better. A more complex process isn't necessarily better, but it often makes the user seem macho. There's the right way and the best way to process images. Sometimes the best way doesn't allow you to get the job done in time, even though the final data may be less pristine. For example, doing all kinds of multiple complex operations on 500 images of widgets on a white bkgnd might indeed produce slight quality benefits, but if you can only process 200 of them within your time budget, it doesn't really matter. So I handle production work a bit differently than personal portfolio work; I will handle 500 2x2 images going off to a catalog differently than a 30x40 print I plan to output on my 4800.
A perfect example is picking a rendering intent for output. I ask my students what the best rendering intent is to use for color conversions to print. Some say Perceptual, some say Relative Colorimetric. The right answer is the one that produces the color rendering that you prefer. Numbers can't help you here! ICC profiles don't know anything about images, only devices. So the best workflow would be to toggle the options using the Convert to Profile command or in the custom Soft Proof. If you have one image, no big deal; if you have 500, it takes far too long to look at each. Try one or two, pick one intent, build an action or other automation process to convert the other 499 images and be done.
Another example: for years, people have been told to convert a file to LAB to sharpen the L channel. Well, that takes time and causes more data loss than just using USM with the Fade-to-Luminosity option, which also gives you the opacity slider. Are they mathematically identical? No; one tosses more data in the process, but the end result in both cases is avoiding color fringing from the sharpening, thanks to the Luminosity blend mode. Better, Faster, Cheaper: pick any two.
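For readers who want to see what a luminosity-only sharpen actually does to the pixels, here is a minimal sketch in Python/NumPy. It only approximates Photoshop's USM + Fade-to-Luminosity (Photoshop's actual blend math differs); the function names and the Rec. 601 luma weights are my own choices, not anything stated in this thread:

```python
import numpy as np

def box_blur(channel, radius=1):
    """Simple box blur by averaging shifted copies (edge-padded)."""
    h, w = channel.shape
    pad = np.pad(channel, radius, mode="edge")
    k = 2 * radius + 1
    acc = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc / (k * k)

def sharpen_luminosity(rgb, amount=0.8, radius=1):
    """Unsharp mask applied to luma only, then added back equally to all
    three channels -- an approximation of sharpening faded to Luminosity,
    which avoids color fringing because chroma is left untouched."""
    rgb = rgb.astype(float)
    # Rec. 601 luma as a stand-in for Photoshop's luminosity
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    detail = luma - box_blur(luma, radius)   # high-frequency component
    out = rgb + amount * detail[..., None]   # same correction per channel
    return np.clip(out, 0, 255)
```

Because the same correction is added to R, G and B, the channel differences (and hence the color) at each pixel are unchanged before clipping; only lightness contrast increases at edges.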
When dealing with what makes a "good" skin tone, we are almost necessarily in an area where there is a huge fudge factor -- lots of room for disagreement. The reason for CMYK numbers is because people find them helpful and easy to understand.
I often use CMYK numbers (in the PS info palette) as a guide when editing skin tones in an RGB working space. I've tried to get some kind of easy handle on the RGB relationships that you could use as a rule of thumb for skin, and I just haven't been able to. My web searches don't reveal anyone else who has found these relationships either.
So the basic reason for using CMYK numbers is that it's easy, understandable, and it works, given the pretty broad leeway there is for skin in the first place. To get me to switch to some other process, I'd need some pretty clear proof that it a) got me better results and/or b) saved time. So far, I haven't been shown any such approach using RGB numbers for skin.
When dealing with what makes a "good" skin tone, we are almost necessarily in an area where there is a huge fudge factor -- lots of room for disagreement. The reason for CMYK numbers is because people find them helpful and easy to understand.
Just as many find them difficult to understand. A lot depends on where you come from and what you are initially taught. Most photographers I work with don't 'get' CMYK at all and understand RGB. And forget CMYK numbers in many common raw converters.
I often use CMYK numbers (in the PS info palette) as a guide when editing skin tones in an RGB working space. I've tried to get some kind of easy handle on the RGB relationships that you could use as a rule of thumb for skin, and I just haven't been able to.
Why not just reference the RGB numbers in the info palette when over skin you know produces a desired color appearance? Now it's based on the actual color space. Plus, once again, if for any reason the CMYK color settings are not as originally set, the values are off. But the RGB numbers are what the actual pixels represent, not some conversion from those numbers (my basic beef with using CMYK).
In a way, it's like having someone speak English, translating it into French, and then back into English. Why the translation?
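The "translation" point can be made concrete with a toy separation. This is not how a real ICC-based conversion works (those go through measured profiles); it's only the naive textbook RGB-to-CMYK formula with an adjustable black-generation knob, to show that the same RGB pixel reads as quite different CMYK numbers depending on the separation settings:

```python
def rgb_to_cmyk(r, g, b, black_gen=1.0):
    """Naive RGB -> CMYK separation (percentages). `black_gen` controls how
    much of the common gray component moves into the K plate (0 = none,
    1 = maximum). Real separations depend on the ICC profile and ink limits;
    this is only an illustration."""
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y) * black_gen
    if k >= 1.0:
        return (0.0, 0.0, 0.0, 100.0)
    scale = 1.0 - k
    return tuple(round(v * 100, 1)
                 for v in ((c - k) / scale, (m - k) / scale, (y - k) / scale, k))
```

For a typical light skin pixel like RGB (220, 180, 160), `black_gen=0.0` and `black_gen=1.0` yield visibly different C, M, Y and K percentages, even though the pixel never changed; that is the core objection to pinning targets to CMYK numbers without knowing the separation behind them.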
Why not just reference the RGB numbers in the info palette when over skin you know produces a desired color appearance? Now it's based on the actual color space.
To do this I would have to develop a series of samples. I'm not sure how many I would need, but let's say 5-6 basic skin types (at a minimum), with at least 4 different exposure ranges for each. So, let's say conservatively I end up with 20 samples. Now I would have to open the appropriate file for comparison. To me that sounds like a lot of extra time, although I could imagine getting quick at the process. Now, if someone showed me how this process regularly produced better results, then I would do it.
Right now, to see whether this got better results would involve lots of testing by me. In principle I am not opposed to doing that sort of testing, but my time can still be better spent learning other things, since I already have a system that's doing all right for skin tones for most pictures.
Because for skin tones CMYK and LAB both provide an understandable relationship that can provide an important guideline as to whether you're on target or not. I've never found an RGB equivalent. If you can provide me with one, I'd be ecstatic. But so far, only CMYK and LAB can do this. I know that Y should be a few points greater than M, and that C should be a fraction of either of them. Or, in LAB, I know that B should be more than A. They're not hard and fast rules, but it's a great aid in making sure that your skin tones are correct. And this cannot be replicated in RGB.
The difference is akin to what happened when people finally admitted that the earth circled the sun, instead of being the center of the universe. All calculations became easier and the heavens more understandable.
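As a sketch of the LAB rule of thumb described above (B greater than A, both positive for warm skin), here is the standard sRGB-to-CIELAB conversion plus a toy checker. The checker's threshold logic is my own loose reading of the rule, not a vetted formula, and as the posts say these are guidelines, not hard rules:

```python
import math

def srgb_to_lab(r, g, b):
    """sRGB (0-255) -> CIELAB under D65, using the standard formulas."""
    def lin(v):  # undo the sRGB transfer curve
        v /= 255.0
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> XYZ (D65)
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(X / 0.95047), f(Y / 1.0), f(Z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def plausible_skin(r, g, b):
    """Rule of thumb from the discussion: skin should measure warm,
    with b (yellow axis) greater than a (magenta axis), both positive."""
    _, a_val, b_val = srgb_to_lab(r, g, b)
    return 0 < a_val < b_val
```

A warm, light skin value such as RGB (224, 172, 138) lands around a* 15, b* 25 and passes; a neutral gray or a bluish patch fails, which is exactly the kind of at-a-glance cast check being discussed.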
I've got a few questions on your "philosophy" summary, which I thought was both clear and interesting.
You say to capture as much data as possible. One might conclude from this that more megapixels is always better, because more pixels means more data. It's pretty clear that that is not the case. So isn't the point to capture as much good data as possible? Larger pixels on the sensor may make for fewer numbers (less data), but for better image quality.
You also say to send as much data as possible to the output device. What is the point of sending an output device more data than it can use? If you know beforehand that the only place you will ever show your pictures is on the web at resolutions around 400x600, wouldn't that have a lot of implications for your work, and allow you to cut a lot of corners?
Why is it so important to work in layers and keep the original data clean on the bottom layer? As long as you save a backup copy of your original, then all of your original data is saved. When you go to a print, everything gets flattened anyway. So what is the problem with intermediate flattening (done judiciously), if it neither destroys your backup copy nor affects your final print? Is there something I don't understand about the value of layers?
You say to capture as much data as possible. One might conclude from this that more megapixels is always better, because more pixels means more data. It's pretty clear that that is not the case.
More doesn't necessarily mean better! I'd rather take a 13MP capture off a medium format back with, say, a Hasselblad lens than 16MP off a DSLR. I'd take a much smaller file off a true scanning back (Betterlight) than a single-shot device (assuming nothing is moving). But once you pick the capture device, keep and use all the data it provides. Reducing its gamut or bit depth is what I'm suggesting you avoid.
You also say to send as much data as possible to the output device. What is the point of sending an output device more data than it can use? If you know beforehand that the only place you will ever show your pictures is on the web at resolutions around 400x600, wouldn't that have a lot of implications for your work, and allow you to cut a lot of corners?
One is speed and the other is ease of use. At the Epson Print Academy, where we speak, we try to dispel the idea of downsizing your files for print. IF the data you wish to use falls within a range of 180-460 ppi, just send that to the driver. You save time, you're not making multiple versions, and you're sending all the data in the document to the print driver. Once you set up the document to be, say, 20x30, it will produce optimal quality (if you have enough data, which you should check) as long as you don't go over 460 ppi or under 180.
Some 3rd party print drivers, for example ImagePrint can use the full 16-bit data path out to the printer. Photoshop will sample the 16-bit document on the fly to 8-bit using Print with Preview (Print in CS3).
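The 180-460 ppi guidance above reduces to a one-line calculation. A sketch, with hypothetical function names (the range itself comes from the post; whether to resample outside it is the author's recommendation, not mine):

```python
def print_ppi(pixels_long, pixels_short, inches_long, inches_short):
    """Resolution the existing data yields at a given print size,
    with no resampling; the limiting (smaller) dimension governs."""
    return min(pixels_long / inches_long, pixels_short / inches_short)

def outside_sweet_spot(ppi, low=180.0, high=460.0):
    """True if the native ppi falls outside the 180-460 window cited above,
    i.e. the case where resampling might be worth considering."""
    return not (low <= ppi <= high)
```

For example, a 4000x3000 (12MP) file printed at 30x20 inches lands near 133 ppi (below the window), while the same file at 19.5x13 inches is about 205 ppi and can simply be sent to the driver as-is.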
Why is it so important to work in layers and keep the original data clean on the bottom layer?
With the exception of cloning out, say, dust, keeping the underlying data intact allows you more flexibility in editing. So the only edits I actually stamp onto the bkgnd layer are clones that I know I want forever. Otherwise, all work is done on adjustment or other layers.
As long as you save a backup copy of your original, then all of your original data is saved.
I prefer to keep a single master that has all the edits I need intact. It just makes file management easier, I always have the edits I can turn on or off based on what I wish to do with the data (where the data will be output).
When you go to a print, everything gets flattened anyway. So what is the problem with intermediate flattening (done judiciously), if it neither destroys your backup copy nor affects your final print? Is there something I don't understand about the value of layers?
Duffy
Correct, when you print, all the visible layers are in essence flattened. So much for non destructive editing <g>. And I will flatten some layers if I'm fully convinced that I want to stamp that data into some underlying data (but again, I keep the bkgnd layer alone).
From a practical standpoint, using one document with multiple layers or a dozen different documents with specific edits produces the same results, but the difference is having all the various edits in one place, with the ability to toggle them on and off based on your current needs, and a much easier file management route. Also, depending on the edits and the blend modes, you'll get differing effects with multiple layers as opposed to multiple documents. The key (one of Mac's best teaching practices) is a very thorough layer naming convention: everything must have a name. He recommends using Annotations as well, to provide non-printing instructions on what was done and why. If you have to revisit a file you worked on a year earlier, this can be really useful. It's also useful to have the various layers in one document when working with clients who want to see variations or want edits changed (not that a client would ever change their mind about what they want you to do ....).
Layers can, depending on their type, take up less storage space than having the edits stamped onto the pixels. A layer with lots of transparency will take up less space than one with lots of pixel data. Adjustment layers take up very little space; they are the closest we have in Photoshop to true metadata editing instructions.
Another example: for years, people have been told to convert a file to LAB to sharpen the L channel. Well, that takes time and causes more data loss than just using USM with the Fade-to-Luminosity option, which also gives you the opacity slider. Are they mathematically identical? No; one tosses more data in the process, but the end result in both cases is avoiding color fringing from the sharpening, thanks to the Luminosity blend mode. Better, Faster, Cheaper: pick any two.
I am still a bit of a beginner at this, but after reading Dan's "Photoshop LAB Color" book, I came to the conclusion that the real point of going to LAB is to have independent control over contrast and saturation. RGB curves are the commonly recommended way for beginners to increase saturation, but the standard S curve generally (though not universally) increases saturation as a by-product and also produces a bevy of subtle hue shifts. My impression is that the look of RGB curves is popular not because it is realistic, but rather because it mimics the behavior of high-contrast slide films like Velvia.
In the digital world, I don't always want a film look, so I tend to lean in the direction of separating out my contrast, saturation and hue decisions. In my normal workflow, I tend to sort out my hue issues first; for typical photographs that is done with a combination of white balancing and calibration in Lightroom/ACR. Once I have the hues where I want them, I go to Photoshop to increase contrast and saturation. I generally avoid doing major contrast adjustment in ACR or Lightroom because of color artifacts introduced by the RGB curves. Also, I prefer to settle on my final luminance curves after I have done any local contrast enhancement. A note here: sharpening, local contrast enhancement (HIROLAM sharpening in Dan's language), and luminance curves are all contrast moves in my book.
The question I am left with is should I handle my contrast and saturation moves in RGB or LAB?
In RGB I create a Luminance Blend layer for the contrast moves and a Hue/Saturation layer for the saturation moves.
In LAB, I use a Curves layer for my saturation and luminance curves. LCE and sharpening happen in the L channel of the background.
If I am not doing anything fancy, I can get essentially the same result either way. However, I am not sold that I lose less of the original data by staying in RGB because there is an implicit colorspace conversion to HSL inside the Hue/Saturation layer. I am also not convinced that staying in RGB is faster because adding a luminance layer for sharpening increases my file size and memory use (does this get better in CS3? I am still using CS2).
In practice my default answer has been that if I am going to adjust saturation I prefer LAB over RGB. However, if all I am doing is increasing contrast (usually when I am sharpening at my output resolution), I prefer to avoid the color space conversion and use a Luminance layer in RGB. That said, I have a set of color-space-specific tricks I use for particular image problems that will either force me to stay in RGB or force me to convert to LAB. Also, if I have a real tough nut of an image that I am going to fiddle with a lot, I tend to stick to RGB so I can leave all the layers in my ProPhotoRGB master rather than flattening them out when I do the colorspace conversion.
In the end, the sharpen-in-the-L-channel vs. sharpen-in-a-luminance-blend-layer debate seems to me to be making a mountain out of a molehill. I am happy to do either, and I make my choice primarily based on what else I am doing to an image.
Comments
OK let me try again to see if I can make this clear.
There’s only one way to really define accurate color, just as there’s only one way to define accurate distance, speed, etc.: you measure it. Anything else is someone’s interpretation of accuracy, and all we can do is agree or disagree. There’s no other way to say “but this is the measured color of what you placed in front of the camera.” If I measure the color, illuminant and dynamic range of a scene AND I send those numbers to your output device (a display), it not only doesn’t look like what you saw, it looks pretty ugly. That is due to what’s known as the reference medium: your display has a fixed gamut and contrast ratio and expects output-referred, not scene-referred colorimetry. The ICC paper spells this out; it was co-written by a color scientist. If you want to take up the issue that accurate scene colorimetry doesn’t look correct output-referred on the device (using this fellow’s examples), you two can debate this. And you can debate that the earth is flat or the moon is made of cheese. Scientific analysis would prove you wrong.
Pleasing color is the color on some output device that you, or I, or anyone likes. We can’t measure that and compare it to the scene to gauge accuracy, because the reference media are totally different, as are a host of other factors. You can say that this display or print is showing you the color exactly as you recall seeing it, but there’s no way to measure or gauge whether what you believe to be true is or isn’t. So thus far, the debate, if there is one, is that the term accurate color is best reserved for describing the color of the scene, and pleasing color is best reserved for describing the numbers you produce when you get the color appearance you desire. If you want to call that accurate, OK, but I think it’s sloppy and it doesn’t give anyone any way to gauge whether this is the case or not. If you like huge fudge factors in describing what it is you’re doing with a big pile of numbers on a computer, so be it.
We send all kinds of different numbers to devices in an attempt to produce matching color appearance and thanks to metameric matches, this is achieved every day.
And your point is what in reference to the discussion?
Other than that what you say doesn’t surprise me in the least, again, what’s your point in the context of the current discussion? If you want to discuss the need for output-specific standards like G7 or TR001 in the print industry, I’m ready. If you want to admit that process control and press standards are not well implemented in the US, I’m with you, and I find your comments above about your proofing experiences in line with mine.
No, I didn’t say that; in fact I discussed where using numbers IS important and useful! But the question becomes: what numbers, based on what, and to accomplish what?
Is someone attacking you? Because again, your point above doesn’t make any sense in the context of the discussion.
First, on what basis do you call this a myth? Second, LAB, the color model Dan now loves over CMYK, IS based on human vision, and its numbers define color appearance based on human perception WHEN talking about single patches of color (NOT color in context, since there are no color appearance models at work here; if there were, the two patches in the optical illusion I illustrated would be accounted for). There do exist several device-independent, mathematically created spaces that define all colors humans can see. You can measure colors and produce these values in a very unambiguous way, just as you can use the speedometer in your car to gauge how fast you are moving. And I don’t know who has said this is an act of faith, other than you above. So again, most of your points are lost on me, because I think you are trying to defend someone else or some other belief system, but thus far you’re doing so with nothing other than emotion.
Other than apparently defending Rutt (who needs no defending) or Dan (who does), what’s your point, what’s on your mind?
Author "Color Management for Photographers"
http://www.digitaldog.net/
I guess I missed that. Measuring for neutrality? Or something else?
The measurements I make are not very fine at all. I mean that I don't pay that much attention to the exact numbers, only to some gross relationships. I find LAB measurements convenient because it's easy for me to see if something is green or blue instead of magenta or yellow (for example). Neutrality stands out to me because it's 0. I can tell if flesh is more magenta than yellow at a glance. No big deal. I'm sure I could learn to do this in RGB or CMYK or whatever. It just happens that I find the LAB numbers easy to use. So that is the answer to the question of what numbers.
As to what I want to accomplish with these numbers, it's detecting casts which when corrected will represent an improvement. I have some theories about when this is, but as I said before, this thread is about your ideas, not mine or Dan's or anyone else's (at least that's my goal.) I've learned that by doing this kind of measuring I can often detect the possibility of such an improvement when I wouldn't have done so otherwise.
So for me this is the single biggest question for you. Do you never do this and just trust your eyes and good color management all the time? If so, did you have to train yourself somehow to see what I don't seem to be able to see? Do you think I can learn it?
Neutrality, black and white clipping, saturation clipping of all colors.
A cousin of LAB, and a much easier scale to use, is LCH, which I wish Adobe would add to the info palette. Chroma is where you'd look for neutrality (zero is neutral; the higher the number, the more saturated). It's the a* and b* portions of LAB that are not at all intuitive; with LCH, color is defined using H for hue (zero to 360, which makes sense when you look at a hue wheel).
It is a lot easier to initially spot the degree of neutrality with a single value but again, in all RGB working spaces, neutral is R=G=B.
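The LCH relationship described above is simple to compute from the Lab values the info palette already shows: chroma is just the pixel's distance from the neutral axis, and hue is the angle around it. A minimal sketch (function name mine):

```python
import math

def lab_to_lch(L, a, b):
    """Convert Lab to LCH. C (chroma) is the distance from the neutral
    a=b=0 axis, so a single number answers 'how far from neutral?';
    H is the hue angle in degrees (0-360)."""
    C = math.hypot(a, b)
    H = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, H
```

A gray patch (a=0, b=0) gives C=0 regardless of L, which is exactly the single-value neutrality check being asked for; a cast shows up immediately as nonzero C, with H telling you its direction.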
The method you're using is sound; the color space isn't, really (my original point in the other forum post that started all this). CMYK is a very specific output color space; the numbers can change vastly from one space to another. LAB makes MUCH more sense in this respect: it's based on human color vision and it's self-defining, unlike RGB (with RGB you can't provide numbers without an associated color space, and the same is true of CMYK).
I use numbers, yes, but I use what I see first and foremost because, again, you're really just getting numeric values of one color or a group of solid colors, and very often you're not seeing those colors in context with the rest of the image, which is important. So, knowing the values at which I clip shadows and highlights is useful; I don't always want to clip that data (sometimes I do). Knowing that a color is maxed out in saturation is useful. If I'm looking at skin tone, I want to see that part of the image in context with all the thousands of other similar and dissimilar colored pixels.
Ultimately the proof is in the print, but I'd like to see that color appearance on screen before I click the Print button. If a lot of my work were solid colors (pie charts), working BTN (by the numbers) would be far more effective, but as the example of the cement and building shot at night shows, simply color correcting because you've produced a neutral isn't effective.
Try this: use the Set Neutral (WB) eyedroppers in ACR, LR or Photoshop. Ever hunted around, clicking, and found you didn't like the overall color? Yet every time you clicked, you did produce a neutral. The key is having an image where the neutrals should indeed be neutral (that's not going to fly on a sunset image). So it's not just clicking on the right pixels to define a neutral, it's looking at the entire image and deciding if the correction works for you or doesn't (again, neutral or cast, neither is accurate, but the right answer IS pleasing).
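The eyedropper's arithmetic is simple; what the post stresses is the judgment about where to click. A sketch of gray-point scaling in Python, assuming linear RGB for simplicity (real tools work in more careful spaces):

```python
def neutralize(image, clicked):
    """Gray-point white balance in spirit: scale each channel so the
    clicked pixel comes out neutral (R = G = B). Simplified sketch."""
    r0, g0, b0 = clicked
    target = (r0 + g0 + b0) / 3.0
    gains = (target / r0, target / g0, target / b0)
    return [tuple(round(min(255.0, c * k), 1) for c, k in zip(px, gains))
            for px in image]

# Click a bluish "white" dress pixel: that pixel becomes neutral...
img = [(180.0, 190.0, 230.0), (60.0, 50.0, 40.0)]
print(neutralize(img, img[0])[0])   # -> (200.0, 200.0, 200.0)
# ...but every other pixel is pushed warmer, which the eye may reject:
print(neutralize(img, img[0])[1])
```

Every click really does produce a neutral at the clicked pixel; whether the shifted rendering of everything else pleases you is the part no formula answers.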
Author "Color Management for Photographers"
http://www.digitaldog.net/
Actually, for the Lab = 0,0,128, Dan uses the term "imaginary," instead of "impossible." For Dan, and I think for what you are talking about here, an impossible color is one that can be displayed, but does not actually occur anywhere under white light: examples are blue fur, green skin, purple grass, yellow skies, red Union army uniforms, etc... In this sense, impossible is something that falls on the extreme end of implausibility.
Duffy
Imaginary, impossible, what's he trying to say here?
Displayed can be output; the gamut of our displays is roughly sRGB in size, pretty small. And what output device is he referring to? CMYK SWOP ink on paper? Every output device on the planet? Implausible based on whose criteria? It's a silly term he made up that doesn't mean anything.
I have seen images of blue fur and green skin (how about those crazy kids at Football games or someone on stage?). The entire concept is asinine.
I've been standing on the sidelines here, but this is just malarkey. The concept of colors that don't occur naturally in nature (we're not talking about crazy kids with green paint on their faces, we're talking about natural skin color or natural fur color) is a real and useful concept to understand and know how to use. I personally find it useful; it can definitely help identify subtle color casts in many contexts and give you more specific direction for how to remove them.
You can say you don't personally find it useful, but many do, so calling the entire concept asinine is just showing how irrational you are being and these extreme positions about anything Margulis says are distracting from the times when you do have a useful point to make.
I know this concept is useful to me so I'm not going to bother to argue that with you. If you want to offer something better to replace it with that isn't just looking at the image with my eyes, I am open to new and improved ways of doing things.
What's asinine is making up a term like impossible color when in fact the color is real, and can be captured, measured, defined numerically and output. Lots of these colors are natural. What's asinine is making up a term that doesn't mean anything and isn't at all well defined.
Think about the term: impossible color. Well, there are wavelengths of light that fall outside human vision. I can't see anything beyond 700nm or below, say, 400nm, so OK, those are invisible, impossible colors that maybe a dog or fish can see. Again, it's real easy to make up this stuff so you sound important, but explain what you're saying here. A color I CAN capture is impossible? Prove it. And what makes the term (which really sounds totally bogus to me) important?
Wow, let's be clear: what's asinine is the term as described to me. I didn't say the entire concept is asinine.
OK, so define an impossible color that can be represented using our 24-bit systems, and explain how calling this color impossible is useful.
http://www.peteturner.com/Classics/index.html
At what numeric value (that's all Photoshop and computers can work with) does an impossible color become possible?
How many impossible colors can dance on the head of a pin?
How does making up terms that have such wiggle room help us?
You may not like the term. But many, many people both understand what he is talking about and find it useful. As such, the idea behind the term is both meaningful and useful (to many), no matter how asinine you think it sounds. (BTW, I think the term "output-referred" sounds at least as awful, but since I understand what you mean by it, I won't object too hard to your use of it, even though I would never ever use the term myself.)
I also have a hard time believing that you don't understand the idea behind the term. If someone shows you a picture of a strawberry, and it's stronger in the A channel than the B channel, that's a clue that there's something wrong. If it shows negative in the B channel, then something is even further amiss. Strawberries aren't purple. I realize that's not a scientific statement, but I think it's meaningful and something that most people can understand and use.
Now go to Rutt's question. What happens when many people's eyes don't spot these anomalies? That does happen -- I've seen it again and again on various forums. People think colors look just fine. Then someone plays with the pictures, finds that something was off, and fixes it based on the discovery. It's not uncommon for everyone to then agree that the picture is now better. With your approach, how would you make this sort of improvement?
Duffy
Honestly, the blues to the left of center are bluer than any sky I have ever seen. I'm not sure I would say that they are impossible (in Dan's sense) but they are certainly implausible. I look at a picture like that as having highly exaggerated colors, at least in the skies. More to the point, I think people might agree or disagree with me, but it would be a rare bird who would say "exaggerated colors? I have absolutely no idea what you are talking about."
Duffy
I must not have been clear. I use LAB not CMYK for this.
Yes, I do this all the time to try to find a good starting point. For the ballet shots, I can even talk to the lighting director to find out the color temperature of the lights he uses. But these days I find that I can do more with flesh tones, skies, snow, fur, water, &etc than with neutrals. When there really are neutrals, it's often the best clue, but if it leads to bad skin tones, then I won't use it.
The second ballet shot (the one with many dancers) is a good example. The dresses are white but the light is very blue. Neutralize the white dresses and the flesh tones are very wrong. Look at it carefully and you'll see the blue light on the faces. What I did was to get some places on the flesh that had A<=B in LAB and the rest looked right. You could see the blue light but also see that the faces were healthy flesh tones. The dresses ended up blue. Looked like what I saw. But I don't think that I (as opposed to someone else) could have done this without measuring and thinking about how it was all supposed to work together. Much more complicated than just setting the WB to make the known neutral dresses neutral.
Is this very different from what you would have done?
I've decided that I too should make up terms about color!
Universal color. This is color contained within our universe. Note that you should find out the exact boundaries of our universe as being off a light year here or there doesn't cut it.
Half color. This is color you see when only one eye is open. Try alternating your open and closed eye, do the colors look different?
Vegetable color. Everyone knows what tomato red looks like, right? Or avocado green. Little reason to measure the color or use a color space and numbers; when we use a vegetable color, everyone knows what we're looking at and seeing.
I hope you find my new terms useful too in describing color!
It doesn't have a nice ring to it, I'd agree. However, ask any color scientist or engineer working in imaging if they know the term and they will. Please Google "output referred", then "impossible color", and tell me what you find and where it's referenced. The ICC, a body of manufacturers and color scientists, uses the term all over their site when discussing the concepts I've attempted to explain here.
I'm not a big fan of Perceptual Rendering, but that's the name for this gamut compression algorithm. I don't really care for Unsharp Mask either; so many people are confused by the Unsharp part, but the term has roots in analog photography and is used as such.
Why don't I just look at it? And what makes this color impossible? Clearly it IS possible. Why not say it's got a color cast, or that it doesn't look like a strawberry?
No it's not, and worse, it's got an enormous fudge factor since it's totally undefined. So which LAB values automatically ensure the color is possible, and which make it not possible?
Is this like the tree falling in the forest when no one is around?
Seriously. You're suggesting that some users see their images and don't find a problem but others do, fix it, and the original user is amazed. OK, that's useful I guess if we want to edit colors by committee. But again, it doesn't tell us very much about the user. What system did they have? Was it calibrated? Did they look at the image and like it, only to find an edit done elsewhere was more pleasing? Was the user happy with every edit? Did it match the original?
Nothing you said here is something I'd disagree with. Clearly people find they prefer the color appearance of some images after others mess with them.
I would submit that the first thing to do IS teach them numbers (highlight, shadow, neutrality if necessary and desired), along with a display that produces a reasonably correct preview of the numbers.
I actually submitted an image I shot to the Color Theory list and had others mess with it. But this wasn't a simple bride in a dress or something illuminated under standard-appearing conditions. I submitted the image because it's all about interpreting the image, much like the work of Pete Turner (by all means, lay the info palette over his stuff). And to be honest, I didn't like any of the renderings (all done from raw) as much as mine, simply because this was a very unusual image for which there are many possible color options (if you want to see the shot, it's the fourth on this web gallery: http://digitaldog.net/ARsAmazonPicks/ ). Now some of the renderings were quite interesting, and obviously the person making the renderings preferred them.
There's a LOT of interpretation in this game. Numbers only work so well in so many cases. But I'm not dismissing their use. I do question using a CMYK output color space for generating the numbers. I do question making up terms for the club which, outside it, make no practical sense. I mean, impossible colors? How about wrong? No, it doesn't sound sexy but it makes sense. I do question using the term accurate to describe something that can't be measured, so this so-called accuracy can't be verified. You say this color is accurate, I say it isn't. How do we settle this debate? Well, we could actually measure the darn colors.
IF the color numbers you have produce the color on output you desire, OK, that's accurate to a degree, but we can't measure this, because I can assure you, the color numbers that produced the print you like are absolutely not the same colors you had in front of your camera.
Semantics? Maybe, but good teachers attempt to define terms that have little if any wiggle room should someone question what on earth they are trying to teach. Making up terms like Vegetable color might actually get some attention my way, but it's not worth it, because the term, as I'm sure most of you would agree, is silly.
To end this (I really do want to go out and enjoy the 4th) let me leave you with an old Chinese proverb, which says: The first step towards genius is calling things by their proper name.
Not really. The interesting bit is when you say you neutralized the dress and the skin was wrong. Wrong visually and/or numerically? IOW, if all is well, both back up your opinion of this rendering. It also shows that BTN (using, in this case, the neutral color of the dress) is just dead wrong. But the dress is white! But it looks wrong.
The premise behind trying to make a white dress shot under a blue light be white by the numbers is rooted in trying to create an accurate reproduction of the dress. Is that really what you want to achieve? Trying to get an accurate reproduction of a scene shot under anything other than very simple black body spectrum lights is fraught with peril. The real interaction between light and materials requires modelling the full spectrum, not just the R, G and B channels you capture in a camera. As a result, the color shifts under gelled lights are likely not to be simply represented in terms of any of the common methods for adjusting images.
In my personal experience, I find correcting by the numbers to be most useful when shooting with studio lights or in sunlight. Standard color balancing tools work really quite well there, because the lights are well enough behaved that a reasonably accurate reproduction is possible, and generally in those cases I find that it is easiest to create a pleasing image by starting with an accurate reproduction.
However, when shooting in mixed or complicated light, I often find that the best route to a pleasing image is to acknowledge the colored light and deliberately leave the cast in. A scene shot next to a small incandescent bulb can look better with an orange cast because the viewer knows the light is colored. Similarly, the viewer knows that the lights on stage are colored, so leaving a white dress blue is completely reasonable. Beyond that, by leaving the cast on the dress you give your viewers a hint about the color of the light, which can change their expectations about the colors in the rest of the image. So, on a stage lit with blue light, the skin tone that creates the most pleasing photograph may in fact have a negative LAB B channel, despite the fact that that would be completely wrong if you were trying to create a reproduction.
Is this a fair summary, or have I misrepresented your views in some way, or do I fail to appreciate some subtlety? (Keep in mind that nothing I've said has anything to do with gamuts, color spaces, calibration, profiles, or anything else that is technical. It could apply as well to painting as to photography.)
One other thing: I prefer "A rose by any other name would smell as sweet." to your Chinese proverb. And I like "A rose is a rose is a rose." better than either.
Duffy
Neutralize the white gauze of the dresses and the flesh is way too yellow. Both BTN and to anyone's eye, I'm pretty sure. It's not at all subtle.
This is an unusual shot, even for theater light. Usually the lighting director doesn't want an obvious cast like this any more than a portrait or wedding photographer. But this is a famous ballet and the blue light has come to be a part of its presentation. Do the blue dresses actually look wrong to you, or do you just mean that they are wrong in that we know they are white and so some theory which neither of us agrees with dictates that we should neutralize the cast?
I don't believe we should always make known neutrals neutral or all flesh more yellow than magenta (as this shot shows.) But I think it's important (at least for me) to know when this isn't true and understand why not. Particularly flesh, vegetation, skies, hair, and fur need either to measure reasonable colors or I need to know why not. In my experience, it's the very rare shot which can stand to have these things wrong everywhere. (In this shot the flesh measures roughly right in at least some places and that was what I used as a guide. In the Giselle shot, everything is "wrong" BTN everywhere, but the light was very blue and very dark and the dancer is supposed to be dead.)
No thank you. I have no interest in debating a color scientist about anything, possessing, as I do, a clear grasp of my knowledge and its limitations. Besides, that's not my world. It's your world and we don't live in the same place. My world is the world of deadlines and cranky clients. And my industry, my references to which you dismiss as not being to the point, has been, and remains, the place where the greatest effort has been made to translate the world of color labs and scientific papers into a workflow that will meet deadlines and satisfy cranky clients. Theoretical perfection notwithstanding, the results have been, and continue to be, checkered at best.
I've been your lab rat, Andrew. I've been there on the front lines while you guys have tested your theories and patiently explained to us "color operators" (not to be confused with color scientists, certainly), that the "theory is perfect, it's the implementation that's flawed," much like my old college professor used to say about Marxism. But however well or poorly the effort at color management is implemented, for me, the result remains the same: I'm standing at a light box studying an imperfect proof, trying to plot a strategy that will satisfy the client. That the imperfection of those proofs should be, theoretically, impossible, doesn't help me much when, despite an entire department dedicated to calibration and color management, they still come out wrong. The situation is exactly the same for a photographer studying an imperfect shot, trying to plot a strategy that will satisfy the client, even though, as is often the case with photographers, the client is themselves.
Which brings me to the real confusion that I have with your posts: why the continual, unrelenting condescension, scorn and ridicule you direct at Dan Margulis and his techniques?
The point is that people read Dan's books and the quality of their images improves, they make money, clients go away happy. This is an immediate result, predictable and repeatable. When I'm plotting that strategy to please a client, I'm not going to call on a color scientist with a couple of profiles in his back pocket and a handful of calibration devices; I'm going to recall some basic techniques that I and color providers everywhere recognize as useful means to getting a desired result.
I don't doubt that there is a solid theoretical basis for considering this "beside the point." I'm sure if I study this thread again, in detail, I could begin to quote that theoretical basis. But in the world of deadlines and clients, that's all, well... beside the point.
—Korzybski
In a nutshell, yes, only because there's no measurable metric for pleasing color. There is for accurate color.
Yes, that's overstating it. Obviously there are many tools and techniques in raw converters and Photoshop to produce pleasing color, even using numbers. But numbers are not the holy grail, especially when based on an output color space like some CMYK device, UNLESS you've verified them by sending numbers to that device and examining the actual printed color. The origin of all this started in another post where I questioned using a set of CMYK values in an article for pinning down skin tone, and discussed the basis of such numbers on a very specific printing condition.
Use your eyes first; use numbers as well when they provide useful analysis of the pixels based on the correct color space. I've said several times (and it's all over the web, in print and in my book): using your eyes on a calibrated, profiled display in a color managed application, WITH the proper feedback of numbers, gives you the best of both worlds.
Moderator of the Technique Forum and Finishing School on Dgrin
Is this a fair summary of your philosophy for photo improvement? Is there more? Does it inform some specific techniques we can learn to employ? Examples? (Illustrated step-by-step examples of techniques are always much appreciated here on digrin.)
Oh, and it seems that you do think my use of LAB numbers to look for evidence of casts is appropriate ("proper" in your words.) True?
I just checked the ICC profile: it's RGB, made using the TC918 RGB target in ProfileMaker Pro (the product I use). So maybe the link is incorrect?
Why would you use an ICC RGB output profile for print, and use a CMYK profile for numbers, when the two are not even close to being equal?
The CMYK part is still quite odd to my thinking.
First, yes. But I'd quickly dismiss the numbers if I didn't care for the preview!
My philosophy, if you can call it that (which seems to put far too much emphasis on me), is based on work done since Photoshop 1.0 shipped, working with some pretty good photographers and experts (at the top of my list would be Bruce Fraser, whom I knew from the early '90s), along with Jeff Schewe, Mac Holbert (Nash Editions), JP Caponigro, as well as regular work with the authors of Photoshop (Thomas Knoll, Mark Hamburg).
In a nutshell: capture as much data as possible (high bit, wide gamut). Edit in such conditions and send as much data as possible to an output device using good color management practices. Whenever possible, work on layers and leave the original data untouched. Produce color and tone corrections early on (in the scanner or raw converter), doing the biggest corrections first, in the order specified when one is mandated (ACR and LR have a recommended top-down, left-to-right processing order). Soft proof, edit if necessary based on that for the output device, and make as few proofs as possible to save time and money. Numbers are useful in some cases early on, but color appearance is more useful, as it lets you view color in context within an image.
Also, one problem I read on the Color Theory list is this idea that the raw module should be set to some flat default, and that you should then use Photoshop to 'correct' the image. I think this is based on the mindset that if all you know how to use is a hammer, everything looks like a nail. With raw data, you're simply working with a very different kind of information than when using Photoshop on a rendered image. It's a really bad idea to think of Photoshop as a post-raw-conversion correction tool, for a lot of reasons (how the raw data is encoded; the fact that raw rendering is the only true non-destructive means of editing, since the original data is untouched; the use of metadata instructions; the way the various tools are applied to linear encoded data as opposed to gamma corrected, baked color pixels). Photoshop is a fine pixel polisher. It's always been a 'one image at a time' process.
Lastly, test, test, test! Don't take anyone's word that a technique is necessarily better. A more complex process isn't necessarily better, but it often makes the user seem macho. There's the right way and the best way to process images. Sometimes the best way doesn't allow you to get the job done in time, even though the final data may be less pristine. For example, doing all kinds of complex operations on 500 images of widgets on a white background might indeed produce slight quality benefits, but if you can only process 200 of them within your time budget, it doesn't really matter. So I handle production work a bit differently than personal portfolio work; I will handle 500 2x2 images going off to a catalog differently than a 30x40 print I plan to output on my 4800.
A perfect example is picking a rendering intent for output. I ask my students what the best rendering intent is to use for color conversions to print. Some say Perceptual, some say Relative Colorimetric. The right answer is the one that produces the color rendering you prefer. Numbers can't help you here! ICC profiles don't know anything about images, only devices. So the best workflow is to toggle the options using the Convert to Profile command or in the custom Soft Proof. If you have one image, no big deal; if you have 500, it takes far too long to look at each. Try one or two, pick one intent, build an action or other automation process to convert the rest, and be done.
Another example: for years people have been told to convert a file to LAB to sharpen the L channel. Well, that takes time and causes more data loss than just using USM with the Fade Luminosity option, and you also have the opacity slider. Are they mathematically identical? No, one tosses more data in the process, but the end result is the same: avoiding color fringing from the sharpening, thanks to the Luminosity blend mode. Better, Faster, Cheaper: pick any two.
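For what USM itself does, here's a bare-bones sketch on a 1-D luminance trace (illustrative only; Photoshop's filter uses a radius-weighted Gaussian blur plus a threshold, not a box blur):

```python
def unsharp_mask(signal, radius=1, amount=1.0):
    """Unsharp mask in one dimension: blur the signal, then add back
    amount * (original - blurred). Applied to luminosity only (Fade >
    Luminosity, or a Luminosity-mode layer), the color is untouched,
    which is what avoids the fringing."""
    blurred = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        blurred.append(sum(signal[lo:hi]) / (hi - lo))  # box blur
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [50, 50, 50, 200, 200, 200]   # a soft luminance edge
print(unsharp_mask(edge))            # -> [50.0, 50.0, 0.0, 250.0, 200.0, 200.0]
```

The over- and undershoot either side of the edge is the sharpening "halo"; done on all three RGB channels independently it also shifts color at edges, which is exactly the fringing the Luminosity blend avoids.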
I often use CMYK numbers (in the PS info palette) as a guide when editing skin tones in an RGB working space. I've tried to get some kind of easy handle on the RGB relationships that you could use as a rule of thumb for skin, and I just haven't been able to. My web searches don't reveal anyone else who has found these relationships either.
So the basic reason for using CMYK numbers is that it's easy, understandable, and it works, given the pretty broad leeway there is for skin in the first place. To get me to switch to some other process, I'd need some pretty clear proof that it a) got me better results and/or b) saved time. So far, I haven't been shown any such approach using RGB numbers for skin.
Duffy
Just as many find them difficult to understand. A lot depends on where you come from and what you are initially taught. Most photographers I work with don't 'get' CMYK at all and understand RGB. And forget CMYK numbers in many common raw converters.
Why not just reference the RGB numbers in the info palette when you're over skin you know produces a desired color appearance? Now it's based on the actual color space. Plus, once again, if for any reason the CMYK color settings are not as originally set, the values are off. But the RGB numbers are what the actual pixels represent, not some conversion from those numbers (my basic beef with using CMYK).
In a way, it's like having someone speak English, translating it into French, and then back into English. Why the translation?
To do this I would have to develop a series of samples. I'm not sure how many I would need, but let's say 5-6 basic skin types (at a minimum), with at least 4 different exposure ranges for each. So, let's say conservatively I end up with 20 samples. Now I would have to open the appropriate file for comparison. To me that sounds like a lot of extra time, although I could imagine getting quick at the process. Now, if someone showed me how this process regularly produced better results, then I would do it.
Right now, seeing whether this got better results would involve lots of testing by me. In principle I am not opposed to doing that sort of testing, but my time can still be better spent learning other things, since I already have a system that's doing all right for skin tones for most pictures.
Duffy
Because for skin tones, CMYK and LAB both provide an understandable relationship that can serve as an important guideline as to whether you're on target or not. I've never found an RGB equivalent. If you can provide me with one, I'd be ecstatic. But so far, only CMYK and LAB can do this. I know that Y should be a few points greater than M, and that C should be a fraction of either of them. Or, in LAB, I know that B should be more than A. They're not hard and fast rules, but they're a great aid in making sure that your skin tones are correct. And this cannot be replicated in RGB.
The difference is akin to what happened when people finally admitted that the earth circled the sun, instead of being the center of the universe. All calculations became easier and the heavens more understandable.
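Those rules of thumb are easy to check in code. A Python sketch of both versions; note the cyan "fraction" is pinned at one half purely as my placeholder, since the post deliberately leaves it loose:

```python
def skin_plausible_cmyk(c, m, y):
    """The CMYK rule of thumb from the post: yellow a few points above
    magenta, cyan well below either. The 0.5 fraction is a placeholder
    assumption, not a number from the thread."""
    return y > m and c < 0.5 * min(m, y)

def skin_plausible_lab(L, a, b):
    """The LAB version: b* (yellow) above a* (magenta), both positive."""
    return b > a > 0

print(skin_plausible_cmyk(8, 35, 42))    # -> True
print(skin_plausible_cmyk(20, 45, 40))   # -> False: magenta over yellow
print(skin_plausible_lab(65, 18, 24))    # -> True
```

As the post says, these are guidelines, not hard rules; a heuristic like this flags candidates for a second look, it doesn't declare a tone wrong.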
You say to capture as much data as possible. One might conclude from this that more megapixels is always better, because more pixels means more data. It's pretty clear that that is not the case. So isn't the point to capture as much good data as possible? Larger pixels on the sensor may make for fewer numbers (less data), but for better image quality.
You also say to send as much data as possible to the output device. What is the point of sending an output device more data than it can use? If you know beforehand that the only place you will ever show your pictures is on the web at resolutions around 400x600, wouldn't that have a lot of implications for your work and allow you to cut a lot of corners?
Why is it so important to work in layers and keep the original data clean on the bottom layer? As long as you save a backup copy of your original, then all of your original data is saved. When you go to print, everything gets flattened anyway. So what is the problem with intermediate flattening (done judiciously), if it neither destroys your backup copy nor affects your final print? Is there something I don't understand about the value of layers?
Duffy
More doesn't necessarily mean better! I'd take a 13MP capture off a medium format back with, say, a Hasselblad lens over a 16MP capture off a DSLR. I'd take a much smaller file off a true scanning back (Betterlight) than a single shot device (assuming nothing is moving). But once you pick the capture device, keep and use all the data it provides. Reducing its gamut or bit depth is what I'm suggesting you avoid.
One reason is speed and the other is ease of use. At the Epson Print Academy, where we speak, we try to dispel the idea of downsizing your files for print. IF the data you wish to use falls within a range of 180-460ppi, just send that to the driver. You save time, you're not making multiple versions, and you're sending all the data in the document to the print driver. Once you set up the document to be, say, 20x30, it will produce optimal quality (if you have enough data, which you should check) as long as you don't go over 460ppi or below 180ppi.
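The arithmetic behind that 180-460ppi range is just pixels divided by inches. A quick Python check (the camera numbers are made up for illustration):

```python
def print_ppi(px_wide, px_high, in_wide, in_high):
    """Resolution the print driver receives if the file is sent
    as-is at the chosen print size (the limiting dimension wins)."""
    return min(px_wide / in_wide, px_high / in_high)

# A hypothetical 4992 x 3328 capture printed at 30 x 20 inches:
ppi = print_ppi(4992, 3328, 30, 20)
print(round(ppi, 1))          # -> 166.4, below the 180ppi floor,
print(180 <= ppi <= 460)      # -> False: you'd want more data here
```

If the result lands inside the range, the advice above is to send the file untouched rather than resample it.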
Some 3rd party print drivers, for example ImagePrint can use the full 16-bit data path out to the printer. Photoshop will sample the 16-bit document on the fly to 8-bit using Print with Preview (Print in CS3).
With the exception of cloning say dust, keeping the underlying data intact allows you more flexibility in editing. So the only edits I actually stamp on the bkgnd layer are clones that I know I want forever. Otherwise, all work is done on adjustment or other layers.
I prefer to keep a single master that has all the edits I need intact. It just makes file management easier, I always have the edits I can turn on or off based on what I wish to do with the data (where the data will be output).
Correct, when you print, all the visible layers are in essence flattened. So much for non-destructive editing <g>. And I will flatten some layers if I'm fully convinced that I want to stamp that data into some underlying data (but again, I leave the bkgnd layer alone).
From a practical standpoint, using one document with multiple layers or a dozen different documents with specific edits provides the same results, but the difference is having all the various edits in one place, with the ability to toggle them on and off based on your current needs, and a much easier file management route. Also, depending on the edits and the blend modes, you'll get differing effects with multiple layers as opposed to multiple documents. The key (one of Mac's best teaching practices) is a very thorough layer naming convention. Everything must have a name. He recommends using Annotations as well to provide non-printing instructions on what was done and why. If you have to revisit a file you worked on a year earlier, this can really be useful. It's also useful to have the various layers in one document when working with clients who want to see variations or want edits changed (not that a client would ever change their mind about what they want you to do ....).
Layers can, depending on their type, take up less storage space than having the edits stamped onto the pixels. A layer with lots of transparency will take up less space than one with lots of pixel data. Adjustment layers take up very little space; they are the closest we have in Photoshop to true metadata editing instructions.
Author "Color Management for Photographers"
http://www.digitaldog.net/
I am still a bit of a beginner at this, but after reading Dan's "Photoshop LAB Color" book, I came to the conclusion that the real point behind going to LAB is to have independent control over contrast and saturation. RGB curves are the commonly recommended way for beginners to increase contrast, but the standard S curve has the byproduct of generally (but not universally) increasing saturation, and it also introduces a bevy of subtle hue shifts. My impression is that the look of RGB curves is popular not because it is realistic, but rather because it mimics the behavior of high-contrast slide films like Velvia.
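That byproduct is easy to demonstrate numerically. Here's a quick Python sketch; the sample color and the smoothstep curve are just my stand-ins for a real Curves adjustment applied identically to all three channels:

```python
import colorsys

# A generic S-curve on normalized 0-1 values, standing in for an RGB
# Curves adjustment applied the same way to R, G, and B.
def s_curve(x):
    return 3 * x * x - 2 * x ** 3

before = (0.6, 0.4, 0.3)                   # a muted warm tone
after = tuple(s_curve(c) for c in before)

h0, s0, _ = colorsys.rgb_to_hsv(*before)
h1, s1, _ = colorsys.rgb_to_hsv(*after)
print(f"saturation {s0:.3f} -> {s1:.3f}")  # saturation 0.500 -> 0.667
print(f"hue        {h0:.4f} -> {h1:.4f}")  # a small hue shift rides along
```

The contrast move raises saturation by a third and nudges the hue, which is exactly the coupling I mean.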
In the digital world, I don't always want a film look, so I tend to lean in the direction of separating out my contrast, saturation and hue decisions. In my normal workflow, I tend to sort out my hue issues first; for typical photographs that is done with a combination of white balancing and calibration in Lightroom/ACR. Once I have the hues where I want them I go to Photoshop to increase contrast and saturation. I generally avoid doing major contrast adjustment in ACR or Lightroom because of color artifacts introduced by the RGB curves. Also, I prefer to settle on my final luminance curves after I have done any local contrast enhancement. A note here: sharpening, local contrast enhancement (HIRALOAM sharpening in Dan's language), and luminance curves are all contrast moves in my book.
The question I am left with is should I handle my contrast and saturation moves in RGB or LAB?
In RGB I create a Luminance Blend layer for the contrast moves and a Hue/Saturation layer for the saturation moves.
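For the curious, here's a rough Python sketch of what a luminosity blend does conceptually; Photoshop's actual blend math differs, and this uses HLS from the standard library just to show the separation of concerns:

```python
import colorsys

# Keep the base layer's hue and saturation; take lightness from the
# contrast-adjusted layer. A conceptual stand-in for a Luminosity blend.
def luminosity_blend(base_rgb, blend_rgb):
    h_base, _, s_base = colorsys.rgb_to_hls(*base_rgb)
    _, l_blend, _ = colorsys.rgb_to_hls(*blend_rgb)
    return colorsys.hls_to_rgb(h_base, l_blend, s_base)

base = (0.6, 0.4, 0.3)
contrasted = tuple(min(1.0, c * 1.3) for c in base)  # crude contrast move
print(luminosity_blend(base, contrasted))
```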
In LAB, I use a Curves layer for my saturation and luminance curves. LCE and sharpening happen in the L channel of the background.
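In LAB the independence is structural: a steeper curve through zero on the a and b channels raises saturation while L is untouched. A toy Python sketch, where the sample Lab values and the clipping range are my own:

```python
# Steepen the a/b curves around 0: a simple linear gain, clipped to the
# nominal a*/b* range. L is never touched, so luminance is unchanged.
def steepen(ab, slope=1.3):
    return max(-128.0, min(127.0, ab * slope))

L, a, b = 62.0, 18.0, 25.0                 # a hypothetical Lab color
boosted = (L, round(steepen(a), 1), round(steepen(b), 1))
print(boosted)                             # (62.0, 23.4, 32.5) -- L unchanged
```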
If I am not doing anything fancy, I can get essentially the same result either way. However, I am not sold that I lose less of the original data by staying in RGB, because there is an implicit colorspace conversion to HSL inside the Hue/Saturation layer. I am also not convinced that staying in RGB is faster, because adding a luminance layer for sharpening increases my file size and memory use (does this get better in CS3? I am still using CS2).
In practice my default answer has been that if I am going to adjust saturation I prefer LAB over RGB. However, if all I am doing is increasing contrast (usually when I am sharpening at my output resolution), I prefer to avoid the color space conversion and use a Luminance layer in RGB. That said, I have a set of color space specific tricks I use for particular image problems that will either force me to stay in RGB or force me to convert to LAB. Also, if I have a real tough nut of an image that I am going to fiddle with a lot, I tend to stick to RGB so I can leave all the layers in my ProPhotoRGB master rather than flattening them out when I do the colorspace conversion.
In the end, the sharpen-in-the-L-channel vs. sharpen-in-a-luminance-blend-layer debate seems to me to be making a mountain out of a molehill. I am happy to do either, and I make my choice primarily based on what else I am doing to an image.
Artifacts? Can you elaborate?
Author "Color Management for Photographers"
http://www.digitaldog.net/