
Raw files and Pixel per Inch Questions

RogersDA Registered Users Posts: 3,502 Major grins
edited December 26, 2010 in Finishing School
Sitting here in San Francisco having a major brain f*rt.

I have not used Adobe Bridge in years. But, I decided to look at some raw files today in Bridge CS3. Looking at the metadata for some raw files shows the following:
Pixel Dimensions --> 5616 x 3744
Resolution --> 72ppi
Bit Depth --> 8

I was under the impression that raw files had no bit depth, and certainly no pixel resolution.

Now, to get me smarter for the new year, when importing raw images via Bridge or Lightroom, what is the difference if I use 300 ppi or 782 ppi? I am still dealing with a total of 5616 x 3744 pixels once imported, right? The ppi setting is used solely to determine the print size, right?

So if I output from Lightroom a JPG file that is 5616 x 3744 pixels for uploading to SmugMug, what difference does it make if the file is processed in Lightroom at 300 ppi or 72 ppi?

Comments

    colourbox Registered Users Posts: 2,095 Major grins
    edited December 26, 2010
    Raw files have a bit depth. It is whatever the camera's processor saves: it used to be typically 12-bit, and now it's typically 14-bit or better. It gets upconverted to 16-bit when it enters your raw converter.

    The important thing about the resolution is that whether you enter 72 or 300, the dimensions, in pixels, are the same: 5616 x 3744. That means no information is lost or gained (unless you resample). You are merely changing the density of the pixels, and with resampling off, changing the resolution forces the printed size to change. You can try this out yourself in Photoshop's Image Size dialog. Turn off resampling first, then change the dpi. You will get:

    5616 x 3744 at 72dpi = 78 inches x 52 inches
    change 72dpi to 300dpi...
    5616 x 3744 at 300dpi = 18.72 inches by 12.48 inches
    Same info, same pixels; the only difference is that they're packed more tightly together.
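    To make the arithmetic concrete, here is a minimal sketch in Python; it is not tied to Photoshop or Lightroom, and the function name is just for illustration:

        # Printed size is simply pixel dimensions divided by ppi; the pixel count never changes.
        def print_size(width_px, height_px, ppi):
            return width_px / ppi, height_px / ppi

        for ppi in (72, 300):
            w_in, h_in = print_size(5616, 3744, ppi)
            print(f"5616 x 3744 at {ppi} ppi -> {w_in:.2f} x {h_in:.2f} inches")
        # 5616 x 3744 at 72 ppi -> 78.00 x 52.00 inches
        # 5616 x 3744 at 300 ppi -> 18.72 x 12.48 inches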
    BradfordBenn Registered Users Posts: 2,506 Major grins
    edited December 26, 2010
    I cannot comment on the resolution issue, but I do understand the bit depth issue. It is the same idea as in audio and most other digital systems. Bit depth defines how many different steps are available when sampling a value; it is a way of describing how finely each sample can be measured. A sample is a single slice of time (or, for an image, a single pixel), and the bit depth indicates how many possible steps there are for its value. Every pixel carries the same bit depth.

    So allow me to use a simple example. If we take the average distance from the Earth to the Moon (384,400 km or 238,855 miles) as the quantity we are trying to measure, the step size changes dramatically as the bit depth goes from 8 bit up to 32 bit, until it becomes vanishingly small. The fact that a RAW image typically has three color channels, all using the same bit depth, is going to be ignored for now; just assume that if we are doing this level of precision for one channel, we are doing it for all. There are some idiosyncrasies about the way the color gamut interacts with the samples that I am not qualified to comment on, which is why I am "glossing over" them for now.

    If we measure from the Earth to the Moon in 8 bit depth, it means that there are 256 steps of 1,501.56 km/933.03 miles available. As a result, the distance can only be resolved to within that step size; every sample can be thought of as accurate to about one step, roughly 1,501.56 km/933.03 miles.

    If we measure the distance in 12 bit depth, it means that there are 4,096 steps of 93.85 km/58.31 miles available.

    If we measure the distance in 14 bit depth, it means that there are 16,384 steps of 23.46 km/14.58 miles available.

    If we measure the distance in 16 bit depth, it means that there are 65,536 steps of 5.87 km/3.64 miles available.

    If we measure the distance in 24 bit depth, it means that there are 16,777,216 steps of 22.91 meters/75.17 feet available.

    If we measure the distance in 32 bit depth, it means that there are 4,294,967,296 steps of 0.09 meters/3.52 inches available.
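    As a cross-check of the numbers above, here is a small Python sketch; the Earth-Moon distance simply stands in for a sensor's full-scale range:

        EARTH_MOON_KM = 384_400      # average Earth-Moon distance, used as the full-scale range
        EARTH_MOON_MILES = 238_855

        for bits in (8, 12, 14, 16, 24, 32):
            steps = 2 ** bits                      # number of quantization levels
            step_km = EARTH_MOON_KM / steps        # size of one step
            step_miles = EARTH_MOON_MILES / steps
            print(f"{bits:2d} bit: {steps:>13,} steps of {step_km:.4f} km / {step_miles:.4f} miles")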

    So as you see, the steps get amazingly small. However, since that is just one dimension (distance) and not three colors, it actually multiplies quite quickly. There is the same sample accuracy for each color, and with three colors, 8 bit really means there are 256 values for Red, 256 for Green, and 256 for Blue. Ignoring some of the gamut and color-addition issues, three separate 8-bit channels give 16,777,216 total values, which is also thought of as 24 bit (8 times 3).
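    That channel arithmetic, as a two-line sketch:

        levels = 2 ** 8     # 8 bits per channel -> 256 levels
        print(levels ** 3)  # three channels (R, G, B) -> 16,777,216 combinations, i.e. "24-bit color"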

    I am sure I overlooked something but that is the theory as I understand it from the audio world.

    Now, as to why we don't do everything at a higher bit depth: at some point the reward stops being worth the effort. Every additional bit of resolution adds more storage and computational overhead, which is one of the reasons CDs are "only" 44.1 kHz sampled at 16 bit. That is still very demanding, to the point where self noise becomes the limiting factor. Also called Johnson–Nyquist noise, or thermal noise, it is the point where the circuit at rest makes more noise than the signal the circuit is trying to measure.
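    For the curious, that thermal-noise floor can be estimated with the standard Johnson–Nyquist formula, V_rms = sqrt(4 * k_B * T * R * bandwidth). The resistance and bandwidth below are illustrative assumptions, not figures from the post:

        import math

        k_B = 1.380649e-23    # Boltzmann constant, J/K
        T = 300.0             # room temperature, K
        R = 1_000.0           # assumed source resistance, ohms
        bandwidth = 22_050.0  # assumed audio bandwidth, Hz (half of CD's 44.1 kHz sample rate)

        v_rms = math.sqrt(4 * k_B * T * R * bandwidth)
        print(f"Thermal noise floor: {v_rms * 1e6:.2f} uV RMS")  # ~0.60 uV for these assumptions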

    Does that help?
    -=Bradford

    Pictures | Website | Blog | Twitter | Contact