Why no 3-chip cameras?

TylerW Registered Users Posts: 428 Major grins
edited July 12, 2006 in Cameras
Ok, so this question is largely based on my long-time experience with digital video, so if my understanding of digital camera sensors is dead wrong here, that's why. In all prosumer- to professional-grade video cameras, the image is recorded on three separate sensors after the light from the lens has been split into its three components (red, green, and blue). This affords much more accurate color separation than the diffraction grating used in single-chip cameras.

Yet digital still cameras all use just one sensor. Does anyone know why the three-chip format hasn't been adopted? (Other than that sensor cleaning would become an even bigger headache?)
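For those who haven't worked with video gear, here's a rough sketch of the difference I mean (just NumPy on a made-up 4x4 scene, not anything a real camera actually runs):

    import numpy as np

    scene = np.random.rand(4, 4, 3)           # idealized RGB light entering the camera

    # 3-chip video: a beam splitter sends each color to its own full-resolution
    # sensor, so all three color planes are captured completely.
    three_chip = [scene[:, :, c].copy() for c in range(3)]

    # Single chip: a color filter mosaic keeps only ONE color per photosite
    # (RGGB shown here), so two-thirds of the color data must be guessed later.
    bayer = np.zeros((4, 4))
    bayer[0::2, 0::2] = scene[0::2, 0::2, 0]  # red sites
    bayer[0::2, 1::2] = scene[0::2, 1::2, 1]  # green sites
    bayer[1::2, 0::2] = scene[1::2, 0::2, 1]  # green sites
    bayer[1::2, 1::2] = scene[1::2, 1::2, 2]  # blue sites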
http://www.tylerwinegarner.com

Canon 40d | Canon 17-40 f/4L | Tamron 28-75mm f/2.8 | Canon 50mm f/1.8 | Canon 70-200mm f/4 L

Comments

  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited July 12, 2006
    TylerW wrote:
    Ok, so this question is largely based on my long-time experience with digital video, so if my understanding of digital camera sensors is dead wrong here, that's why. In all prosumer- to professional-grade video cameras, the image is recorded on three separate sensors after the light from the lens has been split into its three components (red, green, and blue). This affords much more accurate color separation than the diffraction grating used in single-chip cameras.

    Yet digital still cameras all use just one sensor. Does anyone know why the three-chip format hasn't been adopted? (Other than that sensor cleaning would become an even bigger headache?)

    Foveon sensors are the closest thing to what you are thinking of, and some folks really like the special image qualities they produce. Unfortunately, the cameras that use them are fairly expensive, and most of their other operational and functional properties are behind the other major manufacturers'. They use a "layer" approach to maintain alignment within a single chip.

    The biggest manufacturing problem is alignment and calibration of the separate image arrays. In video chips, alignment is less of a problem: because of the reduced pixel count, there is more "room for error." Even so, the arrays/chips are attached to a mono-block so they can't change their relationship.

    With still cameras, the resolution is high enough that alignment is a rather severe problem. The "Bayer" chip and filter is a pretty refined process and a difficult act to follow.
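    A toy example of why the tolerance shrinks as resolution climbs (hypothetical numbers, one array off by a single photosite):

        import numpy as np

        # A sharp black-to-white edge across one row of a 1000-site sensor.
        red = np.zeros(1000);   red[500:] = 1.0    # perfectly registered array
        green = np.zeros(1000); green[501:] = 1.0  # same edge, array off by ONE site

        # Wherever the arrays disagree, the edge picks up a false color fringe.
        print(np.abs(red - green).sum())   # 1.0 -> a full-pixel-wide color error

        # The same physical offset in microns covers MORE photosites as pixel
        # pitch shrinks, so a high-res still sensor is far less forgiving than
        # a low-res video array.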

    ziggy53
    Moderator of the Cameras and Accessories forums
  • TylerW Registered Users Posts: 428 Major grins
    edited July 12, 2006
    Ziggy, you are consistently a font of knowledge. Thanks for the explanation.

    This makes sense, because our HD cameras only have one chip too. I guess they use the Bayer filter system as well, or more likely the Sony striped filter system.
    ziggy53 wrote:
    Foveon sensors are the closest thing to what you are thinking of, and some folks really like the special image qualities they produce. Unfortunately, the cameras that use them are fairly expensive, and most of their other operational and functional properties are behind the other major manufacturers'. They use a "layer" approach to maintain alignment within a single chip.

    The biggest manufacturing problem is alignment and calibration of the separate image arrays. In video chips, alignment is less of a problem: because of the reduced pixel count, there is more "room for error." Even so, the arrays/chips are attached to a mono-block so they can't change their relationship.

    With still cameras, the resolution is high enough that alignment is a rather severe problem. The "Bayer" chip and filter is a pretty refined process and a difficult act to follow.

    ziggy53
    http://www.tylerwinegarner.com

    Canon 40d | Canon 17-40 f/4L | Tamron 28-75mm f/2.8 | Canon 50mm f/1.8 | Canon 70-200mm f/4 L
  • Blurmore Registered Users Posts: 992 Major grins
    edited July 12, 2006
    I'm no engineer, but I think the easy answer falls into two categories: why? and how much?

    First I'll try to tackle why. The Bayer filter pattern of interpolation is by no means ideal. In the early days it was responsible for rainbow fringes in striped shirts and a host of other nasty aberrations and errors. The Foveon X3 chip actually attempted to 'fix' this problem but was only used in the Sigma SD9/SD10 cameras. Using the principle that different colors of light penetrate silicon to different depths, the Foveon actually recorded RGB at EVERY photosite, and did an OK job of it. The Fuji Super CCD also attempted to hedge the Bayer problem AND add dynamic range by separating photosites into a honeycomb and using low- and high-luminance detectors in a grid to more accurately capture highlight and shadow detail. The first digital scanning backs for 4x5 cameras performed multiple passes for each color and produced awesome images of static scenes for catalog and product shots, but exposure times ran into minutes.
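    To make the interpolation concrete, here is a naive sketch of the green-channel fill-in a Bayer sensor forces on you (heavily simplified; real demosaic algorithms are much smarter than a four-neighbor average):

        import numpy as np

        def interpolate_green(bayer):
            """Guess green at the red/blue sites of an RGGB mosaic by
            averaging the four measured green neighbors. Guesswork like
            this is where the early rainbow fringes came from."""
            h, w = bayer.shape
            g = np.full((h, w), np.nan)
            g[0::2, 1::2] = bayer[0::2, 1::2]   # measured green sites
            g[1::2, 0::2] = bayer[1::2, 0::2]
            pad = np.pad(g, 1, constant_values=np.nan)
            for y in range(h):
                for x in range(w):
                    if np.isnan(g[y, x]):       # red or blue site
                        g[y, x] = np.nanmean([pad[y, x + 1], pad[y + 2, x + 1],
                                              pad[y + 1, x], pad[y + 1, x + 2]])
            return g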
    This also goes to the why. A video camera has to simulate motion by recording 60 or so frames per second, and to produce that motion the frames MUST be captured at a constant rate. A single-CCD video camera using an RGB Bayer filter array works great in good light: color is recorded accurately and noise is decent. When the light falls, the single CCD misses shadow information, and in high-contrast scenes highlight detail is blown. By using a beam splitter and sending each color to a dedicated CCD, the dynamic response of each chip is easier to control, since the whole chip records just one color. There's no worry about one photosite becoming over-energized and spilling energy into an adjacent (differently colored) site and creating noise; gain and response are fine-tuned for just one color.
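    A crude illustration of that per-channel tuning, with made-up signal levels:

        import numpy as np

        # Hypothetical tungsten-lit scene: strong red, starved blue.
        mean_signal = np.array([0.9, 0.5, 0.2])           # R, G, B

        # Single chip: one analog gain serves the whole mosaic, so lifting
        # the weak blue sites amplifies their noise right along with them.
        # 3-chip: each sensor gets its own gain and response curve, so the
        # blue chip can be driven harder without touching red or green.
        per_chip_gain = mean_signal.max() / mean_signal   # [1.0, 1.8, 4.5]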


    So why not in a still camera? Because we control the time. Photographers don't have to take 60 frames per second to make motion. The other big cost would be resolution: to create a 3504 x 2336 image you would need three separate 8.2 MP sensors. The Foveon cameras used to try to spin their MP stats by saying that a 3 MP picture was really equivalent to a 9 MP picture because it recorded every color at every site, but the pixel dimensions of the image were still only equivalent to a 3 MP image. You would also sacrifice live optical viewing (something motion pictures lost at the birth of video), unless a prism could be devised to work in conjunction with the beam splitter. I could see only limited applications, like ultra-low-light high-speed photography or astrophotography, where a 3CCD still camera could come out a winner on cost versus benefit.
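    The arithmetic behind those numbers:

        w, h = 3504, 2336
        print(w * h)       # 8185344 -> about 8.2 MP per chip
        print(3 * w * h)   # 24556032 -> ~24.6M photosites across three chips,
                           # all to deliver the same 8.2 MP of pixel dimensions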
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited July 12, 2006
    Blurmore wrote:
    I'm no engineer, but I think the easy answer falls into two categories: why? and how much?

    First I'll try to tackle why. The Bayer filter pattern of interpolation is by no means ideal. In the early days it was responsible for rainbow fringes in striped shirts and a host of other nasty aberrations and errors. The Foveon X3 chip actually attempted to 'fix' this problem but was only used in the Sigma SD9/SD10 cameras. Using the principle that different colors of light penetrate silicon to different depths, the Foveon actually recorded RGB at EVERY photosite, and did an OK job of it. The Fuji Super CCD also attempted to hedge the Bayer problem AND add dynamic range by separating photosites into a honeycomb and using low- and high-luminance detectors in a grid to more accurately capture highlight and shadow detail. The first digital scanning backs for 4x5 cameras performed multiple passes for each color and produced awesome images of static scenes for catalog and product shots, but exposure times ran into minutes.
    This also goes to the why. A video camera has to simulate motion by recording 60 or so frames per second, and to produce that motion the frames MUST be captured at a constant rate. A single-CCD video camera using an RGB Bayer filter array works great in good light: color is recorded accurately and noise is decent. When the light falls, the single CCD misses shadow information, and in high-contrast scenes highlight detail is blown. By using a beam splitter and sending each color to a dedicated CCD, the dynamic response of each chip is easier to control, since the whole chip records just one color. There's no worry about one photosite becoming over-energized and spilling energy into an adjacent (differently colored) site and creating noise; gain and response are fine-tuned for just one color.


    So why not in a still camera? Because we control the time. Photographers don't have to take 60 frames per second to make motion. The other big cost would be resolution: to create a 3504 x 2336 image you would need three separate 8.2 MP sensors. The Foveon cameras used to try to spin their MP stats by saying that a 3 MP picture was really equivalent to a 9 MP picture because it recorded every color at every site, but the pixel dimensions of the image were still only equivalent to a 3 MP image. You would also sacrifice live optical viewing (something motion pictures lost at the birth of video), unless a prism could be devised to work in conjunction with the beam splitter. I could see only limited applications, like ultra-low-light high-speed photography or astrophotography, where a 3CCD still camera could come out a winner on cost versus benefit.

    Blurmore, you are consistently a font of knowledge. Thanks for the explanation. (Where did I hear that?)

    Seriously, thanks. Good explanation.

    ziggy53
    Moderator of the Cameras and Accessories forums
  • wxwax Registered Users Posts: 15,471 Major grins
    edited July 12, 2006
    Great stuff.

    One small thing: video is normally 30 fps.
    Sid.
    Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
    http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
  • Blurmore Registered Users Posts: 992 Major grins
    edited July 12, 2006
    wxwax wrote:
    Great stuff.

    One small thing: video is normally 30 fps.

    Yeah, I wasn't sure about that. Isn't something 60? Maybe 16 fps for movie film? Sorry, not a video person.
  • TylerW Registered Users Posts: 428 Major grins
    edited July 12, 2006
    wxwax wrote:
    Great stuff.

    One small thing: video is normally 30 fps.

    Video (NTSC) is 30 fields, but to get the appropriate data into all those fields, you're actually recording at 60 fps.
    http://www.tylerwinegarner.com

    Canon 40d | Canon 17-40 f/4L | Tamron 28-75mm f/2.8 | Canon 50mm f/1.8 | Canon 70-200mm f/4 L
  • wxwax Registered Users Posts: 15,471 Major grins
    edited July 12, 2006
    TylerW wrote:
    Video (NTSC) is 30 fields, but to get the appropriate data into all those fields, you're actually recording at 60 fps.
    Well, needless to say I don't know that technical stuff! I just know our counters show 30 per second.

    So I asked! Turns out it's reversed - 30 frames per second, two fields in each frame, so 60 fields per second.

    No biggie tho, Blurmore's explanation is terrific and very helpful. And along the way, thanks to my additional education, I learned more about drop frame and why I should always work in drop frame. So a bonus!
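    (Sidebar for the curious: drop frame doesn't actually discard any frames; it skips certain timecode labels so the count stays in step with real time at 29.97 fps. A rough sketch of the renumbering, as I understand it:)

        def drop_frame_tc(frame):
            """NTSC drop-frame timecode: skip labels ;00 and ;01 at the start
            of every minute EXCEPT every tenth minute, so the labels track
            real time at 29.97 fps while still counting in units of 30."""
            d, m = divmod(frame, 17982)         # 17982 frames per 10-minute block
            frame += 18 * d                     # 18 labels skipped per full block
            if m > 1:
                frame += 2 * ((m - 2) // 1798)  # 2 more per drop minute entered
            f = frame % 30
            s = frame // 30 % 60
            mn = frame // 1800 % 60
            h = frame // 108000
            return f"{h:02d}:{mn:02d}:{s:02d};{f:02d}"  # ';' marks drop frame

        print(drop_frame_tc(1800))   # 00:01:00;02 -- labels ;00 and ;01 skipped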
    Sid.
    Catapultam habeo. Nisi pecuniam omnem mihi dabis, ad caput tuum saxum immane mittam
    http://www.mcneel.com/users/jb/foghorn/ill_shut_up.au
  • colourbox Registered Users Posts: 2,095 Major grins
    edited July 12, 2006
    The catch is that NTSC is 30 interlaced frames per second (actually 29.97 fps). You're not recording 60 full frames; you're recording 60 half-frames of alternating lines that are interlaced back together into 30 frames.

    If you are recording on one of the new progressive-scan digital formats, there may not be interlacing at all. (Some HD formats are interlaced, some are not.) With progressive scan, every line is captured in order from top to bottom, just like in your digital camera, and 30 fps means 30 full frames per second.
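    A sketch of the weave, using hypothetical 640x480 fields:

        import numpy as np

        # Each field carries every OTHER scan line; fields arrive 60 per second.
        h, w = 480, 640
        field_a = np.random.rand(h // 2, w)   # lines 0, 2, 4, ...
        field_b = np.random.rand(h // 2, w)   # lines 1, 3, 5, ...

        # Interlacing weaves pairs of half-frames back into 30 full frames.
        frame = np.empty((h, w))
        frame[0::2] = field_a
        frame[1::2] = field_b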

    The interlacing is a workaround dating back to the earliest analog TV electronics of the 1930s, and the 29.97 comes from squeezing color into the existing black-and-white standard in the early 1950s. Tech is better now. :D
  • erich6 Registered Users Posts: 1,638 Major grins
    edited July 12, 2006
    Also, with a beam splitter you pay a price in signal-to-noise ratio... not too important for video in decent lighting, but critical for image quality in photographs. Then again, the noise of my 3CCD video camera in indoor lighting scenes drives me nuts.
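    Back-of-the-envelope version of that trade-off, assuming a plain splitter that divides the light three ways evenly (real dichroic prisms waste less light than this):

        import numpy as np

        photons_per_site = 9000                       # hypothetical full-light signal
        photons_after_split = photons_per_site / 3    # each chip sees one third

        # Photon shot noise grows as sqrt(signal), so SNR ~ sqrt(photons):
        print(np.sqrt(photons_per_site))      # ~94.9
        print(np.sqrt(photons_after_split))   # ~54.8, i.e. down by sqrt(3)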

    There are no diffraction gratings used in SLRs that I'm aware of... a grating would disperse the light into different colors before it reached the sensor. The sensor pixels are actually filtered at the chip for their respective RGB responses.

    Erich