Debarrelized fisheye and image quality

Manfr3d Registered Users Posts: 2,008 Major grins
edited August 13, 2006 in Finishing School
I am seeking advice from someone who has experience with
debarrelizing images taken with a Canon 15mm/2.8 fisheye lens.

There are zillions of tutorials and discussions online on how to
post-process fisheye pictures (debarrelize, stitch to a pano, etc.),
as well as test reviews that measure the image quality of these
lenses. But no one shows a comparison of the image quality between
images taken with a normal wide-angle lens and images taken with a
fisheye and then debarrelized in post-processing.

Any input is very welcome!
“To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
― Edward Weston

Comments

  • Manfr3d Registered Users Posts: 2,008 Major grins
    edited August 7, 2006
    First, thank you for the reply!

    Those links are exactly what I found too when researching
    this topic. But other than "ImageAlign [...] renders quality
    output" or "Panorama Tools employs a high-quality sampling
    algorithm with negligible image degradation", little to no
    objective, measured information is given about the image
    quality after debarrelizing a fisheye.

    It's probably not very hard to render "quality output"
    if the lens has less than 2% distortion, which is a common
    value for ordinary lenses. But a fisheye is somewhat extreme,
    because the pixels at the edges of the frame need to be
    shifted a lot more if one wants to get rid of the barrel
    distortion.

    The image needs to be stretched (edges get pulled away
    from the center of the image). Pixels are finite elements,
    so stretching an image results in "spaces" between the
    original pixels that need to be filled with new pixels via
    interpolation (some programs downsize instead).

    Obviously the new image can't be better in terms
    of resolution and contrast than the original one.
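
    To make the mechanism concrete, here is a minimal sketch in Python
    of such a remapping. It assumes an ideal equidistant fisheye model
    (r = f*theta) remapped to a rectilinear projection with bilinear
    interpolation via OpenCV; this is not what PTLens or ImageAlign do
    internally, and the defish() name and the pixel focal length are
    placeholders, purely for illustration.

        # Minimal sketch: ideal equidistant fisheye (r = f*theta) remapped to a
        # rectilinear projection (r = f*tan(theta)) by inverse mapping.
        import numpy as np
        import cv2  # pip install opencv-python

        def defish(img, focal_px):
            h, w = img.shape[:2]
            cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

            # Coordinates of every pixel of the *output* (rectilinear) image.
            ys, xs = np.indices((h, w), dtype=np.float32)
            dx, dy = xs - cx, ys - cy
            r_rect = np.hypot(dx, dy)

            # For each output pixel, the fisheye radius it must be read from:
            # theta = atan(r_rect / f), r_fish = f * theta.
            theta = np.arctan2(r_rect, focal_px)
            r_fish = focal_px * theta

            # r_fish / r_rect shrinks towards the edges: output pixels far from
            # the center all sample from a much smaller region of the fisheye
            # frame, i.e. the edges get stretched and must be interpolated.
            scale = np.divide(r_fish, r_rect, out=np.ones_like(r_rect),
                              where=r_rect > 0)
            map_x = (cx + dx * scale).astype(np.float32)
            map_y = (cy + dy * scale).astype(np.float32)

            # Bilinear interpolation fills the "spaces" between original pixels.
            return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)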

    But by which factor ... nobody seems to know. :cry
    “To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
    ― Edward Weston
  • ziggy53 Super Moderators Posts: 24,133 moderator
    edited August 8, 2006
    Manfr3d wrote:
    First, thank you for the reply!

    ...

    The image needs to be stretched (edges get pulled away
    from the center of the image). Pixels are finite elements,
    so stretching an image results in "spaces" between the
    original pixels that need to be filled with new pixels via
    interpolation (some programs downsize instead).

    Obviously the new image can't be better in terms
    of resolution and contrast than the original one.

    But by which factor ... nobody seems to know. :cry

    I think the problem is that there is not just one "fisheye" specification for a lens. How much distortion you have to correct for, whether the lens is used on a crop camera, and so on, can have dramatic effects on the outcome.

    Likewise, there is not one method of correction but several, so the software used will have a direct effect on the result.

    If you are talking about a 180-degree fisheye, in a circular aspect, completely rectilinear-corrected in software, shot with an 8 MPix imager, I would be willing to bet that the actual usable resolution would compare in detail to a true rectilinear lens at about 1 MPix resolution. In other words, you would lose a lot compared to a true optical lens of the same focal length. (A Sigma 12-24mm, f4.5-5.6, EX-DG, on a full-frame camera is the widest rectilinear lens that I know of, and at 12mm it has an angle of view of 122 degrees, for comparison. The Voigtländer/Cosina 12mm, f5.6 is also a full-frame, but much more market-limited, lens, rated at 121 degrees.)

    One way to simulate the loss accurately would be to take a good-quality image at full resolution with a fully corrected lens, distort it to the fisheye equivalent you desire, and then reconvert it back to a corrected, rectilinear perspective. The loss should be roughly twice that of starting with a fisheye and correcting to rectilinear.
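
    A rough sketch of that round trip in Python, reusing the idealized
    equidistant model (and the defish() function) from the sketch in the
    earlier post. The file name and the pixel focal length are placeholders,
    and real correction software uses its own lens profiles, so treat this
    purely as an illustration of the test procedure:

        # Sketch of the round-trip test: rectilinear -> simulated fisheye ->
        # back to rectilinear, then compare with the original.
        # `defish` is the function from the sketch in the earlier post.
        import numpy as np
        import cv2

        def fish(img, focal_px):
            """Apply simulated equidistant fisheye distortion (inverse of defish)."""
            h, w = img.shape[:2]
            cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
            ys, xs = np.indices((h, w), dtype=np.float32)
            dx, dy = xs - cx, ys - cy
            r_fish = np.hypot(dx, dy)

            # An output pixel at fisheye radius r_fish corresponds to the angle
            # theta = r_fish / f and is read from rectilinear radius f*tan(theta).
            theta = r_fish / focal_px
            r_rect = focal_px * np.tan(np.clip(theta, 0.0, np.pi / 2 - 1e-3))
            scale = np.divide(r_rect, r_fish, out=np.ones_like(r_fish),
                              where=r_fish > 0)
            map_x = (cx + dx * scale).astype(np.float32)
            map_y = (cy + dy * scale).astype(np.float32)
            return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

        original = cv2.imread("sharp_full_res.jpg")   # any sharp full-resolution shot
        round_trip = defish(fish(original, 1500.0), 1500.0)

        # Crude per-pixel error map; the loss is roughly twice that of a
        # single defish step, as noted above.
        diff = cv2.absdiff(original, round_trip)
        print("mean absolute error:", diff.mean())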

    Try that and let us know your results.

    Thanks,

    ziggy53
    Moderator of the Cameras and Accessories forums
  • spider-t Registered Users Posts: 443 Major grins
    edited August 13, 2006
    DxO Optics
    Not sure if this will help, but I love this software. It has some great batch-processing capability, and it also de-barrelizes.

    It works to the exact specs of your camera and your lens. They have a trial version.

    http://www.dxo.com

    cheers,
    Trish
  • Manfr3d Registered Users Posts: 2,008 Major grins
    edited August 13, 2006
    I tried to carry out ziggy's idea of fishing and then
    defishing an image, but that brings up two problems.

    First, most lens-correction programs handle fisheyes in one
    direction only: they can debarrelize (defish), but they can't
    add the fisheye distortion back.

    Second, if I fish an image and later defish it to compare
    the output to the original image, the image quality (IQ)
    will depend on two image manipulations.

    One couldn't tell from the output how much of the IQ
    degradation came from the defishing alone.

    I thought it would be most logical to create an artificial
    image with maximum contrast and resolution, debarrelize it,
    and measure the IQ of the output.

    One could argue that the best approach would be to create a
    fisheye-looking image instead of one with a rectangular raster.
    I decided to go for the raster. This way I avoid the mathematics
    and programming effort of creating such an image (as well as the
    discussion about whether or not to use anti-aliasing).

    The image I created is a 4992x3328-pixel (~16MP, 1DsMkII)
    uncompressed JPEG. The pattern throughout the image looks
    like this:

    16mp_rb_crop_center_400.jpg
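
    For anyone who wants to build a similar chart, something along
    these lines would do. This is only a sketch, not the exact pattern
    used above; a 1-pixel red/blue checker is assumed here purely for
    illustration.

        # Sketch: generate a high-contrast synthetic chart at 1DsMkII resolution.
        import numpy as np
        from PIL import Image

        W, H = 4992, 3328                     # ~16.6 MP
        ys, xs = np.indices((H, W))
        checker = (xs + ys) % 2               # 1-pixel checkerboard

        img = np.zeros((H, W, 3), dtype=np.uint8)
        img[checker == 0] = (255, 0, 0)       # red squares
        img[checker == 1] = (0, 0, 255)       # blue squares

        # quality=100 and no chroma subsampling keep JPEG artifacts minimal.
        Image.fromarray(img).save("test_chart.jpg", quality=100, subsampling=0)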

    I loaded this image into PTLens and set the options
    to "fisheye" and distortion to "141". These settings
    defish images taken with the Canon 15mm/2.8 fisheye
    on full-frame cameras.

    The results were interesting. There were very strong
    Moiré effects due to the nature of the pixel pattern
    of the original image.

    Near the center, 100% crop:
    16mp_rb_pt_moree_100.jpg
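
    That moiré is pretty much what one would expect when a pattern at
    the pixel pitch gets resampled. As a quick illustration of the effect
    in isolation (independent of any defishing), rescaling such a checker
    by a non-integer factor already produces strong interference patterns.
    A small sketch, assuming the chart file from the sketch above:

        # Sketch: resampling a 1-pixel checker by a non-integer factor
        # produces strong moiré, with no lens correction involved at all.
        from PIL import Image

        chart = Image.open("test_chart.jpg")
        w, h = chart.size
        resampled = chart.resize((int(w * 0.97), int(h * 0.97)), Image.BILINEAR)
        resampled.save("moire_demo.jpg", quality=100)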

    Comparing the source image with the resulting one,
    one can see several things. The first is that contrast
    and resolution vary a lot throughout the whole frame.
    Let's take a look at what it looks like along the
    horizontal axis.

    In the center, where the least distortion took place,
    one can see that pixels have been enlarged by 300%
    to 400%.

    16mp_rb_pt_crop_center_400.jpg
    Center, 400% crop.


    Halfway between the center and the left side, pixels seem
    to have merged (loss of contrast) and been enlarged by ~200%:

    16mp_rb_pt_crop_left_of_center_400.jpg
    Between Center and Left side, 400% crop.


    On the very left side of the frame, pixels seem to
    have merged (loss of contrast) and been enlarged by ~200%:


    16mp_rb_pt_crop_left_400.jpg
    Left side, 400% crop.


    Looking along the bottom of the usable frame
    (more on this in a moment) one can see:


    Pixel merging and ~200% enlargement at the bottom,
    between the vertical center and the left side of the frame:

    16mp_rb_pt_crop_left_bottom_400.jpg
    bottom, between vertical center and left side of frame, 400% crop.


    Pixel merging and ~200% enlargement at the bottom
    left of the frame:

    16mp_rb_pt_crop_left_of_center_bottom_400.jpg
    bottom left of frame, 400% crop.


    The enlargement obviously results in a loss of
    resolution, and what I called merging results in a
    loss of contrast.


    Aside from pixel peeping, the usable image size also
    changes when one defishes the original image.

    Original image (1/11):
    16mp_rb_1to11.jpg


    Defished image (1/11; here most of the moiré comes from downsizing):
    16mp_rb_pt_1to11.jpg


    In the beginning we had 4992x3328 pixels (~16MP); the
    defished output only allows a 4992x1869-pixel (~9MP)
    rectangular crop at maximum. That equals a reduction of roughly 44%.
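
    Spelling out the arithmetic from the numbers above (a quick Python check):

        full = 4992 * 3328                          # 16,613,376 px (~16.6 MP)
        crop = 4992 * 1869                          #  9,330,048 px (~9.3 MP)
        print(f"remaining: {crop / full:.1%}")      # ~56.2%
        print(f"reduction: {1 - crop / full:.1%}")  # ~43.8%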


    I'm not sure if the enlarged pixels we saw came from
    the distortion alone, or if they formed because of the
    Moiré effect. It is also conceivable that the enlarged
    pixels were formed by a number of pixels of equal color
    in the same area. In that case measuring the IQ becomes
    very difficult. Some more testing with different pixel
    patterns seems to be necessary before one can draw a
    final conclusion.

    For now the intermediate result is:

    Roughly a 44% reduction in usable image size, with the
    remaining pixels enlarged by ~300% near the center. Taken
    together, the effective resolution ends up much closer to
    ~5MP than to the original ~16MP.

    This is my first attempt at comparing the image quality of
    two images, so there are likely errors in my assumptions
    and in my interpretation of the results. I didn't measure
    the "merging", for example. Resolution and contrast vary so
    strongly across the frame that the evaluation should probably
    be done by some sort of testing program (which I don't own).
    I'd be happy if you can point out my errors or contribute to
    my observations.
    “To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
    ― Edward Weston