Interesting new deblurring technology from Msft

Richard Administrators, Vanilla Admin Posts: 19,954 moderator
edited August 4, 2010 in Cameras
http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/

OK, so it's only in the lab at the moment and is not something I would want to attach to my camera, but the results are pretty impressive. Give it a few years to shrink and improve...

Comments

  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited August 1, 2010
    Nice catch, Richard.
    ziggy53
    Moderator of the Cameras and Accessories forums
  • ian408 Administrators Posts: 21,934 moderator
    edited August 1, 2010
    Interesting idea. I'm curious what it adds to the image capture time?
    Moderator Journeys/Sports/Big Picture :: Need some help with dgrin?
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited August 1, 2010
    ian408 wrote: »
    Interesting idea. I'm curious what it adds to the image capture time?

    If I understand the proposed system, the image's anti-blur processing would be deferred until outside the camera, i.e. post-processing with a computer and software. The image capture would be unaffected except for adding a new "tag" of data with the measured camera-shake angle and velocity. The post-processing would simply plug the camera-shake data into existing deconvolution software. (The "aided blind deconvolution" described in the article.)

    I think we are many years away from putting deconvolution processing into a camera body.

    Anyone can explore deconvolution now by using the simple forms found in Photoshop's "Smart" sharpen and RAWTherapee's deconvolution sharpening. More advanced algorithms are available in the "Image Analyzer" freeware (Windows). Of course, all of these require you to guess the extent and direction of the image shake, data that the system in Richard's link would record automatically.

    http://logicnet.dk/Analyzer/
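    To make the "guessing" concrete, here is a rough sketch of Richardson-Lucy deconvolution in plain numpy, with a hand-built linear motion kernel standing in for the guessed shake extent and direction. (My own illustration, not what any of the above products actually implement; the helper names and parameters are invented for the example.)

```python
import numpy as np

def motion_psf(length, angle_deg, size=15):
    """Build a normalized linear motion-blur kernel (PSF)."""
    psf = np.zeros((size, size))
    center = size // 2
    angle = np.deg2rad(angle_deg)
    # Walk along the motion direction and mark the pixels the
    # "camera shake" trajectory passes through.
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        x = int(round(center + t * np.cos(angle)))
        y = int(round(center + t * np.sin(angle)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def convolve2d(img, k):
    """Naive 'same'-size 2-D convolution via zero padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

def richardson_lucy(blurred, psf, iterations=30):
    """Iteratively sharpen `blurred` given a known (or guessed) PSF."""
    estimate = np.full_like(blurred, 0.5, dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = convolve2d(estimate, psf)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= convolve2d(ratio, psf_flipped)
    return estimate
```

    With the wrong length or angle the "restored" image gets worse, not better, which is exactly why measuring the shake in-camera is attractive.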
  • ian408 Administrators Posts: 21,934 moderator
    edited August 2, 2010
    ziggy53 wrote: »
    If I understand the proposed system, the image's anti-blur processing would be deferred until outside the camera...

    I got that. But you still have to record the "shake" data and somewhere in my quick scan of the paper, it was being calculated per-pixel (which seems odd). That's a lot of additional data.

    I plan on reading the paper more thoroughly, although I think the easiest way to steady a hand-held camera might still be a gyroscope :D
  • Richard Administrators, Vanilla Admin Posts: 19,954 moderator
    edited August 2, 2010
    ian408 wrote: »
    I got that. But you still have to record the "shake" data and somewhere in my quick scan of the paper, it was being calculated per-pixel (which seems odd). That's a lot of additional data.

    I took a brief look at the full paper and it is mostly way beyond me. My impression, though, is that the additional data is only that supplied by the accelerometers, just a few numbers really. The per-pixel calculations are then done on the captured image using the motion parameters as an additional input to the algorithm. What I haven't understood yet is what they mean by a "prior" image. Need more coffee.
  • ian408 Administrators Posts: 21,934 moderator
    edited August 2, 2010
    By "prior", I think they mean original before any processing is done.
  • Richard Administrators, Vanilla Admin Posts: 19,954 moderator
    edited August 2, 2010
    ian408 wrote: »
    By "prior", I think they mean original before any processing is done.
    Hmm... I had more coffee and found some articles that were even harder to understand than the one I posted. But I'm pretty sure now that the prior is computationally generated, and that different functions have been used with varying degrees of success. Are there any signal detection pros out there who could clarify? It's all very blurry to me.
  • fdisilvestro Registered Users Posts: 6 Beginner grinner
    edited August 2, 2010
    Suppose you take a picture of a point of light. The effects of lens distortions, the AA filter and camera shake, to name just a few, will result in a distorted image where that point is no longer a point but perhaps an irregular shape.

    The mathematical function that describes the conversion from a point to that irregular shape is called a Point Spread Function or PSF.

    In theory, if you know that PSF precisely, you should be able to reconstruct the original image by means of mathematical deconvolution (up to a certain degree).
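    As a toy demonstration of that idea, here is a frequency-domain (Wiener) deconvolution sketch in numpy. It assumes the PSF is known exactly; the constant `k` is an invented regularization knob that keeps the division from blowing up at frequencies the PSF nearly destroys:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Restore `blurred` given a known PSF via a Wiener filter.

    k is a noise-to-signal estimate: larger values give a smoother,
    safer result; smaller values sharpen more but amplify noise.
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF -> transfer function
    G = np.fft.fft2(blurred)                       # blurred image spectrum
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G  # Wiener filter
    return np.real(np.fft.ifft2(F_hat))
```

    With k close to zero this degenerates into naive inverse filtering, which is why real tools effectively have to guess not just the PSF but also how much to trust it.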

    In practice, knowing the exact PSF is almost impossible, so a guess has to be made. There is commercial software that uses deconvolution to restore slight defocus:

    Focus magic
    Focus fixer
    Photoshop smart sharpen using lens blur
    Lightroom 3 sharpening, provided you set the Detail slider to 100

    Deconvolution is also used in capture sharpening, to restore the image after degradation by the AA filter.

    What I understand from the paper is that they are trying to model the PSF based on motion data, which should give better results than a general guess.
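    A cartoon version of that modelling step (my own invention, far simpler than the paper's spatially varying model, since it assumes the blur is identical across the whole frame): integrate the gyro's angular-velocity samples over the exposure, convert the small rotations into image-plane displacement via the focal length, and rasterize the trajectory into a kernel.

```python
import numpy as np

def psf_from_gyro(omega_x, omega_y, dt, focal_px, size=21):
    """Toy PSF built from gyroscope samples (hypothetical interface).

    omega_x / omega_y: angular-velocity samples (rad/s) about the two
    axes perpendicular to the optical axis, taken during the exposure.
    focal_px: focal length in pixels, which converts small rotation
    angles into image-plane displacement.
    """
    # Integrate angular velocity -> angle -> pixel displacement.
    theta_x = np.cumsum(np.asarray(omega_x)) * dt
    theta_y = np.cumsum(np.asarray(omega_y)) * dt
    dx = focal_px * np.tan(theta_y)   # yaw moves the image horizontally
    dy = focal_px * np.tan(theta_x)   # pitch moves it vertically

    # Rasterize the trajectory into a kernel; each sample deposits an
    # equal slice of "exposure time" at the pixel it lands on.
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in zip(dx, dy):
        xi, yi = int(round(c + x)), int(round(c + y))
        if 0 <= xi < size and 0 <= yi < size:
            psf[yi, xi] += 1.0
    return psf / psf.sum()
```

    Feeding a kernel like this into ordinary deconvolution is, as far as I can tell, the essence of the "aided" in aided blind deconvolution.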

    There is an excellent thread on the topic on the Luminous-Landscape forum here
  • NeilL Registered Users Posts: 4,201 Major grins
    edited August 3, 2010
    Interesting, R, thanks!

    Funny thing, I can see sharp tho I am moving, my head is bobbing, my eyes are sliding this way and that... seems like we are not noticing something very sharp in what nature does in our visual system.

    And I think it's this: our vis sys *causes* blurring and therefore knows the exact nature of the blur to compensate for to get sharpness. On the other hand, the tech we have is chasing the horse after it has bolted, trying to calculate its trajectory from movements we can't control. What if we make gear that, like our vis sys, doesn't aim to calculate blur created by extraneous factors, but instead creates its own known blur to overwhelm those factors, blur it can then precisely reverse-engineer back to sharpness?

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • pathfinder Super Moderators Posts: 14,703 moderator
    edited August 3, 2010
    Neil, what you "see" is not what registers directly on your retina in a one to one ratio.

    Your retina is a source of image data, but your "vision" is really a construct of your occipital cortex and your consciousness. When we turn our head, the world seems stable around us, but just look at a video monitor as you spin a video camera - "vision" is not just like a camera image, despite the frequent statement that the eye is like a camera. That statement, while true, is also not an accurate description of human vision...
    Pathfinder - www.pathfinder.smugmug.com

    Moderator of the Technique Forum and Finishing School on Dgrin
  • Richard Administrators, Vanilla Admin Posts: 19,954 moderator
    edited August 4, 2010
    NeilL wrote: »
    What if we make gear that, like our vis sys, doesn't aim to calculate blur created by extraneous factors, but instead creates its own known blur to overwhelm those factors, blur it can then precisely reverse-engineer back to sharpness?

    Neil

    My (feeble) understanding is that this is what deconvolution software attempts to do, by estimating the distortion introduced by optical imperfections in lenses, sensor noise, sloppy focus and, in this case, camera motion.
  • ziggy53 Super Moderators Posts: 24,078 moderator
    edited August 4, 2010
    Richard wrote: »
    My (feeble) understanding is that this is what deconvolution software attempts to do, by estimating the distortion introduced by optical imperfections in lenses, sensor noise, sloppy focus and, in this case, camera motion.

    Image data convolution (also called by the simpler term "image blur") is typically caused by the things you describe, but the type of blur caused by defocus and the AA filter is quite different from that caused by motion. There are, therefore, completely different means of treating them, and the deconvolution algorithms differ for the different types of blur.

    The article in the first link of this thread deals only with blur caused by "camera" motion, but "subject" motion blur may also (sometimes) be effectively treated by the same deconvolution algorithm, although obviously subject motion is not measured by the above sensors.
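    The two kinds of kernels really do look nothing alike, which is why one deconvolution recipe cannot serve both. A toy comparison (illustrative stand-ins, not the real optics):

```python
import numpy as np

def gaussian_defocus_psf(sigma, size=15):
    """Isotropic Gaussian kernel, a rough stand-in for defocus/AA blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def line_motion_psf(length, size=15):
    """Horizontal line kernel, a rough stand-in for camera-shake blur."""
    psf = np.zeros((size, size))
    c = size // 2
    half = length // 2
    psf[c, c - half:c - half + length] = 1.0
    return psf / psf.sum()
```

    The defocus kernel is the same in every direction, while the motion kernel concentrates all its energy along one path, so an algorithm tuned for one will misfire on the other.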

    I am not trying to trivialize the research and results from the above link; they are a significant development, and we will ultimately see positive results from this sort of technology. It is a partial solution for the future automation of digitally reducing one type of image blur.
  • NeilL Registered Users Posts: 4,201 Major grins
    edited August 4, 2010
    pathfinder wrote: »
    Neil, what you "see" is not what registers directly on your retina in a one to one ratio.

    Your retina is a source of image data, but your "vision" is really a construct of your occipital cortex and your consciousness. When we turn our head, the world seems stable around us, but just look at a video monitor as you spin a video camera - "vision" is not just like a camera image, despite the frequent statement that the eye is like a camera. That statement, while true, is also not an accurate description of human vision...

    Understand.

    My understanding is like this: if you are handwriting a note at your desk in your study, you will have no trouble forming the letters to be legible. However, if you are writing that note on the seat of a 4WD travelling quickly over very rough terrain, chances are your writing will be illegible no matter how much effort you make. Now imagine that a geeky type says to you, "No worries, just come with me." He takes you to a lab and straps you into something like those machines astronauts train in, which throw you around with the force of several Gs. The geek tells you to write your note while you and the machine are going through the motions. When you're done, he takes you to a printer and shows you the note you have written, and you see it's even more legible than the copy you did at home at your desk. How come? Well, all the extreme movements of the astronaut machine are computer controlled, and their effects on your handwriting can be removed simply by subtracting those induced movements. Those movements are also so extreme that they swamp the comparatively imperceptible shaking of your hand when you are writing at your desk, so the result is even better than the desk copy. They would also very much attenuate the bumpy car ride, so a pretty fair copy could be got if you were in the machine in the car as it travelled quickly over rough ground.

    This is something like what the visual system does. It introduces extreme noise into the perception of a signal which overwhelms the noise inherent in the package with the signal, and then reverse engineers its own noise contribution, producing a sharp signal with the native noise much attenuated. One way it does this is by analysing and comparing multiple streams of the signal-noise package as it is filtered through its own noise-producing processes. This concept is called redundancy of input.

    Pretty smart, and the opposite of what the engineers in R's paper are attempting, which is to measure the noise native to the signal and then compensate for it.

    Neil
  • Richard Administrators, Vanilla Admin Posts: 19,954 moderator
    edited August 4, 2010
    NeilL wrote: »
    Pretty smart, and the opposite of what the engineers in R's paper are attempting, which is to measure the noise native to the signal and then compensate for it.

    Neil

    Interesting. Besides being much more sophisticated, the brain has the advantage of having a time dimension, which is lacking (today) in still photography. Imagine a future technology in which exposure time is reduced to nanoseconds and each shutter click produces, say, 30 frames, bracketed by focal length, aperture and exposure. It would then be possible--using the petahertz processor in your laptop--to algorithmically eliminate camera shake, subject motion and focus issues, as well as to enable DOF adjustments in post. It would not be as elegant as human perception, but brute force can accomplish a lot if it is brute enough.
  • Dan7312 Registered Users Posts: 1,330 Major grins
    edited August 4, 2010
    Interesting post on the eye as a camera on B&H's blog

    http://photography.bhinsights.com/content/photographic-eye.html


    pathfinder wrote: »
    Neil, what you "see" is not what registers directly on your retina in a one to one ratio.

    Your retina is a source of image data, but your "vision" is really a construct of your occipital cortex and your consciousness. When we turn our head, the world seems stable around us, but just look at a video monitor as you spin a video camera - "vision" is not just like a camera image, despite the frequent statement that the eye is like a camera. That statement, while true, is also not an accurate description of human vision...
  • Icebear Registered Users Posts: 4,015 Major grins
    edited August 4, 2010
    I am in absolute awe of the knowledge available to me for free on this site. You guys, your understanding of these incredibly complex matters, your ability to put them into reasonably comprehensible language, and YOUR WILLINGNESS TO DO SO, humble me. Thank you so much. Again.
    John :
    Natural selection is responsible for every living thing that exists.
    D3s, D500, D5300, and way more glass than the wife knows about.