Interesting new deblurring technology from Microsoft
http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/
OK, so it's only in the lab at the moment and is not something I would want to attach to my camera, but the results are pretty impressive. Give it a few years to shrink and improve...
Comments
Moderator of the Cameras and Accessories forums
If I understand the proposed system correctly, the anti-blur processing would be deferred until after capture, i.e. post-processing with a computer and software. The image capture would be unaffected except for the addition of a new "tag" of data containing the measured camera shake angle and velocity. The post-processing would simply plug that camera shake data into existing deconvolution software. (The "aided blind deconvolution" described in the article.)
I think we are many years away from putting deconvolution processing into a camera body.
Anyone can explore deconvolution now by using the simple forms of it found in Photoshop's "Smart" sharpen and in RawTherapee's deconvolution sharpening. More advanced algorithms are available in the "Image Analyzer" freeware (Windows). Of course, all of these require you to "guess" the shake extent and direction, exactly the data that the system in Richard's link would measure for you (a rough sketch of this guessed-PSF approach follows the link below).
http://logicnet.dk/Analyzer/
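To make the "guessing" concrete, here is a rough Python sketch of the same idea using scikit-image's stock Richardson-Lucy routine. It is not how any of the tools above are implemented, and the file names are hypothetical; the point is simply that you have to supply a blur length and angle by trial and error.

```python
# Rough sketch of guessed-PSF deconvolution. Uses scikit-image's stock
# Richardson-Lucy routine; the blur length and angle are the numbers you
# would otherwise have to guess by eye.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def motion_psf(length_px, angle_deg, size=25):
    """Build a simple straight-line motion-blur kernel (a crude PSF guess)."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length_px / 2, length_px / 2, 4 * size):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

# "blurred.png" is a hypothetical shaken shot; tweak length/angle by eye,
# exactly the trial and error the motion sensors would make unnecessary.
blurred = img_as_float(io.imread("blurred.png", as_gray=True))
psf = motion_psf(length_px=9, angle_deg=30)
restored = richardson_lucy(blurred, psf, 30)  # 30 iterations
io.imsave("restored.png", (np.clip(restored, 0, 1) * 255).astype(np.uint8))
```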
Moderator of the Cameras and Accessories forums
I got that. But you still have to record the "shake" data and somewhere in my quick scan of the paper, it was being calculated per-pixel (which seems odd). That's a lot of additional data.
I plan on reading the paper more thoroughly, although I think the easiest way to steady a handheld camera might still be a gyroscope.
I took a brief look at the full paper and it is mostly way beyond me. My impression though is that the additional data is only that which is supplied by the accelerometers, just a few numbers really. The per-pixel calculations are then done on the captured image using the motion parameters as an additional input to the algorithm. What I haven't understood yet is what they mean by "prior" image. Need more coffee.
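For what it's worth, my reading is the same: the extra data would just be a short log of gyro/accelerometer samples attached to the file, not per-pixel data. A purely hypothetical sketch of what such a tag might contain (none of these field names come from the paper):

```python
# Purely hypothetical sketch of the extra "tag" of motion data a camera
# might attach to a shot; the field names are invented, not from the paper.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionSample:
    t_ms: float                             # time since shutter open
    gyro_dps: Tuple[float, float, float]    # angular velocity, deg/s
    accel_g: Tuple[float, float, float]     # linear acceleration, g

@dataclass
class ShakeTag:
    exposure_ms: float
    samples: List[MotionSample]             # e.g. 200 Hz x 1/10 s = ~20 samples

# Even a long exposure is only a handful of numbers, kilobytes at most,
# which the deconvolution software would turn into a per-pixel blur model.
```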
The mathematical function that describes the conversion from a point to that irregular shape is called a Point Spread Function or PSF.
In theory, if you know that PSF precisely, you should be able to reconstruct the original image by means of mathematical deconvolution (up to a certain degree).
In practice, knowing the exact PSF is almost impossible, so a guess has to be made. There are commercial software packages that use deconvolution to correct slight defocus:
Focus magic
Focus fixer
Photoshop smart sharpen using lens blur
Lightroom 3 sharpening, provided you set the Detail slider to 100
Deconvolution is also used in capture sharpening, to restore detail after the degradation caused by the AA filter.
What I understand from the paper is that they are trying to model the PSF from the motion data, which should give better results than a general guess.
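At the risk of oversimplifying the paper (which does much more, since camera rotation makes the blur vary across the frame), the basic idea of turning a motion trace into a PSF might look roughly like this. The gyro-derived displacement trace below is made up, and a single global shift-only blur is assumed:

```python
# Very rough sketch: turn a logged motion trace into a blur kernel, then
# deconvolve with it. Assumes a single global (shift-only) blur, which is a
# big simplification of the paper - real camera rotation gives a PSF that
# varies across the frame.
import numpy as np
from skimage.restoration import wiener

def psf_from_trace(dx_px, dy_px, size=31):
    """Rasterise the camera's pixel-displacement path into a kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in zip(dx_px, dy_px):
        xi, yi = int(round(c + x)), int(round(c + y))
        if 0 <= xi < size and 0 <= yi < size:
            psf[yi, xi] += 1.0
    return psf / psf.sum()

# Made-up "gyro-derived" displacement trace during the exposure (pixels).
t = np.linspace(0, 1, 50)
dx = 6 * t                 # steady drift to the right
dy = 2 * np.sin(3 * t)     # slight wobble

psf = psf_from_trace(dx, dy)
# 'blurred' would be the captured image scaled to [0, 1]; random stand-in here.
blurred = np.random.rand(256, 256)
restored = wiener(blurred, psf, balance=0.01)
```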
There is an excellent thread on the topic on the Luminous-Landscape forum here
Funny thing: I can see sharply even though I am moving, my head is bobbing, my eyes are sliding this way and that... it seems we are not noticing something very clever that nature does in our visual system. :D
And I think it's this: our visual system *causes* blurring, and therefore knows the exact nature of the blur it must compensate for to get sharpness. The technology we have, on the other hand, is chasing the horse after it has bolted, trying to calculate its trajectory from movements we can't control. What if we built gear that, like our visual system, doesn't try to calculate blur created by extraneous factors, but instead creates blur of its own that overwhelms those extraneous factors, blur it knows precisely and can therefore reverse-engineer back to sharpness?
Neil
http://www.behance.net/brosepix
Your retina is a source of image data, but your "vision" is really a construct of your occipital cortex and your consciousness. When we turn our head, the world seems stable around us, but just look at a video monitor as you spin a video camera: "vision" is not just a camera image, despite the frequent statement that the eye is like a camera. That statement, while true in a narrow sense, is not an accurate description of human vision...
Moderator of the Technique Forum and Finishing School on Dgrin
My (feeble) understanding is that this is what deconvolution software attempts to do: estimate the distortion introduced by optical imperfections in lenses, sensor noise, sloppy focus and, in this case, camera motion.
Image data convolution (often just called "image blur") is typically caused by the things you describe, but the blur caused by defocus and the AA filter is quite different from the blur caused by motion. The two therefore call for completely different treatments, and the deconvolution algorithms differ for each type of blur.
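A minimal sketch of why the kernel shape matters: defocus blur is roughly a filled disk, while motion blur is a streak, and deconvolving with the wrong shape does little good. This is illustrative only, not how any particular tool models its kernels.

```python
# Illustrative only: a defocus PSF (filled disk) versus a motion PSF (streak).
import numpy as np

def disk_psf(radius, size=21):
    yy, xx = np.mgrid[:size, :size] - size // 2
    psf = (xx**2 + yy**2 <= radius**2).astype(float)
    return psf / psf.sum()

def line_psf(length, size=21):
    psf = np.zeros((size, size))
    c = size // 2
    psf[c, c - length // 2 : c + length // 2 + 1] = 1.0  # horizontal streak
    return psf / psf.sum()

defocus_kernel = disk_psf(radius=4)   # the shape defocus-oriented tools assume
motion_kernel = line_psf(length=9)    # the shape measured shake data would describe
```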
The article in the first link of this thread deals only with the blur caused by "camera" motion, but "subject" motion blur may also be effectively treated (sometimes) by the same deconvolution algorithm, although obviously subject motion is not measured by the above sensors.
I am not trying to trivialize the research and results from the above link, they are a significant development and we will ultimately see positive results from this sort of technology. It is a partial solution for the future automation of the digital reduction of one type of image blur.
Moderator of the Cameras and Accessories forums
Understood.
My understanding is like this - if you are handwriting a note at your desk in your study you will have no trouble forming the letters to be legible. However, if you are writing that note on the seat of a 4WD travelling quickly over very rough terrain, chances are your writing will be illegible no matter the amount of effort you make to make it not so.

Now imagine that a geeky type says to you, "No worries, just come with me." He takes you to a lab and straps you into something like those machines astronauts train in, which throw you around with the force of several Gs. The geek tells you to write your note while you and the machine are doing the motions. When you're done he takes you to a printer and shows you the note you have written, and you see it's even more legible than the copy you did at home at your desk.

How come? Well, all the extreme movements of the astronaut machine are computer controlled, and their effects on your handwriting can be removed simply by subtracting those induced movements. Those movements are also so extreme that they cancel the comparatively imperceptible shaking of your hand when you are writing at your desk, so the result is even better than that. Those extreme movements would also very much attenuate the bumpy car ride, so a pretty fair copy could be got if you were in the machine in the car as it was travelling quickly over rough ground.
This is something like what the visual system does. It introduces extreme noise into the perception of a signal, noise which overwhelms the noise already packaged with the signal; it then reverse-engineers its own noise contribution and produces a sharp signal with the native noise much attenuated. One way it does this is by analysing and comparing multiple streams of the signal-noise package as it is filtered through its own noise-producing processes. This concept is called redundancy of input.
Pretty smart, and the opposite of what the engineers are attempting in R's paper, which is to measure the noise native to the signal and then compensate for it.
Neil
http://www.behance.net/brosepix
Interesting. Besides being much more sophisticated, the brain has the advantage of having a time dimension, which is lacking (today) in still photography. Imagine a future technology in which exposure time is reduced to nanoseconds and each shutter click produces, say, 30 frames, bracketed by focal length, aperture and exposure. It would then be possible--using the peta-Hertz processor in your laptop--to algorithmically eliminate camera shake, subject motion and focus issues as well as to enable DOF adjustments in post. It would not be as elegant as human perception, but brute force can accomplish a lot if it is brute enough.
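If you wanted to play with the brute-force idea today, the simplest version is already possible with burst shooting: align the frames and average them, trading many short noisy exposures for one clean one. A rough sketch using scikit-image's phase correlation; the file names and the shift-only alignment are simplifying assumptions:

```python
# Rough sketch of the brute-force multi-frame idea: take a burst of short
# exposures, align them by phase correlation, and average them.
import numpy as np
from skimage import io, img_as_float
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

# Hypothetical burst of 30 frames saved as burst_00.png ... burst_29.png.
frames = [img_as_float(io.imread(f"burst_{i:02d}.png", as_gray=True))
          for i in range(30)]

reference = frames[0]
aligned = [reference]
for frame in frames[1:]:
    offset, _, _ = phase_cross_correlation(reference, frame)  # (dy, dx) shift
    aligned.append(nd_shift(frame, offset))

stacked = np.mean(aligned, axis=0)   # camera shake averaged out, noise reduced
io.imsave("stacked.png", (np.clip(stacked, 0, 1) * 255).astype(np.uint8))
```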
http://photography.bhinsights.com/content/photographic-eye.html
http://www.danalphotos.com
http://www.pluralsight.com
http://twitter.com/d114
Natural selection is responsible for every living thing that exists.
D3s, D500, D5300, and way more glass than the wife knows about.