Light Field camera - pretty amazing
Just saw this on Engadget and was amazed by the video demo. A ground-up re-imagining of how to capture light that fundamentally changes depth of field...after the shot is taken :huh
Link to video: http://video.allthingsd.com/video/camera-start-up-offers-a-whole-new-perspective/5B23C591-FEE6-4DED-8C15-281FC74542A5
Eyal
My site | Non-MHD Landscapes |Google+ | Twitter | Facebook | Smugmug photos
Comments
I've never heard of this before, but it seems interesting. The sample gallery is pretty neat. They seem very light on the technical details as well as pricing/availability, but it seems an interesting concept. Am I just out of it? Has this been hashed to death already?
http://www.lytro.com/
My site 365 Project
My Website
My 50 f/1.8 lives on my Nikon D80 full time.
Next Lens: Tamron 28-75 f/2.8
This seems to be just another gimmick that will be thrown on a point-and-shoot to help people taking snapshots who don't really want to learn the basics; they just want to snap pictures.
That is exactly what I was thinking, looking at the "how it works" part. Besides, I would be very unhappy with the sharpness it provides in the test images, too.
My Site
My Facebook
There's a link to a full dissertation, up to Stanford University snuff, no less! Very interesting stuff there, and very advanced optics.
Racer - did you read the details? Capturing the directionality of light at several loci is not a "gimmick" - the company is merely in its infancy. I'm guessing they're just getting started with some VC money.
moderator of: The Flea Market [ guidelines ]
Does something like this affect pro photogs? Is this kind of technology yet another excuse for people not to hire a pro? Will it change the way pros shoot by taking away the exclusivity of fast-glass, shallow-DOF shots? Will this technology be in our camera bodies soon?
Hmmmm. I thought this might make for some interesting discussion.
Matt
original article here
Bodies: Canon 5d mkII, 5d, 40d
Lenses: 24-70 f2.8L, 70-200 f4.0L, 135 f2L, 85 f1.8, 50 1.8, 100 f2.8 macro, Tamron 28-105 f2.8
Flash: 2x 580 exII, Canon ST-E2, 2x Pocket Wizard flexTT5, and some lower end studio strobes
Then again, I don't know much about sensors and even less about software
Will the light field sensors replace the current ones in future DSLRs?
http://www.pcmag.com/article2/0,2817,2387422,00.asp
More:
http://www.lytro.com/science_inside
I think the trick is that it uses multiple lenses. Either way, none of us is going to be able to understand it; we just have to wait and see. It sure sounds interesting, though.
Right, and there is no way to cram three or four slabs of L-glass onto the same body, but maybe it could be done in a DSLR by having multiple sensors at different focal planes or angles? That would involve splitting the lens image with mirrors or whatever, which would cut down on light, but high-ISO technology is going nuts right now too... I wasn't a pro in the film days, but if this surfaces I will be feeling a little of what film pros felt when Digital Rebels became everyday items.
Smugger for life!
Most Popular Photos
http://www.lytro.com/renng-thesis.pdf
Carry on.
Moderator of the Cameras and Accessories forums
Yes, the dissertation is linked, and I'm thumbing through it. The theory behind the technology is very cool, although most photographers aren't likely to delve into a 200 page dissertation to look for some stats on a new camera. But there are almost no details about this new camera itself on the company's website. I've read Thom Hogan's thoughts (front page today, will slide to the archive page in a few days or so), and he indicates it's probably something like a square format, roughly 600x600px image, and that the prototype needed a 16MP sensor to generate a 90kP image.
I'm interested in what the proposed market is. Are they going after high end DSLRs? P&S? Is this going to be a $3k or $300 camera? Is it all about sharing these "live" photos online or do they think we'll be shooting with this, coming back to PP and generating the final image there rather than in-camera?
My comment that it was light on detail was more because the website is "look at these neat pictures, click here, watch the focus change, and hey, register here to be notified when the camera is available." I'm intrigued enough to wonder more about the physical details of the camera, file management, image processing, price, etc. I understand it's a startup and they don't have product to ship yet, but if they're mature enough to tease the product, I'd just like to know more about it, that's all.
Edit:
Oh, and my favorite, the Nikon Nikkorner product line.
That is three threads merged now. Ha!
Phil
"You don't take a photograph, you make it." ~Ansel Adams
No, the lightfield camera does not take multiple pictures. Those of you who noticed that the Flash demo only had a few planes of focus are correct, but that's because whoever put the Flash demo together chose to generate only a few images from the lightfield data.

The thing to remember here is that the raw lightfield data is not an image; it's data from which images can be generated, just as the RAW files from your camera today aren't really images either.

Imagine if someone put together a demo showing how a RAW image can be adjusted for white balance after the fact. They might build a Flash demo where you can click to make the image warmer or cooler, but they'd probably just generate a few samples at different settings and let you choose between them in the demo. This doesn't mean there are only a few white balance settings you can use on a RAW file; it just means the demo is simplified to get the point across without taking too long to build or download.
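To make the RAW analogy concrete, here is a toy sketch (all names and numbers are hypothetical, not from Lytro or the demo): one raw capture can be rendered many ways, but a demo would typically bake out just a few presets.

```python
import numpy as np

# Toy "raw" capture: one exposure, arbitrarily many possible renderings.
raw = np.random.rand(4, 4, 3)

def render(raw, wb_gains):
    """Apply per-channel white-balance gains to raw sensor data."""
    return np.clip(raw * np.asarray(wb_gains), 0.0, 1.0)

# A demo might pre-generate only a few presets, even though any gain works:
presets = {"warm": (1.2, 1.0, 0.8),
           "neutral": (1.0, 1.0, 1.0),
           "cool": (0.8, 1.0, 1.2)}
samples = {name: render(raw, g) for name, g in presets.items()}
```

A few clickable samples in a demo no more exhausts the raw data than these three presets exhaust the possible white balances.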
The huge, huge problem with the lightfield camera is that the resolution of the images it generates is VASTLY lower than the original sensor. You could expect a 16 MP lightfield camera to give you final images of about 0.16 MP -- no kidding! So for now, with today's sensor technology, even in medium format, you can't get even a 6 MP image out of this technology. The processing time is also rather excessive, especially if you're trying to do it in-camera.
It is also not true that this camera doesn't require focusing. It can produce results only within a range of apertures; at the wide end it's limited by the original taking lens, and the other end is limited by the accuracy of the angle information in the lightfield data, and then you lose some additional angle resolution in computation. Realistically you can't get below f/32, and if you were shooting your camera today at f/32 you'd still have to focus. Therefore you still have to focus the lightfield camera too.
I think, as others have noted (see Ken Rockwell's comments yesterday), that this is basically another phenomenon like the Foveon sensor -- an idea that sounds cool at first but turns out not to be all that useful or cost-effective in practice. It's a great way for the inventor to burn through millions of dollars of someone else's money and build a reputation for himself as a tech innovator without actually ever producing a successful product.
Got bored with digital and went back to film.
First, some quick conclusions: in terms of resolution, rewind back to the kilopixel days. There's going to have to be a lot of innovation before you start seeing the amount of resolution WE demand from pictures: 10 to 100 gigapixel sensors, the ability to move that amount of data quickly, and the ability to store the amount of information required. Assuming Moore's law is still valid, we're talking 5 years or more.
My analogy, for what it’s worth:
Just to amp up the size to make things easier, imagine a large view camera with a focusing plate in place. Any point on the ground glass surface shows one piece of information: the light that came from the one object that's focused at that point. Our lenses go to great lengths to focus all the light from that one object at that one point. For this example, let's say the object is a marble and we have an 8x10 view camera.
Now take the entire view camera away, lenses and all except the frame that holds the ground glass. Now take away the ground glass, leaving a frame hanging in mid air. Put your eye anywhere within the plane of the ground glass and look around – you can see the entire marble and more. Move your eye to another position (still within the 8x10 frame) – you can still see the entire marble, but from a slightly different perspective. Leaving your eye in one place you can look around – you can look at light coming from different directions (perhaps other marbles behind your main subject). Imagine you could capture not only the light that falls on that one spot where your eye is (like the ground glass of the focusing screen) but also the DIRECTIONALITY of the light – where it was coming from (similar to you looking around). Every point within the picture would have ALL the required information to reconstruct the picture from that one point’s perspective!
This “every point has all the information” is why you can get that 3D moving effect. On the far right of the frame you can see a bit more of one side of the marble, and from the left, the other.
There’s a practical limit to the angle of incidence of the light that you can record, so to tweak the analogy move your eye back about a foot behind the 8x10 frame and move your eye around within an 8x10 area projected back from the frame. And you get some idea about what’s happening.
Knowing the color of the light and the direction it came from, you can reconstruct an image focused at any distance. This isn't a multiple-lens or multiple-picture trick.
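The reconstruction step can be sketched as "shift-and-add" refocusing: treat each angular sample as a slightly different view, shift each view in proportion to its angular offset, and average. This is a toy illustration of the idea on synthetic data, not Lytro's actual pipeline; the array layout and the `alpha` parameter are my assumptions.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocus of a 4D light field L[u, v, s, t].

    Each angular sample (u, v) is a slightly different view of the
    scene; shifting every view in proportion to its angular offset and
    averaging synthesizes focus at a new depth (alpha sets the shift)."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 directional views of a 64x64 scene
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, alpha=1.0)    # one focal plane...
far = refocus(lf, alpha=-1.0)    # ...and another, from the same capture
```

The same captured data yields any number of focal planes after the fact, which is exactly the "every point has all the information" property described above.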
Continued...
<== Mighty Murphy, the wonder Bouv!
But the problem is that to be perfect, you'd have to capture an infinite number of angles of light on a zero-sized area. Can't be done, so how much sampling can you pack into the smallest area possible? Backing your eye off a foot behind the frame limited the angle of incidence of light (only the light that came through the 8x10 frame is of interest), but that's still a lot (well, for the math types out there, a fraction of infinity is still infinity :-)) of information to gather. So you start by making compromises, sampling only a finite number of angles, which is driven by how dense the sensor is.
So my prediction is that the first cameras will have large sensors, on the order of 20 MP, but will only produce pictures in the kilopixel range. SLR price, kilopixel resolution.
That's what I was able to glean from a quick scan of the dissertation; feel free to correct me, because I could be really off base here!
Roak
Ps. So many replies while I was writing this!
Good post, Craig