Light Field camera - pretty amazing

eoren1 Registered Users Posts: 2,391 Major grins
edited July 27, 2011 in Cameras
Just saw this on Engadget and was amazed by the video demo. A ground-up re-imagining of how to capture light that fundamentally changes depth of field... after the shot is taken.

Link to video: http://video.allthingsd.com/video/camera-start-up-offers-a-whole-new-perspective/5B23C591-FEE6-4DED-8C15-281FC74542A5

Comments

  • MJoliat Registered Users Posts: 34 Big grins
    edited June 22, 2011
    OK. That is really cool! It will be interesting to see how they do this, and what kind of response then comes from the "big boys."
  • cab.in.boston Registered Users Posts: 634 Major grins
    edited June 22, 2011
    New(?) camera technology
    I've never heard of this before, but it seems interesting. The sample gallery is pretty neat. They're very light on the technical details as well as pricing/availability, but it's an intriguing concept. Am I just out of it? Has this been hashed to death already?

    http://www.lytro.com/
    Father, husband, dog lover, engineer, Nikon shooter
    My site 365 Project
  • cab.in.boston Registered Users Posts: 634 Major grins
    edited June 22, 2011
    Funny... I just posted about this on the Camera forum as well. Looks interesting.
  • Sorin Registered Users Posts: 29 Big grins
    edited June 22, 2011
    Someone just posted another thread this morning, so not hashed completely to death. Now I need to use your link and see all the details.
    "I know you don't understand. Let me show you......"
    My Website
    My 50 f/1.8 lives on my Nikon D80 full time.
    Next Lens: Tamron 28-75 f/2.8
  • racer Registered Users Posts: 333 Major grins
    edited June 22, 2011
    Seems like a gimmick to me, and the so-called science really isn't science, nor does it make a bit of sense when compared to real science. Look at the example images and you will notice that you can only focus the photos at predetermined points, not just anywhere in the image, and you can't actually change the focus. Pretty much, it seems to be like taking a few pics at different focus points and simply combining the images, and letting you select the correctly focused image. That doesn't seem to be anything "revolutionary"; a poor focus will still be a poor focus, and you're SOL if it didn't focus on one of the things you really wanted to focus on.
    This seems to be just another gimmick that will be thrown on a point and shoot to try to help people taking snapshots, who really don't want to learn the basics and just want to snap pictures.
    Todd - My Photos
  • Foques Registered Users Posts: 1,951 Major grins
    edited June 22, 2011
    racer wrote: »
    seems like a gimmick to me, and the so called science really isn't science... your SOL if it didn't focus on one of the things you really wanted to focus on.

    That is exactly what I was thinking, looking at the "how it works" part. Besides, I would be very unhappy with the sharpness it provides in the test images, too.
    Arseny - the too honest guy.
    My Site
    My Facebook
  • DoctorIt Administrators Posts: 11,951 moderator
    edited June 22, 2011
    cab.in.boston wrote: »
    They seem very light on the technical details...
    Really?!

    There's a link to a full dissertation, up to Stanford University snuff, no less! Very interesting stuff there, and very advanced optics.

    Racer - did you read the details? Capturing the directionality of light at several loci is not a "gimmick" - the company is merely in its infancy. I'm guessing they're just getting started with some VC money.
    Erik
    moderator of: The Flea Market [ guidelines ]


  • DoctorIt Administrators Posts: 11,951 moderator
    edited June 22, 2011
    I've gone ahead and merged the two threads, here in Wide Angle (in case anyone is wondering why the above reads a little wonky)


  • mmmatt Registered Users Posts: 1,347 Major grins
    edited June 22, 2011
    What's next? Light Field Camera
    Does something like this affect pro photogs? Is this kind of technology yet another excuse for people not to hire a pro? Will this change the way pros shoot by taking away the exclusivity of fast-glass/low-DOF shots? Will this technology be on our bodies soon?


    Hmmmm. I thought this might make for some interesting discussion.


    Matt



    original article here



    cnn wrote:
    A camera that lets you focus after you've already taken the photo? And lets you focus anywhere within the image you want? That's got people talking and, according to its creator, is the start of "a picture revolution."
    Oh yeah -- it can also transfer a regular photo to 3-D.
    Lytro, a company launched Tuesday by 31-year-old entrepreneur Ren Ng, promises that camera will be released soon.
    "I am thrilled to finally draw back the curtain and introduce our new light field camera company, one that will forever change how everyone takes and experiences pictures," Ng wrote on the startup's blog. "Lytro's company launch is truly the start of a picture revolution."
    Apparently, some sharks are buying the buzz. Lytro has reportedly already raised $50 million in venture capital, an impressive feat for a company that just debuted.
    Ben Horowitz of investment firm Andreessen Horowitz wrote on his blog that Ng "walked into the firm and blew my brains to bits."
    "Light field research has been going on for some time, but Ren has figured out a way to fit the technology into your pocket," wrote Horowitz, whose company is an investor in Lytro.
    The gadget, which Lytro is calling a "light field camera," uses multiple internal lenses to capture much more light than a normal camera, at more than one angle.
    That, the company says, lets a photographer concentrate on merely framing a shot, or just clicking fast and furiously. Then he or she can use software to focus later, or even create multiple images with different focuses.
    Lytro calls the result "living pictures."
    Experiment with this focus software on Lytro's site.
    So far, people who have seen the product are impressed.
    "A Mountain View start-up is promising that its camera, due later this year, will bring the biggest change to photography since the transition from film to digital," writes Ina Fried for AllThingsD. "Ordinarily, I'm turned off by such hyperbole, but after having seen a demo from Lytro, that statement seems downright reasonable."
    The concept was the subject of Ng's doctoral thesis at Stanford in 2007. He says that, since then, he's been working to turn the concept into a practical consumer product.
    "What began in a lab at Stanford University has transformed into a world-class company, forty-four people strong, sparkling with talent, energy and inspiration," he wrote.
    The Lytro site doesn't show any images of the camera itself. The company isn't announcing a price, its number of megapixels or other technical details yet. People interested in possibly buying one when they become available can submit their e-mail addresses to get updates and reserve one if they choose to buy.
    The New York Times reports that the camera is due out later this year.
    My Smugmug site

    Bodies: Canon 5d mkII, 5d, 40d
    Lenses: 24-70 f2.8L, 70-200 f4.0L, 135 f2L, 85 f1.8, 50 1.8, 100 f2.8 macro, Tamron 28-105 f2.8
    Flash: 2x 580 exII, Canon ST-E2, 2x Pocket Wizard flexTT5, and some lower end studio strobes
  • ThatCanonGuy Registered Users Posts: 1,778 Major grins
    edited June 22, 2011
    I don't see how it's physically possible to have everything in focus unless you stop down to a tiny, tiny aperture like f/32. If it uses a lens that's super-stopped-down and then throws some of it OOF with software, then I suppose that's possible.

    Then again, I don't know much about sensors and even less about software.
  • Cambyses Registered Users Posts: 141 Major grins
    edited June 22, 2011
    Cool Start-up on Light Field Camera
    Will the light field sensors replace the current ones in future DSLRs?

    http://www.pcmag.com/article2/0,2817,2387422,00.asp

    More:
    http://www.lytro.com/science_inside
  • Dirquist Registered Users Posts: 81 Big grins
    edited June 22, 2011
    ThatCanonGuy wrote: »
    I don't see how it's physically possible to have everything in focus unless you stop down to a tiny, tiny aperture like f32. If it uses a lens that's super-stopped-down and then throws some of it OOF with software, then I suppose that's possible.

    I think the trick is that it uses multiple lenses. Either way, none of us are going to be able to understand it, we just have to wait and see. It sure sounds interesting though.
  • mmmatt Registered Users Posts: 1,347 Major grins
    edited June 22, 2011
    Dirquist wrote: »
    I think the trick is that it uses multiple lenses. Either way, none of us are going to be able to understand it, we just have to wait and see. It sure sounds interesting though.

    Right, and there's no way to cram 3 or 4 slabs of L-glass onto the same body, but maybe it can be done in a DSLR by having multiple sensors at different focal planes or angles? That would involve splitting the lens image with mirrors or whatever, cutting down on light, but high-ISO technology is going nuts right now too... I wasn't a pro in the film days, but if this surfaces I will be feeling a little of what film pros felt when Digital Rebels became everyday items.
  • SimpsonBrothers Registered Users Posts: 1,079 Major grins
    edited June 22, 2011
    That sensor will be awesome in a 5DMKIII ;)
  • Cambyses Registered Users Posts: 141 Major grins
    edited June 22, 2011
    Oh, I didn't see this thread before I started another one on the same subject. In any case, this is based on the light field imaging technology and is well described in their founder's dissertation:

    http://www.lytro.com/renng-thesis.pdf
  • Richard Administrators, Vanilla Admin Posts: 19,960 moderator
    edited June 22, 2011
    DoctorIt wrote: »
    There's a link to a full dissertation, up to Stanford University snuff, no less! Very interesting stuff there, and very advanced optics
    Thanks for the link, Doc. I took a brief look, and it's mostly way above my pay grade, but the writing is quite clear so I may spend some more time with it. It's an intriguing, elegant approach. I had rather imagined that someone would come up with some brute-force approach based on focus bracketing, much like exposure bracketing. This is really quite different. Of course, the devil is in the details, and we'll just have to wait and see how well real-world products perform.
  • ziggy53 Super Moderators Posts: 24,119 moderator
    edited June 22, 2011
    I merged these 2 threads and changed the thread name.

    Carry on.
    ziggy53
    Moderator of the Cameras and Accessories forums
  • mmmatt Registered Users Posts: 1,347 Major grins
    edited June 22, 2011
    Thanks Ziggy, and no worries Cambyses. I should have used a more literate thread title.
  • cab.in.boston Registered Users Posts: 634 Major grins
    edited June 22, 2011
    DoctorIt wrote: »
    Really?!

    There's a link to a full dissertation, up to Stanford University snuff, no less! Very interesting stuff there, and very advanced optics.

    Yes, the dissertation is linked, and I'm thumbing through it. The theory behind the technology is very cool, although most photographers aren't likely to delve into a 200-page dissertation to look for stats on a new camera. But there are almost no details about the new camera itself on the company's website. I've read Thom Hogan's thoughts (front page today; they will slide to the archive page in a few days or so), and he indicates it's probably something like a square-format image of roughly 600x600px, and that the prototype needed a 16MP sensor to generate a 90kP image.

    I'm interested in what the proposed market is. Are they going after high end DSLRs? P&S? Is this going to be a $3k or $300 camera? Is it all about sharing these "live" photos online or do they think we'll be shooting with this, coming back to PP and generating the final image there rather than in-camera?

    My comment that it was light on detail was more because the website is "look at these neat pictures, click here, watch the focus change, and hey, register here to be notified when the camera is available." I'm intrigued enough to wonder more about the physical details of the camera, file management, image processing, price, etc. I understand it's a startup and they don't have product to ship yet, but if they're mature enough to tease the product, I'd just like to know more about it, that's all.
  • Richard Administrators, Vanilla Admin Posts: 19,960 moderator
    edited June 22, 2011
    OK, so now I have merged the two already merged threads into one. Let's hope nobody starts another.
  • Richard Administrators, Vanilla Admin Posts: 19,960 moderator
    edited June 22, 2011
    ThatCanonGuy wrote: »
    I don't see how it's physically possible to have everything in focus unless you stop down to a tiny, tiny aperture like f32...
    The approach is based on an entirely different computational strategy than that of current cameras. It also requires different hardware, especially many more megapixels in the sensor (though not necessarily a larger one) and different micro-lenses in front of it. Potentially, it can correct lens aberration issues as well as DOF and focus problems. But so far, we don't know much about the tradeoffs in other aspects of image quality. Judging from the tremendous amount of noise in the press today, I'd venture to guess that the initial product offering will be directed at low-end consumer products. But given time and resources, the technology could prove significant for professional photography as well.
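    To picture the hardware being described here: in a plenoptic design like the one in Ng's thesis, a microlens array sits in front of the sensor, each microlens covers a small block of pixels, and each pixel under a microlens records light arriving from a different direction. A toy numpy sketch of how a raw sensor readout becomes a 4D light field (the dimensions are illustrative assumptions, not Lytro's actual specs):

```python
import numpy as np

# Toy plenoptic sensor: 1200x1200 pixels behind a 120x120 microlens array,
# so each microlens covers a 10x10 block of directional samples.
# (Illustrative numbers only -- not Lytro's actual specs.)
sensor = np.random.rand(1200, 1200)

# Reshape into a 4D light field indexed by
# (microlens row, microlens col, direction row, direction col).
lf = sensor.reshape(120, 10, 120, 10).transpose(0, 2, 1, 3)

# Picking ONE direction from every microlens yields a "sub-aperture" image:
# the scene as seen through one small region of the main lens.
sub_aperture = lf[:, :, 4, 4]
print(sub_aperture.shape)  # (120, 120) -- spatial resolution = microlens count
```

    Note that the spatial resolution of any generated image is set by the microlens count, not the pixel count, which is where the resolution tradeoff discussed later in this thread comes from.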
  • ThatCanonGuy Registered Users Posts: 1,778 Major grins
    edited June 22, 2011
    Remember the Invisible Camera and the digital film cartridge?


    Edit:

    Oh, and my favorite, the Nikon Nikkorner product line.
  • mmmatt Registered Users Posts: 1,347 Major grins
    edited June 22, 2011
    DoctorIt wrote: »
    I've gone ahead and merged the two threads, here in Wide Angle (in case anyone is wondering why the above reads a little wonky)

    That is 3 threads merged now. Ha!
  • lifeinfocus Registered Users Posts: 1,461 Major grins
    edited June 23, 2011
    One article mentioned that Adobe is working on this technology too. In my opinion, this can move more output to online digital and less to print. Of course, the initial camera is reported to be only a point and shoot, but I would anticipate this to migrate to more expensive pro-level cameras. While I have only been working with digital photography for a few years, I spent many years working in IT, including research, and I think this will be a major step in digital online photography.
    Phil
    http://www.PhilsImaging.com
    "You don't take a photograph, you make it." ~Ansel Adams
  • craig_d Registered Users Posts: 911 Major grins
    edited June 23, 2011
    Yeesh, so many comments so quickly, and so little information, and so much of it wrong!

    No, the lightfield camera does not take multiple pictures. Those of you who noticed that the Flash demo only had a few planes of focus are correct, but that's because whoever put the Flash demo together chose only to generate a few images from the lightfield data. The thing to remember here is that the raw lightfield data is not an image, it's data from which images can be generated, just as the RAW files from your camera today aren't really images either. Imagine if someone put together a demo showing how a RAW image can be adjusted for white balance after the fact. They might generate a Flash demo where you can click to make the image hotter or cooler, but they'd probably just generate a few samples at different settings and let you choose between them in the demo. This doesn't mean that there are only a few white balance settings you can use on a RAW file, it just means the demo is simplified to get the point across without taking too long to build or download.

    The huge, huge problem with the lightfield camera is that the resolution of the images it generates is VASTLY lower than the original sensor. You could expect a 16 MP lightfield camera to give you final images of about 0.16 MP -- no kidding! So for now, with today's sensor technology, even in medium format, you can't get even a 6 MP image out of this technology. The processing time is also rather excessive, especially if you're trying to do it in-camera.
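    That resolution hit is easy to sanity-check: if each output pixel costs one microlens, and each microlens spends an NxN block of sensor pixels on directional samples, spatial resolution drops by a factor of N². A back-of-envelope sketch (the 10x10 angular grid is an assumption chosen to match the ~100x factor claimed above, not a published Lytro spec):

```python
# Back-of-envelope for the 16 MP -> 0.16 MP claim: each microlens trades
# an NxN block of sensor pixels (directional samples) for one output pixel.
def output_megapixels(sensor_mp, n_angular):
    """Approximate output resolution of a plenoptic camera."""
    return sensor_mp / n_angular ** 2

print(output_megapixels(16, 10))  # 0.16 MP with 10x10 directions per lenslet
print(output_megapixels(16, 4))   # 1.0 MP with a coarser 4x4 angular grid
```

    The second line shows the design knob: a coarser angular grid buys back spatial resolution at the cost of refocusing range and accuracy.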

    It is also not true that this camera doesn't require focusing. It can produce results only within a range of apertures; at the wide end it's limited by the original taking lens, and the other end is limited by the accuracy of the angle information in the lightfield data, and then you lose some additional angle resolution in computation. Realistically you can't get below f/32, and if you were shooting your camera today at f/32 you'd still have to focus. Therefore you still have to focus the lightfield camera too.

    I think, as others have noted (see Ken Rockwell's comments yesterday), that this is basically another phenomenon like the Foveon sensor -- an idea that sounds cool at first but turns out not to be all that useful or cost-effective in practice. It's a great way for the inventor to burn through millions of dollars of someone else's money and build a reputation for himself as a tech innovator without actually ever producing a successful product.
    http://craigd.smugmug.com

    Got bored with digital and went back to film.
  • roakey Registered Users Posts: 81 Big grins
    edited June 23, 2011
    Ok, I’ve done a quick scan of the dissertation and think I can come up with a layman’s description of what’s at the heart of the process. Before any of you think I’m NOT a layman, think again; I only have a BS, and that’s what you might be getting here! :) What’s going on here is basically ray tracing, but backwards.

    First, some quick conclusions: In terms of resolution, rewind back to the kilopixel days. There are going to have to be a lot of innovations before you start seeing the amount of resolution WE demand from pictures: 10 to 100 gigapixel sensors, the ability to move that amount of data quickly, and the ability to store the amount of information required. Assuming Moore’s law is still valid, we’re talking 5 years or more.

    My analogy, for what it’s worth:

    Just to amp up the size to make things easier, imagine a large view camera with a focusing plate in place. Any point on the ground glass surface shows 1 piece of information, the light that came from the one object that’s focused at that one point. Our lenses go to great lengths to focus all the light from that one object at that one point. For this example let’s say that this object’s a marble and we have an 8x10 view camera.

    Now take the entire view camera away, lenses and all except the frame that holds the ground glass. Now take away the ground glass, leaving a frame hanging in mid air. Put your eye anywhere within the plane of the ground glass and look around – you can see the entire marble and more. Move your eye to another position (still within the 8x10 frame) – you can still see the entire marble, but from a slightly different perspective. Leaving your eye in one place you can look around – you can look at light coming from different directions (perhaps other marbles behind your main subject). Imagine you could capture not only the light that falls on that one spot where your eye is (like the ground glass of the focusing screen) but also the DIRECTIONALITY of the light – where it was coming from (similar to you looking around). Every point within the picture would have ALL the required information to reconstruct the picture from that one point’s perspective!

    This “every point has all the information” is why you can get that 3D moving effect. On the far right of the frame you can see a bit more of one side of the marble, and from the left, the other.

    There’s a practical limit to the angle of incidence of the light that you can record, so to tweak the analogy, move your eye back about a foot behind the 8x10 frame and move it around within an 8x10 area projected back from the frame, and you get some idea of what’s happening.

    Knowing the color of the light and the direction it came from, you can reconstruct an image focused at any distance. This isn’t a multiple-lens or multiple-picture trick.

    Continued...
    [email]roakeyatunderctekdotcom[/email]
    <== Mighty Murphy, the wonder Bouv!
  • roakey Registered Users Posts: 81 Big grins
    edited June 23, 2011
    ...continued (hit a character limit)

    But the problem is that, to be perfect, you have to capture an infinite number of angles of light on a zero-sized area. Can’t be done, so how much sampling can you pack into the smallest area possible? Backing your eye off a foot behind the frame limited the angle of incidence of the light (only the light that came through the 8x10 frame is of interest), but that’s still a lot (well, for the math types out there, a fraction of infinity is still infinity :-)) of information to gather. So you start by making compromises, sampling only a finite number of angles, which is driven by how dense the sensor is.

    So my prediction is that the original cameras will have large sensors, on the order of 20 MP, but will only produce pictures in the kilopixel range. SLR price, kilopixel resolution.
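    The "reconstruct an image at any distance" step described above is, in the published light-field literature, a shift-and-add: every directional view is shifted in proportion to its angle and a chosen focal depth, then averaged. A toy numpy sketch with made-up dimensions and integer-pixel shifts (a real implementation would interpolate fractional shifts):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add synthetic refocus of a toy 4D light field.

    lf has axes (s, t, u, v): spatial position under each microlens and
    angular direction. Each directional view lf[:, :, u, v] is shifted in
    proportion to its angle and the chosen focal depth, then averaged.
    alpha selects the virtual focal plane (alpha=0 leaves focus unchanged).
    """
    S, T, U, V = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift for simplicity.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[:, :, u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 64x64 spatial samples, 5x5 directions (made-up numbers)
lf = np.random.rand(64, 64, 5, 5)
img_near = refocus(lf, alpha=1.5)   # refocus toward one depth
img_far = refocus(lf, alpha=-1.5)   # ...and toward another
```

    Objects whose per-view shifts line up with the chosen alpha reinforce each other and come out sharp; everything else averages into blur, which is exactly the "focus after the fact" effect in the demo.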

    That’s what I was able to glean from a quick scan of the dissertation; feel free to correct me because I could be really off base here!

    Roak

    Ps. So many replies while I was writing this!
  • Richard Administrators, Vanilla Admin Posts: 19,960 moderator
    edited June 23, 2011
    A few things to keep in mind. The thesis was submitted five years ago and the project involved building a prototype with what were probably very limited funds. It is reasonable to assume that Ng has made progress in refining and optimizing his algorithms and possibly hardware as well. For legal and commercial reasons, he's not going to be discussing that much for now. There's a big gap between having big ideas and creating a successful company, but it has been known to happen at Stanford from time to time. I think we need to just sit tight and see how this develops. From what we think we know, it's probably safe to assume that pros aren't going to be interested in the first products, but that doesn't mean that we'll never be able to take advantage of the technology. Wasn't digital photography too crude to be taken seriously at first?
  • ziggy53 Super Moderators Posts: 24,119 moderator
    edited June 23, 2011
    While this video is pretty old, I believe it graphically depicts the "Light Field" process and basic premise used in the Lytro camera design: [embedded video]

  • DoctorIt Administrators Posts: 11,951 moderator
    edited June 23, 2011
    craig_d wrote: »
    Yeesh, so many comments so quickly, and so little information, and so much of it wrong!

    No, the lightfield camera does not take multiple pictures...
    Welcome to the internet!


    Good post, Craig!
