Lytro light field camera

DavidTO Registered Users, Retired Mod Posts: 19,160 Major grins
edited June 2, 2012 in Cameras
They've got their order page up. It's a light field camera, explanation of the science here.

Focus after the fact. Order now, ships in the spring of 2012. Mac only for now, Windows promised.

Mine's been ordered! :D
Moderator Emeritus
Dgrin FAQ | Me | Workshops

Comments

  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 19, 2011
    DavidTO wrote: »
    They've got their order page up. It's a light field camera, explanation of the science here.

    Focus after the fact. Order now, ships in the spring of 2012. Mac only for now, Windows promised.

    Mine's been ordered! :D

    Or you could use hyperfocal distance and selective blur (e.g. on1 FocalPoint) with all the advantages of RAW and a real lens and camera! :D

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • ian408 Administrators Posts: 21,948 moderator
    edited October 20, 2011
    It's also worth reading the summary of how it works and, if you're interested, Ren Ng's PhD dissertation.
    Moderator Journeys/Sports/Big Picture :: Need some help with dgrin?
  • Andy Registered Users Posts: 50,016 Major grins
    edited October 20, 2011
    Ordered.
  • Moogle Pepper Registered Users Posts: 2,950 Major grins
    edited October 20, 2011
    Ordering one myself. It looks like a fun addition for wedding blog entries.
    Food & Culture.
    www.tednghiem.com
  • Richard Administrators, Vanilla Admin Posts: 19,967 moderator
    edited October 20, 2011
    Here's an intriguing quote from the Science Inside page:
    The Light Field Engine travels with every living picture as it is shared, letting you refocus pictures right on the camera, on your desktop and online.

    I wonder how that's going to work and whether it will preclude other, traditional post-processing. Anybody know?
  • shadowblade Registered Users Posts: 23 Big grins
    edited October 20, 2011
    Richard wrote: »
    Here's an intriguing quote from the Science Inside page:
    The Light Field Engine travels with every living picture as it is shared, letting you refocus pictures right on the camera, on your desktop and online.

    I wonder how that's going to work and whether it will preclude other, traditional post-processing. Anybody know?

    It shouldn't preclude other post-processing.

    It seems like it's just another step in RAW processing - firstly, set your depth of field and plane (or planes) of focus, along with white balance, noise reduction, etc., then convert to TIFF, then edit as normal.

    This thing might seem like a toy now, but, given time to develop, it may have potential - with bigger sensors, better lenses, better microlenses, higher ISO, etc., there's no reason it can't eventually supplant standard sensors.

    After all, it's basically a RAW file with extra information - vector, as well as colour and intensity...
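    To make that concrete: the refocusing step is usually described (e.g. in Ren Ng's dissertation mentioned above) as "shift-and-add" over the sub-aperture views. Here's a minimal sketch of the idea, assuming the light field has already been decoded into a grid of grayscale sub-aperture images; the decode_lfp call at the end is purely hypothetical:

    ```python
    import numpy as np

    def refocus(subaperture, slope):
        """Shift-and-add refocus over a (U, V, H, W) grid of grayscale
        sub-aperture views. `slope` picks the synthetic focal plane: each
        view is shifted in proportion to its (u, v) offset from the
        central view, then all views are averaged."""
        U, V, H, W = subaperture.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * slope))
                dx = int(round((v - cv) * slope))
                # crude integer shift; a real implementation would
                # interpolate sub-pixel shifts and crop the wrap-around
                out += np.roll(subaperture[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)

    # e.g. sweep the synthetic focal plane through one capture:
    # lf = decode_lfp("IMG_0001.lfp")                # hypothetical decoder
    # renders = [refocus(lf, s) for s in np.linspace(-1.5, 1.5, 7)]
    ```

    Everything after that (white balance, noise reduction, TIFF export) could stay exactly as it is in any other RAW workflow.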
    http://www.imperialstudios.biz - Imperial Studios - Landscape, Travel and Fine Art Photography. Also happens to be my website, a work very much in progress... prints available here if anyone wants my work.
  • AmazingGrace0385 Registered Users Posts: 31 Big grins
    edited October 20, 2011
    What a cool new toy! :) If they can integrate it into an SLR, I'd be all over it!
  • PupWeb Registered Users Posts: 166 Major grins
    edited October 20, 2011
    This technology is probably going to be the norm in some form in the future. Is anybody else impressed by a constant f/2 aperture across the entire zoom range on that little lens?
  • rhommel Registered Users Posts: 306 Major grins
    edited October 20, 2011
    This is a nice toy, that's for sure.
  • Ed911 Registered Users Posts: 1,306 Major grins
    edited October 20, 2011
    Richard wrote: »
    Here's an intriguing quote from the Science Inside page:
    I wonder how that's going to work and whether it will preclude other, traditional post-processing. Anybody know?



    Most likely, the new light field camera captures depth of field by grabbing and grouping focal points/planes/images and storing them as a data cloud. From looking at it, it resembles a movie camera that will zoom from near to far while capturing images that will be stacked and stored in a database. This allows you to recall different points, or frames actually, within the cloud as different points of focus. The desired effect appears to give the user the feeling that they are zooming through reality, focusing as they see fit. Macro photographers have already been stacking and flattening photos for better depth of field; they just don't have a way to hold all of the photos in a data cloud so that you can selectively pick out focal planes while seemingly throwing everything else out of focus as if by magic.

    I see the field engine as using some idealized form of the morphing software that we've been seeing on TV and at the movies for a while. I also suspect that, like the new 240Hz TVs, it will create transitional frames between the actual frames it captures for a smoothing effect, if it's that advanced... and why not, that technology has been around for a couple of years now.

    If you use a shallow depth of field for each image (and that's probably required, otherwise there wouldn't be a zooming effect making you feel as if you are inside the image cloud), you could look at the first image, then go deeper and see the second, third, and so on, all the while watching the subjects in the nearest image appear to go out of focus as if by magic... which they would at a wide aperture, if you were looking at the middle image in the focus stack. In actuality, you are just looking at a single image in a group that was captured or generated at f/2 from 10mm to 200mm (the zoom range and aperture are my guess). I'm guessing that the morphing software would smooth the transition between the images and maybe add special effects as selected by the user.
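    Just to make that stacking idea concrete (this is only an illustration of the focus-stack model I'm describing, not a claim about how Lytro actually does it): if the camera really stored a stack of frames focused at different depths, "click to refocus" would just mean finding the frame that's sharpest around the clicked point. Something like:

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def pick_frame(stack, y, x, window=7):
        """stack: (N, H, W) focus stack, nearest focus first. Returns the
        index of the frame that is sharpest in a small window around the
        clicked pixel (y, x), using local Laplacian energy as sharpness."""
        r = window // 2
        best_i, best_score = 0, -1.0
        for i, frame in enumerate(stack):
            patch = frame[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            score = float(np.abs(laplace(patch.astype(np.float64))).mean())
            if score > best_score:
                best_i, best_score = i, score
        return best_i

    # Clicking a near subject would pick an early frame, a far subject a
    # later one; the viewer then just displays that frame.
    ```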


    This is just my guess from seeing the camera...now, I'll read the how-it-works paper and see how it really works.

    Cool...sure...useful...maybe...desirable...for some.

    As for traditional post-processing, why not? You should be able to grab a snapshot and put it in LR or CS.

    After thought...

    Seems to me that would make for huge files. I wonder what the limitations will be. I would bet that the current offering won't be using 25-megapixel imaging sensors; the files would be enormous. I'm interested in IQ, and I wonder how they are dealing with this.

    Just some food for thought...
    Remember, no one may want you to take pictures, but they all want to see them.
    Educate yourself like you'll live forever and live like you'll die tomorrow.

    Ed
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    Ed911 wrote: »
    Most likely, the new light field camera captures depth of field by grabbing and grouping focal points/planes/images and storing them as a data cloud. From looking at it, it resembles a movie camera that will zoom from near to far while capturing images that will be stacked and stored in a database. This allows you to recall different points, or frames actually, within the cloud as different points of focus. The desired effect appears to give the user the feeling that they are zooming through reality, focusing as they see fit. Macro photographers have already been stacking and flattening photos for better depth of field; they just don't have a way to hold all of the photos in a data cloud so that you can selectively pick out focal planes while seemingly throwing everything else out of focus as if by magic.

    I see the field engine as using some idealized form of the morphing software that we've been seeing on TV and at the movies for a while. I also suspect that, like the new 240Hz TVs, it will create transitional frames between the actual frames it captures for a smoothing effect, if it's that advanced... and why not, that technology has been around for a couple of years now.

    If you use a shallow depth of field for each image (and that's probably required, otherwise there wouldn't be a zooming effect making you feel as if you are inside the image cloud), you could look at the first image, then go deeper and see the second, third, and so on, all the while watching the subjects in the nearest image appear to go out of focus as if by magic... which they would at a wide aperture, if you were looking at the middle image in the focus stack. In actuality, you are just looking at a single image in a group that was captured or generated at f/2 from 10mm to 200mm (the zoom range and aperture are my guess). I'm guessing that the morphing software would smooth the transition between the images and maybe add special effects as selected by the user.


    This is just my guess from seeing the camera...now, I'll read the how-it-works paper and see how it really works.

    Cool...sure...useful...maybe...desirable...for some.

    As for traditional post-processing, why not? You should be able to grab a snapshot and put it in LR or CS.

    After thought...

    Seems to me that would make for huge files. I wonder what the limitations will be. I would bet that the current offering won't be using 25-megapixel imaging sensors; the files would be enormous. I'm interested in IQ, and I wonder how they are dealing with this.

    Just some food for thought...

    Excellent assessment, Ed! Very close, I think, on all points!

    What this gadget produces, I expect, is a set of low-res thumbnails (~2MP? It has to be very low because of the sensor's plenoptic lens specs and for speed of processing) which contain a sampling of sensor data from, say, 4 concentric sensor areas. This is the image data. What happens next is software image enhancement and interpolation, which is a big component of this gimmick, and freely admitted everywhere on the website, called 'doing with software what used to be done with hardware'.

    So in effect what you might have for each image is a small set of stacked low-res thumbnails with software interpolation filling in the gaps. The image will likely already have had the life post-processed out of it by the proprietary enhancing software, and if you tried to do any more to the little JPEGs they would likely dissolve into a soup of artifacts. On top of that, in order to do any processing yourself with an existing application, the files have to be converted from a proprietary format into JPEG, and I shudder to think what that might do to IQ! That's the philosophy behind the tech - do all the IQ with software; the hardware just gets the samples of photons.

    In order to get any usable image data at all, the conditions would likely have to be like they had to be at the beginning of photography - ideal light with everything dead still.

    Be aware that many of the example pics on the site seem not to have been taken with the actual "camera" being sold, and the animations are possibly Flash.

    The site openly admits many times over that what's being sold is a very handicapped experimental prototype, an attempt at proof of concept. So what you are paying for is really a lab mockup. They seem to be using it as a way to raise funds for further research.

    I think all the aims of this project are already easily achievable with other methods which produce far better results. The concepts of this project sit in a middle ground between optical photography and CGI, so you'd have to actually want that kind of result to get any use out of it.

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • DavidTO Registered Users, Retired Mod Posts: 19,160 Major grins
    edited October 20, 2011
    Wow, that's a lot of conjecture going on there! When you post, it's Flash and HTML5.

    A rather full explanation of it here:


    https://www.youtube.com/watch?v=WhBnzJUakpM
    Moderator Emeritus
    Dgrin FAQ | Me | Workshops
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    DavidTO wrote: »
    Wow, that's a lot of conjecture going on there! When you post, it's Flash and HTML5.

    very very very revealing!!!

    I lost count of the number of times they shot themselves in the foot, or had nothing to say at all!

    "megarays of light"! really, dr (who?)!!! rolleyes1.gif

    "You don't need all those 20MP of the 5D2, they have no real use, they're overkill, a waste of technology. You only need resolution that's acceptable for sharing online." Not a direct quote of the dr, but an accurate summary of this point.

    Neil

    PS 20MP of data off the sensor = 5MP + 5MP + 5MP + 5MP (sensor samples) = 2MP resolution in a JPEG converted from the light field picture format or "lfp" (yes!), according to the expert!

    PPS What is that bit of plastic at the front doing for IQ??!!
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • divamum Registered Users Posts: 9,021 Major grins
    edited October 20, 2011
    Playing with the sample images is alarmingly cool! Can't wait to hear about/see some real-world reports once it's out there and folks are using it.
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    DavidTO wrote: »
    When you post, it's Flash and HTML5.

    The site says the viewing engine is embedded in the image. If these images are intended for sharing online, as the guys in the video describe, then when they are posted how is this engine going to work for the viewer, in your terms?
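    The only mechanism I can imagine (pure guesswork on my part): the shared picture ships with a handful of pre-rendered refocused frames plus a per-pixel depth map, and the embedded viewer, Flash or HTML5, simply swaps frames according to the depth under the click. Roughly:

    ```python
    import numpy as np

    def frame_for_click(depth_map, frame_depths, y, x):
        """depth_map: hypothetical (H, W) per-pixel depth estimate stored
        with the picture; frame_depths: the depth each pre-rendered frame
        was focused at. Returns the index of the frame whose focus depth
        is closest to the depth under the click."""
        d = depth_map[y, x]
        return int(np.argmin(np.abs(np.asarray(frame_depths) - d)))
    ```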

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • ian408 Administrators Posts: 21,948 moderator
    edited October 20, 2011
    Neil, Ed, I don't think either of you could be farther from the truth on this one. Especially with statements like
    Neil wrote:
    In order to get any usable image data at all, the conditions would likely have to be like they had to be at the beginning of photography - ideal light with everything dead still.

    It captures light at a constant f/2. I think that's more than consistent with current technology.
    Ed wrote:
    The desired effect appears to give the user the feeling that they are zooming through reality, focusing as they see fit.

    The camera has a zoom feature built into it that works as a normal zoom would.

    The one thing I might agree on is whether this is for everyone. It's a cool camera and one that's certainly better than your camera phone, probably approaching the quality of a G12. Yeah, there are no controls on the camera, no flash, and no tripod mount (the latter will be an accessory). It's unlikely that, in its current form, it will do video either. But this is the first generation.

    Adjusting focus is done either through the application or (according to the inventor) on the camera itself.

    Will Lightroom or Photoshop be capable of dealing with the new LFP format (what Lytro is calling its file type)? Dunno. I bet you can output to a format that lets you do whatever massaging you do today in a tool you're comfortable using.
    Moderator Journeys/Sports/Big Picture :: Need some help with dgrin?
  • ian408 Administrators Posts: 21,948 moderator
    edited October 20, 2011
    I suppose if they'd come off as slick and with all the answers, you'd call them snake oil salesmen?

    The term "megarays" was used to describe the number of light paths that hit the sensor. What would you call it? And LFP refers to the file type. Again, it's necessary and descriptive of a file that is neither CR2, NEF, nor JPG.

    I think your summation of the points made in the video is way off base. I don't believe they came close to saying anything like that. In fact, I saw quite the opposite. I believe their target market is the casual shooter who doesn't care about settings because they just want to push the button and make a nice picture.


    NeilL wrote: »
    very very very revealing!!!

    I lost count of the number of times they shot themselves in the foot, or had nothing to say at all!

    "megarays of light"! really, dr (who?)!!! rolleyes1.gif

    "You don't need all those 20MP of the 5D2, they have no real use, they're overkill, a waste of technology. You only need resolution that's acceptable for sharing online." Not a direct quote of the dr, but an accurate summary of this point.

    Neil

    PS 20MP of data off the sensor = 5MP + 5MP + 5MP + 5MP (sensor samples) = 2MP resolution in a JPEG converted from the light field picture format or "lfp" (yes!), according to the expert!

    PPS What is that bit of plastic at the front doing for IQ??!!
    Moderator Journeys/Sports/Big Picture :: Need some help with dgrin?
  • martinjp2 Registered Users Posts: 29 Big grins
    edited October 20, 2011
    Very cool! It would be great if you could pick a primary focus point with its associated depth of field, then pick one or more other planes to be in focus (without associated depth-of-field adjustments), and then output to TIFF - say, to get both eyes of the baby in the example in focus. Sure, you could create multiple images in Photoshop to get this effect, but it would be nice if their software did it. Wish I had the money to buy one; maybe next year.
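    Assuming you could first export one refocused render per chosen plane, the compositing step itself is simple; here's a rough sketch of that idea only (not anything Lytro's software actually offers):

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def composite_planes(renders):
        """renders: one (H, W) image per chosen focal plane. At every
        pixel, keep the value from whichever render is locally sharpest,
        so each chosen plane ends up in focus in the final image."""
        stack = np.stack([r.astype(np.float64) for r in renders])
        sharpness = np.stack([np.abs(laplace(r)) for r in stack])
        best = np.argmax(sharpness, axis=0)            # (H, W) index map
        return np.take_along_axis(stack, best[None], axis=0)[0]
    ```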
  • dbd Registered Users Posts: 216 Major grins
    edited October 20, 2011
    What a cool new toy! :) If they can integrate it into an SLR, I'd be all over it!

    Be careful what you ask for. It isn't a new technology.

    http://www.raytrix.de/index.php/home.html

    For more comment:

    http://terragalleria.com/blog/2011/06/28/light-field-camera-from-lytro/

    Dale B. Dalrymple
    "Give me a lens long enough and a place to stand and I can image the earth."
    ...with apology to Archimedes
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    ian408 wrote: »
    I suppose if they'd come off as slick and with all the answers, you'd call them snake oil salesmen?

    The term "megarays" was used to describe the number of light paths that hit the sensor. What would you call it? And LFP refers to the file type. Again, it's necessary and descriptive of a file that is neither CR2, NEF, nor JPG.

    I think your summation of the points made in the video is way off base. I don't believe they came close to saying anything like that. In fact, I saw quite the opposite. I believe their target market is the casual shooter who doesn't care about settings because they just want to push the button and make a nice picture.

    Your enthusiasm for the product is touching, and I hope it is as good as you believe it to be! I really do! I am a gear junkie to the core! My personal assessment is that it is a very handicapped, very apologetic toy, with a novelty value that will wear off within 30 minutes of parting with the best part of $500. Sorry to say.

    The convention in file naming puts "format" as the last word abbreviated, as in NEF = Nikon Electronic Format. Following this convention, I would have expected the abbreviation of the Lytro format to be "lpf".

    There is no reality corresponding to "light rays"; light is an energy field. Individual photons can be captured, and are captured, with conventional sensors. Photons do not contain data about the relative position of the source to the sensor that is capturable with the technology in the Lytro lens. No single coordinate on the sensor defines any part of the energy field. What the Lytro lens appears to do is sample by the position of the lenses on the sensor, which is quite a different matter, and possible with a pinhole camera.

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • ian408 Administrators Posts: 21,948 moderator
    edited October 20, 2011
    Now you're just being smarmy.
    NeilL wrote: »
    Your enthusiasm for the product is touching, and I hope it is as good as you believe it to be! I really do! I am a gear junkie to the core! My personal assessment is that it is a very handicapped, very apologetic toy, with a novelty value that will wear off within 30 minutes of parting with the best part of $500. Sorry to say.

    The convention in file naming puts "format" as the last word abbreviated, as in NEF = Nikon Electronic Format. Following this convention, I would have expected the abbreviation of the Lytro format to be "lpf".

    There is no reality corresponding to "light rays"; light is an energy field.

    Neil
    Moderator Journeys/Sports/Big Picture :: Need some help with dgrin?
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    ian408 wrote: »
    Now you're just being smarmy.

    No, no, no, a thousand times!!

    I would not spend half an hour researching my posts to this thread for the cheap thrill of being smarmy! OMG, I am not that cheap, fellow!

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • Andy Registered Users Posts: 50,016 Major grins
    edited October 20, 2011
    NeilL wrote: »
    If these images are intended for sharing online, as the guys in the video describe, then when they are posted how is this engine going to work for the viewer, in your terms?

    Neil

    Dunno, but:

    [two screenshots attached]
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 20, 2011
    Andy wrote: »
    Dunno, but:

    [two screenshots attached]

    Hope you don't take my serious effort to tease out the facts from the sales pitch as an attempt to stop anyone who wants to buy the device from doing so. OMG, my entire life would be consumed in futility!

    In fact, I have just ordered 5 of them!*

    Neil

    *well, a slight exaggeration
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • David_S85 Administrators Posts: 13,249 moderator
    edited October 20, 2011
    I haven't ordered a one yet, but I am following this story and thread with interest. :lurk
    My Smugmug
    "You miss 100% of the shots you don't take" - Wayne Gretzky
  • dbd Registered Users Posts: 216 Major grins
    edited October 21, 2011
    NeilL wrote: »
    There is no reality corresponding to "light rays"; light is an energy field. Individual photons can be captured, and are captured, with conventional sensors. Photons do not contain data about the relative position of the source to the sensor that is capturable with the technology in the Lytro lens. No single coordinate on the sensor defines any part of the energy field. What the Lytro lens appears to do is sample by the position of the lenses on the sensor, which is quite a different matter, and possible with a pinhole camera.

    Neil

    Implementers of light field cameras describe the process differently, for example page 11 of:
    http://www.raytrix.de/tl_files/downloads/Raytrix_Intro_LightField_100124_web.pdf

    Dale B. Dalrymple
    "Give me a lens long enough and a place to stand and I can image the earth."
    ...with apology to Archimedes
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 21, 2011
    dbd wrote: »
    Implementers of light field cameras describe the process differently, for example page 11 of:
    http://www.raytrix.de/tl_files/downloads/Raytrix_Intro_LightField_100124_web.pdf

    Dale B. Dalrymple

    Thanks for this, Dale! A great contribution to the discussion!

    The basic structure is as I described, namely:

    * sampling of sensor data ("subsets of light rays"!) by microlens location

    * dof "reconstruction (from) virtual image points", in other words by software interpolation

    * "effective" resolution is the number of microlenses, which in the case of the Lytro is likely nowhere near even a million! On the site you linked the best number I saw is 1MP. According to the site the Stanford array is 96 microlenses

    The Lytro website, for all the slipping and sliding around questions asked by the public on it, is very clear - the Lytro uses a very simplified, mockup-in-principle version of the plenoptic concept, and produces images from very crude sensor data and manipulates them with software. My opinion is that it is really this software which is the main achievement of this product.

    The development of chips to carry tens of millions of light receptors in a miniature architecture, to deal with the optical and electronic limitations in the physics, and to produce images of high quality is still, 35 years after Bayer's patent, a work in progress. The tech of the plenoptic camera is imaginable and the algorithms are there to make it possible from a computing point of view - you can actually buy examples of this tech! However, while there will be applications for more developed, specialised forms of this tech, e.g. in medical imaging where it might be huge, I am doubtful about its ability ever to compete with still photography tech of the type we now use in cost, size and manoeuvrability, performance, speed, versatility and image quality. Applications like the cameras we use have certain non-negotiable "economies" and standards which operate to limit the potential of the tech that could be applied to them. It is not a free-for-all for all comers.

    I would be very happy to be wrong about the Lytro.

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • shadowblade Registered Users Posts: 23 Big grins
    edited October 21, 2011
    What I'd like to know is how many photosites are binned together to make a single pixel? 10? 20? 40?

    A full-frame sensor with the photosite density of the Canon G10's sensor would have almost 300 million photosites - and this isn't even close to the highest-density sensor. If these photosites were to be binned together to make, say, a 30MP image, it could be quite useable. Diffraction wouldn't be any more of an issue than for any other 30MP full-frame camera - there are still 30MP, just that, instead of having a single photosite behind each pixel, there are 10. Same goes for noise.
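    The arithmetic behind those numbers is easy to play with (the 300-million figure is the full-frame-at-G10-density estimate above; the binning factors are just the ones floated in the question):

    ```python
    # 300M photosites binned down by various factors
    photosites = 300e6
    for per_pixel in (10, 20, 40):
        print(f"{per_pixel:>2} photosites per pixel -> "
              f"{photosites / per_pixel / 1e6:.1f} MP output")
    # 10 -> 30.0 MP, 20 -> 15.0 MP, 40 -> 7.5 MP
    ```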

    Naturally, this is still experimental technology at the moment, and the Lytro is more a proof-of-concept exercise than anything else.
    http://www.imperialstudios.biz - Imperial Studios - Landscape, Travel and Fine Art Photography. Also happens to be my website, a work very much in progress... prints available here if anyone wants my work.
  • NeilL Registered Users Posts: 4,201 Major grins
    edited October 21, 2011
    What I'd like to know is how many photosites are binned together to make a single pixel? 10? 20? 40?

    A full-frame sensor with the photosite density of the Canon G10's sensor would have almost 300 million photosites - and this isn't even close to the highest-density sensor. If these photosites were to be binned together to make, say, a 30MP image, it could be quite useable. Diffraction wouldn't be any more of an issue than for any other 30MP full-frame camera - there are still 30MP, just that, instead of having a single photosite behind each pixel, there are 10. Same goes for noise.

    Naturally, this is still experimental technology at the moment, and the Lytro is more a proof-of-concept exercise than anything else.

    yes, the issue of redundancy - binning - is on my mind too

    there is no information that I have found on the optical characteristics of the microlenses in the Lytro, or their relationship with the electronics

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • ziggy53 Super Moderators Posts: 24,156 moderator
    edited January 13, 2012
    A brief additional glimpse about the Lytro Light Field camera and operation from DPReview:

    http://www.dpreview.com/articles/1942514918/ces-2012-lytro-photowalk
    ziggy53
    Moderator of the Cameras and Accessories forums