Lytro light field camera
They've got their order page up. It's a light field camera; there's an explanation of the science here.
Focus after the fact. Order now, ships in the spring of 2012. Mac only for now, Windows promised.
Mine's been ordered!
Or you could use hyperfocal focusing and selective blur (e.g. onOne FocalPoint) with all the advantages of RAW and a real lens and camera! :D
Neil
http://www.behance.net/brosepix
Portfolio • Workshops • Facebook • Twitter
www.tednghiem.com
I wonder how that's going to work and whether it will preclude other, traditional post-processing. Anybody know?
It shouldn't preclude other post-processing.
It seems like it's just another step in RAW processing - firstly, set your depth of field and plane (or planes) of focus, along with white balance, noise reduction, etc., then convert to TIFF, then edit as normal.
This thing might seem like a toy now, but, given time to develop, it may have potential - with bigger sensors, better lenses, better microlenses, higher ISO, etc., there's no reason it can't eventually supplant standard sensors.
After all, it's basically a RAW file with extra information - vector, as well as colour and intensity...
DavidBroadwell.com, My Smugmug Home
Most likely, the new light field camera captures depth of field by grabbing and grouping focal points/planes/images and storing them as a data cloud. From the look of it, it resembles a movie camera that zooms from near to far while capturing images that are stacked and stored in a database. This lets you recall different points, or frames actually, within the cloud as different points of focus. The desired effect appears to give the user the feeling that they are zooming through reality, focusing as they see fit. Macro photographers have already been stacking and flattening photos for better depth of field...they just don't have a way to hold all of the photos in a data cloud so that you can selectively pick out focal planes while seemingly throwing everything else out of focus as if by magic.
I see the field engine as using some idealized form of the morphing software that we've been seeing on TV and at the movies for a while. I also suspect that, like the new 240 Hz TVs, it will create transitional frames between the actual frames it captures for a smoothing effect, if it's that advanced...and why not, that technology has been around for a couple of years now.
If you use a shallow depth of field for each image (and that's probably required, otherwise there wouldn't be a zooming effect making you feel as if you are inside the image cloud), you could look at the first image, then go deeper and see the second, third, and so on, all the while watching the subjects in the nearest image appear to go out of focus as if by magic...which they would at a large aperture, if you were looking at the middle image in the focus stack. In actuality, you are just looking at a single image in a group that was captured or generated at f/2 from 10mm to 200mm (the zoom range and aperture are my guess). I'm guessing that the morphing software smooths the transition between the images and maybe adds special effects as selected by the user.
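The stacking guess above maps directly onto the focus-stacking technique macro shooters already use. As a minimal illustration (my own sketch in Python, using Laplacian variance as the sharpness measure; none of these names come from Lytro), picking the frame in which a chosen region is sharpest could look like this:

```python
import numpy as np
from scipy import ndimage

def sharpness_map(gray):
    """Per-pixel sharpness: squared response of the Laplacian."""
    return ndimage.laplace(gray.astype(np.float64)) ** 2

def best_frame_for_region(stack, y, x, radius=8):
    """Return the index of the frame in the focus stack that is
    sharpest in a small window around pixel (y, x)."""
    scores = []
    for frame in stack:  # stack: list of 2-D grayscale arrays
        window = sharpness_map(frame)[max(0, y - radius):y + radius,
                                      max(0, x - radius):x + radius]
        scores.append(window.mean())
    return int(np.argmax(scores))
```

A viewer built on something like this could re-display the chosen frame whenever the user clicks a region, which would give exactly the "focus as you see fit" effect described above.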
This is just my guess from seeing the camera...now, I'll read the how-it-works paper and see how it really works.
Cool...sure...useful...maybe...desirable...for some.
As far as traditional post processing, why not. You should be able to grab a snapshot and put it in LR or CS.
After thought...
Seems to me that that would make for huge files. I wonder what the limitations will be. I would bet that the current offering won't be using 25-megapixel imaging sensors...the files would be enormous. I'm interested in IQ...I wonder how they are dealing with this.
Just some food for thought...
Educate yourself like you'll live forever and live like you'll die tomorrow.
Ed
Excellent assessment, Ed! Very close, I think, on all points!
What this gadget produces, I expect, is a set of low-res thumbnails (perhaps ~2MP; it has to be very low because of the specs of the plenoptic lens array on the sensor and for speed of processing) which contain a sampling of sensor data from, say, 4 concentric sensor areas. This is the image data. What happens next is software image enhancement and interpolation, which is a big component of this gimmick, freely admitted everywhere on the website as 'doing with software what used to be done with hardware'.
So in effect, what you might have for each image is a small set of stacked low-res thumbnails with software interpolation filling in the gaps. The image will likely already have had the life post-processed out of it by the proprietary enhancing software, and if you tried to do any more to the little JPEGs they would likely dissolve into a soup of artifacts. On top of that, in order to do any processing yourself with an existing application, the files have to be converted from a proprietary format into JPEG, and I shudder to think what that might do to IQ! That's the philosophy behind the tech: do all the IQ with software; the hardware just gets the samples of photons.
In order to get any usable image data at all, the conditions would likely have to be like they had to be at the beginning of photography - ideal light with everything dead still.
Be aware that many of the example pics on the site seem not to have been taken with the actual "camera" being sold, and the animations are possibly Flash.
The site openly admits many times over that what's being sold is a very handicapped experimental prototype, an attempt at proof of concept. So what you are paying for is really a lab mockup. They seem to be using it as a way to raise funds for further research.
I think all the aims of this project are already easily achievable with other methods which produce far better results. The concepts of this project occupy a middle ground between optical photography and CGI, so to use it you will need to want that kind of result.
Neil
http://www.behance.net/brosepix
A rather full explanation of it here:
https://www.youtube.com/watch?v=WhBnzJUakpM
Dgrin FAQ | Me | Workshops
Very, very, very revealing!!!
I lost count of the number of times they shot themselves in the foot, or had nothing to say at all!
"Megarays of light"! Really, Dr (who?)!!!
"You don't need all those 20MP of the 5D2, they have no real use, they're overkill, a waste of technology. You only need resolution that's acceptable for sharing online." Not a direct quote of the dr, but an accurate summary of this point.
Neil
PS 20MP of data off the sensor = 5MP + 5MP + 5MP + 5MP (sensor samples) = 2MP resolution in a jpeg converted from lightfieldpictureformat or "lfp" (yes!), from the expert!
PPS what is that bit of plastic at the front doing for IQ??!! rofl
http://www.behance.net/brosepix
The site says the viewing engine is embedded in the image. If these images are intended for sharing online, as the guys in the video describe, then when they are posted how is this engine going to work for the viewer, in your terms?
Neil
http://www.behance.net/brosepix
It captures light at a constant f/2. I think that's more than consistent with current technology.
The camera has a zoom feature built into it that works as a normal zoom would.
The one thing I might agree on is whether this is for everyone. It's a cool camera and one that's certainly better than your camera phone, probably approaching the quality of a G12. Yeah, there's no control on the camera. No flash and no tripod mount (the latter will be an accessory). It's unlikely that, in its current form, it will do video either. But this is the first generation.
Adjusting focus is done either through the application or (according to the inventor) on the camera itself.
Will Lightroom or Photoshop be capable of dealing with the new LFP format (what Lytro is calling its file type)? Dunno. I bet you can output to a format that lets you do whatever massaging you do today in a tool you're comfortable using.
The term "megarays" was used to describe the number of light paths that hit the sensor. What would you call it? And LFP refers to the file type. Again, it's necessary and descriptive of a file that is neither CR2, NEF, nor JPG.
I think your summation of the points made in the video is way off base. I don't believe they came close to saying anything like that. In fact, I saw quite the opposite. I believe their target market is the casual shooter who doesn't care about settings because they just want to push the button and make a nice picture.
Be careful what you ask for. It isn't a new technology.
http://www.raytrix.de/index.php/home.html
For more comment:
http://terragalleria.com/blog/2011/06/28/light-field-camera-from-lytro/
Dale B. Dalrymple
...with apology to Archimedes
Your enthusiasm for the product is touching, and I hope it is as good as you believe it to be! I really do! I am a gear junkie to the core! My personal assessment is that it is a very handicapped, very apologetic toy, with a novelty value that will wear off within 30 minutes of parting with the best part of $500. Sorry to say.
The convention in file naming puts "format" last in the abbreviation, as in NEF = Nikon Electronic Format. Following this convention, I would have expected the abbreviation of the Lytro format to be lpf.
There is no reality corresponding to "light rays"; light is an energy field. Individual photons can be captured, and are captured, with conventional sensors. Photons do not carry data about the position of the source relative to the sensor that is capturable with the technology in the Lytro lens. No single coordinate on the sensor defines any part of the energy field. What the Lytro lens appears to do is sample the sensor by microlens position, quite a different matter, and possible with a pinhole camera.
Neil
http://www.behance.net/brosepix
No, no, no, a thousand times no!!
I would not spend a half hour researching my posts to this thread just for the cheap thrill of being smarmy! OMG, I am not that cheap, fellow!
Neil
http://www.behance.net/brosepix
Dunno, but:
and
Portfolio • Workshops • Facebook • Twitter
Hope you don't translate my serious effort to tease out the facts from the sales pitch as an attempt to ban the device from anyone who wants to buy it. OMG, my entire life would be consumed in futility!
In fact, I have just ordered 5 of them!*
Neil
*well, a slight exaggeration
http://www.behance.net/brosepix
"You miss 100% of the shots you don't take" - Wayne Gretzky
Implementers of light field cameras describe the process differently, for example page 11 of:
http://www.raytrix.de/tl_files/downloads/Raytrix_Intro_LightField_100124_web.pdf
Dale B. Dalrymple
...with apology to Archimedes
Thanks for this, Dale! A great contribution to the discussion!
The basic structure is as I describe, namely:
* sampling of sensor data ("subsets of light rays"!) by microlens location
* DOF "reconstruction (from) virtual image points", in other words software interpolation (a toy sketch of this follows below)
* "effective" resolution is the number of microlenses, which in the case of the Lytro is likely nowhere near even a million! On the site you linked, the best number I saw is 1MP. According to the site, the Stanford array is 96 microlenses.
The Lytro website, for all its slipping and sliding around questions asked by the public, is very clear: the Lytro uses a very simplified, in-principle mockup of the plenoptic concept, and produces and manipulates images with software from very crude sensor data. My opinion is that it is really this software which is the main achievement of this product.
The development of chips carrying scores of millions of light receptors in a miniature architecture, dealing with the optical and electronic limitations of the physics, and producing images of high quality is, 35 years after Bayer's patent, still a work in progress. The tech of the plenoptic camera is imaginable, and the algorithms are there to make it possible from a computing point of view; you can actually buy examples of this tech! However, while there will be applications for more developed, specialised forms of it, e.g. in medical imaging where it might be huge, I am doubtful about its ability ever to compete with the still photography tech we now use, in cost, size and manoeuvrability, performance, speed, versatility and image quality. Applications like the cameras we use have certain non-negotiable "economies" and standards which limit the potential of the tech that can be applied to them. It is not a free-for-all for all comers.
I would be very happy to be wrong about the Lytro.
Neil
http://www.behance.net/brosepix
A full-frame sensor with the photosite density of the Canon G10's sensor would have almost 300 million photosites - and this isn't even close to the highest-density sensor. If these photosites were binned together to make, say, a 30MP image, it could be quite usable. Diffraction wouldn't be any more of an issue than for any other 30MP full-frame camera - there are still 30MP; it's just that, instead of a single photosite behind each pixel, there are 10. Same goes for noise.
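As a back-of-envelope sketch of that binning arithmetic (my own illustration, not anything from a real camera pipeline): averaging n-by-n blocks of photosites trades resolution for noise:

```python
import numpy as np

def bin_photosites(img, n):
    """Average n x n blocks of photosites into single output pixels.
    Assumes a 2-D array whose dimensions are multiples of n."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# Toy example: a noisy 1800 x 1650 "sensor" binned 3 x 3. Each output
# pixel averages 9 samples, so the noise std drops by a factor of 3.
sensor = np.random.default_rng(0).normal(100.0, 10.0, (1800, 1650))
image = bin_photosites(sensor, 3)
print(image.shape)  # (600, 550)
```

(10 photosites per pixel doesn't tile a square grid, so 3 x 3 = 9 is the nearest neat binning for the 300MP-to-30MP figures above.)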
Naturally, this is still experimental technology at the moment, and the Lytro is more a proof-of-concept exercise than anything else.
Yes, the issue of redundancy (binning) is on my mind too.
There is no information that I have found on the optical characteristics of the microlenses in the Lytro, or their relationship with the electronics.
Neil
http://www.behance.net/brosepix
http://www.dpreview.com/articles/1942514918/ces-2012-lytro-photowalk
Moderator of the Cameras and Accessories forums