Light Field camera - pretty amazing

Comments

  • craig_d Registered Users Posts: 911 Major grins
    edited June 23, 2011
    Richard wrote: »
    A few things to keep in mind. The thesis was submitted five years ago and the project involved building a prototype with what were probably very limited funds. It is reasonable to assume that Ng has made progress in refining and optimizing his algorithms and possibly hardware as well. For legal and commercial reasons, he's not going to be discussing that much for now. There's a big gap between having big ideas and creating a successful company, but it has been known to happen at Stanford from time to time. I think we need to just sit tight and see how this develops. From what we think we know, it's probably safe to assume that pros aren't going to be interested in the first products, but that doesn't mean that we'll never be able to take advantage of the technology. Wasn't digital photography too crude to be taken seriously at first?

    Richard, I don't think that's really the issue. The real problem is that no matter how much the technology improves (and there are physical limits involved that have nothing to do with spending more money or making better algorithms), it's a high cost for a minor benefit. As photographers we learn to pre-visualize our shots. We should know what we want before we release the shutter. Therefore we should already have the aperture and focus set properly. If the ability to fix focus afterward would really make a huge difference to us, then we should work on our shooting technique rather than asking technology to make up for our ineptitude.

    I shoot manual focus probably 90+% of the time. Sure, every once in a while I get an OOF shot. Being able to fix that in PP might be nice. But am I willing to trade off 90% of my resolution to get it? Of course not. How much would the technology have to improve to make that trade-off worthwhile? Remember that with Lytro's technology, if you reduce the number of input pixels used to generate one output pixel, you reduce your sensitivity to light angles and therefore you reduce focus accuracy, that is, you reduce the sharpness of the final image. So the less resolution you give up, the worse your final image quality will be. You can't fix this with code any more than you can break the inverse relationship between DOF and aperture.
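
    To put rough numbers on that trade-off, here's a toy sketch in Python (hypothetical figures, not Lytro's actual design):

        # Toy model of the plenoptic trade-off: each microlens covers an
        # n x n patch of photosites, so spatial (output) resolution drops
        # by n^2 while the ray directions sampled per point rise by n^2.
        def plenoptic_tradeoff(sensor_mp, n):
            """Output megapixels and ray directions for an n x n patch."""
            return sensor_mp / (n * n), n * n

        for n in (2, 4, 10):
            out_mp, directions = plenoptic_tradeoff(sensor_mp=16, n=n)
            print(f"{n}x{n} patch: {out_mp:.2f} MP out, {directions} directions")

    A 10x10 patch is the roughly 100:1 loss I'm describing; shrink the patch to keep more pixels and you sample fewer angles, so refocusing, and with it final sharpness, gets coarser.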

    Someone may someday come up with a radically different way to accomplish Lytro's goals that is not subject to the same limitations, but that solution will not be based on the ideas in Ng's Stanford thesis.
    http://craigd.smugmug.com

    Got bored with digital and went back to film.
  • Richard Administrators, Vanilla Admin Posts: 19,928 moderator
    edited June 23, 2011
    Craig,

    Mostly, I agree with what you're saying, though I think you may be short-sighted about how many pixels affordable sensors could have 10 years from now. I usually get the focus I want and know how to control depth of field reasonably well. I don't always get it right, of course, and I can think of one case in which I would have died for a second chance in post. I am actually a little more interested in the possibility of reducing the lens diffraction problem, which seems to be intractable using current methods. But there are many more questions than answers here: what are the tradeoffs? Will noise be less or more of a problem? Will it really give more dynamic range? Less? Will we need new lenses? And then there's the whole ecosystem to consider: how will we process photos? It seems like it would necessarily be incompatible with today's raw converters. Will Adobe license it and build it into ACR/LR somehow? What will come of the inevitable patent war, and who will be left standing? Perhaps it will emerge as a niche product, affordable only to the best-heeled ad agencies and Vogue cover pros. Who knows? I certainly don't. But I very much like the idea of pursuing radically new approaches, as opposed to the incrementalism that is so common among the major vendors. Are you really looking forward to the 70D?
  • craig_d Registered Users Posts: 911 Major grins
    edited June 23, 2011
    Richard, you're right that it implies a very different system. The lightfield data is obviously very different from the contents of a conventional RAW file. My guess is that in PP you would choose focus distance and DOF first, then process the resulting pre-image in a manner similar to the RAW processing that we're familiar with. It doesn't sound like current lenses would be compatible.

    As for dynamic range, there are some interesting possibilities. On the one hand, smaller sensor sites result in better angle information (I think), but at the same time, as we already know, smaller sites mean worse noise at the pixel level. This might be compensated for naturally, since the process of integrating information from multiple sites to produce a single output pixel may have a binning effect that averages out the noise. We'll have to wait and see.
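
    The binning intuition is easy to simulate. A toy model in Python, assuming independent Gaussian read noise (real sensor noise is messier):

        import random
        import statistics

        # Average N noisy photosites into one output pixel; noise should
        # drop by roughly sqrt(N). All numbers here are hypothetical.
        SIGNAL, SIGMA, N = 100.0, 10.0, 100   # e.g. a 10x10 patch per pixel
        binned = [statistics.mean(random.gauss(SIGNAL, SIGMA) for _ in range(N))
                  for _ in range(1000)]
        print(f"per-site noise: {SIGMA}, binned noise: {statistics.stdev(binned):.2f}")
        # Expect roughly SIGMA / sqrt(N) = 1.0: integrating many small sites
        # into one output pixel averages much of the per-site noise away.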

    I'm all in favor of radical innovation, but in this case I see a lot of hype being driven by a startup company that wants to impress their investors and the press by claiming to have something revolutionary that will relegate the established camera companies to the dustbin of history. Naturally they aren't going to point out the weaknesses of their own technology. I also wonder how much this technology is really needed. As I said above, being able to fix a focusing error after the fact is nice, but not at the cost of 90% of my resolution. Even if at some point in the future I will be able to buy a camera with a 100 MP full-frame sensor, would I really want to reduce it to 1 MP, or even 10, just to get a refocusing capability that I would need only occasionally? I doubt it. Notice that the Foveon sensor hasn't taken over the world either. It's a cool idea, but most people just don't seem to see a need for it.

    I'm not looking forward to the 70D, but then again I'm not much interested in crop-frame sensors, and most of the time I'd rather shoot film. I'm curious about the 5D Mark III, 1Ds Mark IV, and Nikon D700x/D800/whatever that many of us expect to be announced sometime within the next year or so. I may not actually buy any of them, but I'll be interested to see what the specs and reviews look like.
    http://craigd.smugmug.com

    Got bored with digital and went back to film.
  • Richard Administrators, Vanilla Admin Posts: 19,928 moderator
    edited June 24, 2011
    craig_d wrote: »
    I'm all in favor of radical innovation, but in this case I see a lot of hype being driven by a startup company that wants to impress their investors and the press by claiming to have something revolutionary that will relegate the established camera companies to the dustbin of history. Naturally they aren't going to point out the weaknesses of their own technology. I also wonder how much this technology is really needed. As I said above, being able to fix a focusing error after the fact is nice, but not at the cost of 90% of my resolution. Even if at some point in the future I will be able to buy a camera with a 100 MP full-frame sensor, would I really want to reduce it to 1 MP, or even 10, just to get a refocusing capability that I would need only occasionally? I doubt it. Notice that the Foveon sensor hasn't taken over the world either. It's a cool idea, but most people just don't seem to see a need for it.

    Gee, hype from a startup--I'm shocked, shocked! Seriously, we don't know what their real business plan is. They may have little interest in actually developing pro-quality equipment themselves but rather are looking to get bought out by one of the major players. Or license the technology to several. In the meantime, I would think that there's probably a market for cool "3D" crappy pics among the millions of users of crappy cell phone cameras. All this is sheer speculation, of course.

    I'm no expert in optics--and maybe I have this completely wrong--but I believe that real spatial resolution is now limited by diffraction, not megapixels. For best results we usually shoot at the lens's sweet spot, not at f/22. Ray Maxwell made an interesting post on Luminous Landscape about this a couple of years ago. So the mythical 100 Mpx sensor in a 35mm form factor isn't going to give you much more usable resolution than, say, a 5DII or D700 using the current approach. But light field claims to have a way around this problem, and that's the part that I find most interesting. As you say, focus and DOF tweaking after the fact doesn't seem worth it, but if it could also improve overall IQ significantly, that's a different story. We'll see.
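
    For anyone who wants to sanity-check that, the standard Airy-disk arithmetic goes like this (textbook formula plus one common sampling convention; not taken from Maxwell's post):

        # Back-of-envelope diffraction limit for a 36x24mm sensor.
        # Airy disk diameter ~= 2.44 * wavelength * f-number; assume about
        # 2 pixels per disk (a Nyquist-style convention -- other criteria
        # shift the numbers somewhat).
        WAVELENGTH_MM = 550e-6            # green light, 550 nm in mm
        SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

        def diffraction_limited_mp(f_number):
            spot = 2.44 * WAVELENGTH_MM * f_number  # Airy disk diameter, mm
            pitch = spot / 2.0                      # two pixels per disk
            return (SENSOR_W_MM / pitch) * (SENSOR_H_MM / pitch) / 1e6

        for f in (4, 8, 16, 22):
            print(f"f/{f}: ~{diffraction_limited_mp(f):.0f} MP before diffraction dominates")
        # Roughly 120 MP at f/4 but only ~4 MP at f/22: past the sweet
        # spot, extra megapixels stop buying real resolution.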
  • ziggy53 Super Moderators Posts: 23,815 moderator
    edited June 24, 2011
    Threads merged.

    Wow, this technology is garnering some strong interest. I hope people aren't disappointed.
    ziggy53
    Moderator of the Cameras and Accessories forums
  • puzzledpaul Registered Users Posts: 1,621 Major grins
    edited June 24, 2011
    For any 'Blade Runner' fans - I feel a slight sense of deja vu sneaking up on me, re the 'photo analysis' scene :)

    pp
  • HurtNoMore Registered Users Posts: 7 Beginner grinner
    edited June 24, 2011
    There is no free lunch.

    The amount of storage/processing power/sensor size needed for anything resembling pro quality prints would be absurd given current tech.
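
    A back-of-envelope on the storage side (sensor sizes and bit depth are hypothetical):

        # Raw light-field capture keeps every photosite's reading until
        # focus/aperture are chosen in post. Numbers are made up.
        def capture_megabytes(sensor_mp, bits_per_site=12):
            return sensor_mp * 1e6 * bits_per_site / 8 / 1e6

        print(capture_megabytes(16))    # ~24 MB for a 16 MP plenoptic sensor
        print(capture_megabytes(100))   # ~150 MB per frame at a hypothetical 100 MP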

    At the end of the day, these cameras will always sacrifice quality for degrees of freedom in post-production.

    Until processing power and noise become non-issues, this tech will remain a gimmick.
  • dlacouture Registered Users Posts: 40 Big grins
    edited June 25, 2011
    From what I read some years ago, you'll only be able to start from your lens's actual physical aperture and then "close" it.
    But, from what I remember, the focus is really done in post-processing, not by the lens...
    In fact, the sensor records, for each point on it, the intensity and direction of the incident rays arriving at that point.
    So you're able to reconstruct the whole set of rays coming from the lens, and that gives you the ability to mathematically reconstruct any focus and aperture you want.
    Say you want greater DoF? Then don't take into account the peripheral photosites under each microlens, the ones recording rays coming from the more oblique angles. You'll be simulating a smaller aperture that way.
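
    In code, that "ignore the peripheral photosites" idea might look something like this toy sketch (made-up dimensions, nothing to do with any real light-field file format):

        # Toy "synthetic aperture": a light field stores, per microlens,
        # samples of rays arriving from a grid of angles (u, v). Averaging
        # only the central angular samples simulates stopping down;
        # averaging all of them simulates the full shooting aperture.
        import numpy as np

        lf = np.random.rand(200, 300, 10, 10)  # spatial 200x300, angular 10x10

        def synthetic_aperture(lf, keep):
            """Average the central keep x keep angular samples per microlens."""
            nu, nv = lf.shape[2], lf.shape[3]
            u0, v0 = (nu - keep) // 2, (nv - keep) // 2
            return lf[:, :, u0:u0 + keep, v0:v0 + keep].mean(axis=(2, 3))

        wide = synthetic_aperture(lf, keep=10)   # full aperture, shallow DoF
        narrow = synthetic_aperture(lf, keep=2)  # near-axial rays only, deeper DoF
        print(wide.shape, narrow.shape)          # both (200, 300): one pixel per microlens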

    Another cool effect is that you will be able to selectively "focus mask" your image in PP... Shoot a portrait at f/1.4, and mask the person with an f/5.6 DoF, without any fake-looking blur filter...
    Also, there would be no need for those costly tilt lenses anymore... Snap, then adjust your focus plane in PP...

    Or play with dual focus planes...
  • craig_d Registered Users Posts: 911 Major grins
    edited June 25, 2011
    dlacouture wrote: »
    From what I read some years ago, you'll only be able to start from your lens's actual physical aperture and then "close" it.

    Right so far. You can't simulate a wider aperture than you actually shot with.
    But, from what I remember, the focus is really done in post-processing, not by the lens...

    Not entirely true, for reasons we will cover in a moment...
    In fact, the sensor records, for each point on it, the intensity and direction of the incident rays arriving at that point.
    So you're able to reconstruct the whole set of rays coming from the lens, and that gives you the ability to mathematically reconstruct any focus and aperture you want.
    Say you want greater DoF? Then don't take into account the peripheral photosites under each microlens, the ones recording rays coming from the more oblique angles. You'll be simulating a smaller aperture that way.

    I believe that is the theory, but in practice there are limitations. For one thing, if you simulate a smaller aperture by ignoring oblique rays, then just like a real aperture, you are discarding light and therefore you need a longer exposure. Except you actually shot with a wider aperture and your exposure was calculated for that... so you end up with a badly underexposed image that has to be boosted in PP, and just as with ordinary underexposed images, the noise gets amplified too. And we're potentially talking about a LOT of underexposure. If you shoot at f/4 and then generate an f/22 image, you're throwing away five stops of light. Have you ever tried underexposing a shot by five stops and then boosting it in PP?
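
    The light-loss arithmetic is easy to check; this is plain f-stop math, nothing Lytro-specific:

        from math import log2

        # Each stop halves the light, and f-numbers step by sqrt(2) per
        # stop, so stops lost = 2 * log2(f_simulated / f_shot).
        def stops_lost(f_shot, f_simulated):
            return 2 * log2(f_simulated / f_shot)

        print(f"{stops_lost(4, 22):.1f} stops")             # ~4.9: the five stops above
        print(f"{2 ** stops_lost(4, 22):.0f}x less light")  # ~30x fewer photons kept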

    Also, in the real world nothing is ever measured perfectly, but only within some reasonable margin of error. This means, among other things, that the angles of light measured by the Lytro camera will not be perfect. This will place a practical limit on how far you can artificially stop down and still produce a sharp image. One might think of this as being sort of analogous to optical diffraction -- the more DOF you want, the softer your image gets, at least beyond a certain point. Other people who understand the technology better than I do have said that you can't really stop down beyond about f/28. And because you can't really stop down arbitrarily far, I think you still have to focus the lens on something reasonably close to what you want. I don't think shooting is completely focus-free.
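
    To see why measurement error acts like a soft ceiling, here's a crude geometric model (every number invented purely for intuition, not anyone's actual error budget):

        from math import radians, tan

        # An error in a ray's measured angle becomes a positional error
        # when the ray is re-projected to a new focal plane:
        # blur ~= 2 * depth_shift * tan(angle_error).
        def refocus_blur_um(depth_shift_mm, angle_error_deg):
            return 2 * depth_shift_mm * tan(radians(angle_error_deg)) * 1000

        for err_deg in (0.05, 0.1, 0.5):   # hypothetical angular errors
            print(f"{err_deg} deg error, 5 mm replane shift: "
                  f"~{refocus_blur_um(5.0, err_deg):.0f} um of blur")
        # Once the blur approaches the microlens pitch, stopping down
        # further in software stops adding sharpness -- loosely analogous
        # to diffraction.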
    Another cool effect is that you will be able to selectively "focus mask" your image in PP... Shoot a portrait at f/1.4, and mask the person with an f/5.6 DoF, without any fake-looking blur filter...
    Also, there would be no need for those costly tilt lenses anymore... Snap, then adjust your focus plane in PP...

    Or play with dual focus planes...

    Yes, these are intriguing possibilities. But since the loss of resolution seems to be on the order of 1:100 (measured in MP), I think I'd rather pay for a tilt-shift lens.

    Ultimately, there is no free lunch. Whatever clever capabilities this technology offers will come at some cost compared to conventional cameras. Once there is an actual product on the market, we can each decide for ourselves if the trade-off is worth it.
    http://craigd.smugmug.com

    Got bored with digital and went back to film.
  • dlacouture Registered Users Posts: 40 Big grins
    edited June 25, 2011
    Sure, practical cameras won't be on the market for years yet...
    But that's what we said about CCD sensors not so long ago... Then BSI sensors... Then usable 51200 ISO...
    Pretty soon, the 100MP sensors needed by these Lytro cameras will be churning out of factories...

    By 2020, we could see some interesting camera models...
  • NeilL Registered Users Posts: 4,201 Major grins
    edited June 26, 2011
    The whole enterprise is irredeemable hocus pocus!

    Light data cannot be extracted back beyond the lens. The light before entering the lens is fundamentally different from the light data passed on by the optics and electronics. What you get from this latter data is only indirect (inferred) information about the light source, but direct information about the optics and electronics. The original light, having once been "interpreted" by the optics and electronics, is now being reinterpreted by these guys with more optics and more electronics. What you get in the end is not a product of analysis but of synthesis. You might as well just produce the data you want completely synthetically with a graphics program. You would get fundamentally the same type of data, and a much better result, not to mention in a much easier and cheaper way. To get where these sadly, wishfully misconceiving folk want to get is best done with the marriage of "real" and artificial data, the latter not being more of the mangled "real" data, but rather newly and completely digitally created data.

    There is the future of this idea.

    Their "invention" was already invented way back in Aristotle's time. You couldn't get a patent for it
    now. Here it is - on the wall here is a wall-sized photograph of a scene with great depth which has extremely
    high resolution and colour fidelity and is sharply focused throughout... and it cost next to nothing to produce:

    [image: camera obscura projection of the Brooklyn Bridge onto a bedroom wall]

    (c) Abelardo Morell

    :D

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • david-low Registered Users Posts: 752 Major grins
    edited June 26, 2011
    As Richard said, there is more speculation than answers. Here are my thoughts:

    1) I could use their power-zoom P&S (if they were to produce one) for birding. I think there are plenty of snap-and-shoot photographers who shun huge DSLRs and would like to have a go at it.

    2) With the possibility of 3D imaging, there is a huge audience out there (young and old) to be enticed.

    3) It has been speculated that the final image file could be small, so serious amateurs and professionals won't give up the resolution. That is correct. But my guess is that P&S cameras outsell DSLRs by a wide margin. I'm sure the statistics are out there; I'm too lazy to check, but I don't think I'm far wrong. These happy-go-lucky snapshooters don't give a damn about resolution, and they are the ones who drive the market. As long as it looks decent on their laptop screens, that is good enough.

    Just looking from my angle.
  • NeilL Registered Users Posts: 4,201 Major grins
    edited June 27, 2011
    NeilL wrote: »
    The whole enterprise is irredeemable hocus pocus!

    Light data cannot be extracted back beyond the lens. The light before entering the lens is fundamentally different from the light data passed on by the optics and electronics. What you get from this latter data is only indirect (inferred) information about the light source, but direct information about the optics and electronics. The original light, having once been "interpreted" by the optics and electronics, is now being reinterpreted by these guys with more optics and more electronics. What you get in the end is not a product of analysis but of synthesis. You might as well just produce the data you want completely synthetically with a graphics program. You would get fundamentally the same type of data, and a much better result, not to mention in a much easier and cheaper way. To get where these sadly, wishfully misconceiving folk want to get is best done with the marriage of "real" and artificial data, the latter not being more of the mangled "real" data, but rather newly and completely digitally created data.

    There is the future of this idea.

    Their "invention" was already invented way back in Aristotle's time. You couldn't get a patent for it
    now. Here it is - on the wall here is a wall-sized photograph of a scene with great depth which has extremely
    high resolution and colour fidelity and is sharply focused throughout... and it cost next to nothing to produce:

    [image: camera obscura projection of the Brooklyn Bridge onto a bedroom wall]

    (c) Abelardo Morell

    :D

    Neil

    The image above provides focus information for every place in it for the sensor, in this case a dummy sensor, the wall. For an intelligent sensor, distance and direction information could be extracted from it, and with the right algorithms (simulating optics) the sensor could serve up selective DOF. It seems to me that since DOF is information about the optics and not the light, it cannot be extracted from the original light before an image is created; it must, and can only, be manufactured from focus information for every place in an image produced without optics, i.e. an image in focus everywhere, like this one.

    Visible light is a slice of the electromagnetic energy spectrum; it is not a material, mechanical thing, a chunk of something. It seems to me that for what those guys are imagining, light would have to be transformed into as much of a chunk as possible: one frequency, one polarity. What Ren Ng calls a "light ray". So as well as being underexposed and grainy, the product from these guys' device would be blue, maybe, and flat.

    When white light enters an optics-electronics system the entire spectrum of light energy has to be dealt with, and that involves a multitude of mutually exclusive information. Since the light field camera does not in fact exist, where do the photographs in Ren Ng's thesis come from? Not, I think, from real light but from digital bits in a computer graphics program, on which his algorithms have been run.

    When it comes to DOF blur, there is the issue of synthesising it, since blur and its characteristics are due to the specific optics in the system, and that information is not in the light itself. Ren Ng seems to show how an out-of-focus area of an image can be given back definition by resampling the data from a microlens, but if this is to be done the data from that microlens must be plenoptic for that part of the field. If all microlenses are thus plenoptic for their individual parts of the total field, his "world focal plane", where does the blur come from?

    The aha! moment of Ren Ng's thesis is this:

    The main lesson that I have learned through my research is that significant parts of the physical process of making photographs can be executed faithfully in software. In particular, the problems associated with optical focus are not fundamental characteristics of photography. The solution advanced in this dissertation is to record the light field flowing into conventional photographs, and to use the computer to control the final convergence of rays in our images. This new kind of photography means unprecedented capabilities after exposure: refocusing, choosing a new depth of field, and correcting lens aberrations.


    So, this idea is not about taking photographs, but about collecting data from which a graphics program can simulate photographs. You would go out with this fantasy machine once in your career, point it at everyone and everything, and then go back and spend the rest of your life in front of the computer manipulating the data to produce your life's work. :D

    An image like the one above, produced without optics, has far more information integrity and is far and away cleaner, giving the electronics a chance to operate on it as native light. At the moment I really don't see how Ren Ng's virtually impossible technology is an advantage.

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • NeilL Registered Users Posts: 4,201 Major grins
    edited June 27, 2011
    Am I wrong? I hope so, 'cause the science fictionist in me really loves Ren Ng. What a trip his thesis is!! :D

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • NeilL Registered Users Posts: 4,201 Major grins
    edited June 27, 2011
    In case some people might not realise, the image of the Brooklyn Bridge on the bedroom wall is produced by a hole - it's a pinhole image.

    It even turns into a splendid light show at night! Ren Ng, eat your heart out! Peace!

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • ziggy53 Super Moderators Posts: 23,815 moderator
    edited June 27, 2011
    NeilL wrote: »
    ... So, this idea is not about taking photographs, but about collecting data from which a graphics program can simulate photographs. You would go out with this fantasy machine once in your career, point it at everyone and everything, and then go back and spend the rest of your life in front of the computer manipulating the data to produce your life's work. :D

    An image like the one above, produced without optics, has far more information integrity and is far and away cleaner, giving the electronics a chance to operate on it as native light. At the moment I really don't see how Ren Ng's virtually impossible technology is an advantage.

    Neil

    NeilL wrote: »
    Am I wrong? I hope so, 'cause the science fictionist in me really loves Ren Ng. What a trip his thesis is!! :D

    Neil

    To reply to your second post first: yes, you are mostly wrong.

    Many seem to think that digital photography in its current form is the only way to produce an image from digitally acquired image data. I assure you that the methodology used to create the final image should not be a consideration; what matters is only whether or not the final image is relatively faithful to the original scene as our eyes and mind perceived it. The final, viewable image should stand on its own.

    For instance, some tried to argue that digital photography itself was only a "simulation" of an image and scene. True, you cannot "see" the latent image captured by a digital camera until and unless it is first "interpreted" by software. A digitally acquired image has little technological similarity to analog/film photography, and everything captured at the image plane comes from a technologically dissimilar process.

    Similarly, a film hologram has no technological similarity to traditional photography, and yet both processes can produce images. Holography uses scattered light (as opposed to the focused light in a traditional camera), records interference and diffraction patterns on the film, and (generally) requires collimated light for viewing; I submit that none of this matters if you wish to call the resulting product a type of "photograph", because a hologram really is a type of photograph and holography really is a branch of photography. The reason I can say this is that the resulting image is a fairly faithful viewable representation of the original scene.

    A digital Bayer imager records neither complete color nor complete luminance information at each photosite. Those attributes have to be interpreted and statistically generated by the demosaicing algorithm. In effect, everything you see from a Bayer-based digital camera is "synthesized" or "generated", yet the process is now commonly accepted as a type of photography and its output is generally accepted as photographs.
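
    For the curious, a minimal sketch of that interpolation step, a toy bilinear fill of the green channel only (real converters are far more sophisticated, and edge handling is ignored here):

        # Toy demosaic on a Bayer-style checkerboard: half the photosites
        # actually measured green; the rest are synthesized from their
        # four green neighbors.
        import numpy as np

        raw = np.random.rand(6, 6)  # toy sensor readings, one per photosite
        is_green = (np.indices(raw.shape).sum(axis=0) % 2) == 0  # checkerboard

        green = np.where(is_green, raw, 0.0)
        padded = np.pad(green, 1)
        neighbor_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        full_green = np.where(is_green, raw, neighbor_avg)  # measured + synthesized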

    Infrared (IR) imagers and film can record scenes which we cannot even see with our limited human eyesight. Does that mean we need to exclude IR products from the realm of photography and photographs? (Indeed, a person tried to tell me exactly that last year when they saw some of my IR work; they were confused about how I could generate viewable images from wavelengths they could not "see" otherwise. They told me the resulting prints and computer images were not "photographs", simply because they were ignorant of the process.)

    I submit that if the process, whatever it might be, faithfully records a scene to a recognizable degree, and if it does so without picture elements placed by a human (or other carbon-based intelligent organism), i.e. "painting", then that process, no matter what you call it, is necessarily a form of "photography" in its basic definition:
    Merriam-Webster (current online) definition:
    "The art or process of producing images by the action of radiant energy and especially light on a sensitive surface (as film or a CCD chip)"

    Webster 1913 definition:
    "The science which relates to the action of light on sensitive bodies in the production of pictures, the fixation of images, and the like"

    Since the "Light Field" camera and process seems to meet both of the above accepted definitions, I submit that we should have no issues accepting this new process as a form of photography and we should have issues accepting the resulting viewable images as photographs.
    ziggy53
    Moderator of the Cameras and Accessories forums
  • Richard Administrators, Vanilla Admin Posts: 19,928 moderator
    edited June 27, 2011
    NeilL wrote: »
    Am I wrong? I hope so, 'cause the science fictionist in me really loves Ren Ng. What a trip his thesis is!! :D

    Neil
    Ziggy said it all better than I ever could--no surprise there. The main thing to remember is that the photos we produce with today's digital cameras are the result of processing data collected from tiny photocells with tiny microlenses/filters in front of them. Light field changes the microlenses and the processing algorithms. The photons really don't give a damn. What the images look like remains to be seen.
  • NeilL Registered Users Posts: 4,201 Major grins
    edited June 27, 2011
    Ziggy & Richard

    On the general level of course I agree! And bring on the future!

    However, when you bite into the nitty-gritty of Ren Ng's thesis you might find, I think, as I have done, that his hardware technology achieves no more in providing data for a sensor than a pinhole camera, and is unable to handle actual light; e.g. on each of those plenoptic microlenses there will be all kinds of aberrations which, along with various compromises, will make the data unusable by the sensor for producing a competitive image. Then there is the issue of him applying his algorithms not to real light data, but to bits in a graphics program, to illustrate his process. Then there is the contradiction about where the blur in his images comes from, since there is apparently no blur information coming from the plenoptic microlenses. He says that the adjustments will be produced synthetically. Well, we already have software to create DOF blur. By contrast, aren't holograms the result of analysis, that is, of real data rather than manufactured data?

    His images remain to be seen. What they look like is already shudderingly predictable.

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • david-low Registered Users Posts: 752 Major grins
    edited June 27, 2011
    Just curious, and a question.

    Is a PhD dissertation legally available for public viewing, or was it Ren Ng who made it available on the net?
  • ziggy53 Super Moderators Posts: 23,815 moderator
    edited June 27, 2011
    NeilL wrote: »
    Ziggy & Richard

    On the general level of course I agree! And bring on the future!

    However, when you bite into the nitty-gritty of Ren Ng's thesis you might find, I think, as I have done, that his hardware technology achieves no more in providing data for a sensor than a pinhole camera, and is unable to handle actual light; e.g. on each of those plenoptic microlenses there will be all kinds of aberrations which, along with various compromises, will make the data unusable by the sensor for producing a competitive image. Then there is the issue of him applying his algorithms not to real light data, but to bits in a graphics program, to illustrate his process. Then there is the contradiction about where the blur in his images comes from, since there is apparently no blur information coming from the plenoptic microlenses. He says that the adjustments will be produced synthetically. Well, we already have software to create DOF blur. By contrast, aren't holograms the result of analysis, that is, of real data rather than manufactured data?

    His images remain to be seen. What they look like is already shudderingly predictable.

    Neil

    Lytro has a Facebook page. Perhaps you could ask questions there?
    ziggy53
    Moderator of the Cameras and Accessories forums
  • DoctorIt Administrators Posts: 11,951 moderator
    edited June 28, 2011
    david-low wrote: »
    Just curious, and a question.

    Is a PhD dissertation legally available for public viewing, or was it Ren Ng who made it available on the net?
    Dissertations accepted at any respectable university are published material, no different from a textbook. They certainly aren't as widely printed or circulated, but there is a hardcopy on a shelf at the Stanford library. These days, the electronic versions are obviously the more frequently "checked out". And yes, those are for public consumption.

    If you're getting at proprietary material, that is solely up to the author. Anything you "publish", be it a dissertation or an article in a peer-reviewed journal, is considered disclosed, and therefore no longer proprietary. In general, graduate students working on proprietary technology patent first, then publish. I'm sure Stanford's tech-transfer office took care of this.
    Erik
    moderator of: The Flea Market [ guidelines ]


  • Manfr3d Registered Users Posts: 2,008 Major grins
    edited June 29, 2011
    Anyone taking bets on how many colors Pentax is going to offer this camera in? :D
    “To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
    ― Edward Weston
  • DoctorIt Administrators Posts: 11,951 moderator
    edited June 29, 2011
    Manfr3d wrote: »
    Anyone taking bets on how many colors Pentax is going to offer this camera in? :D

    That is a fun new trend, eh? I just noticed Canon's in on the game now too!
    Erik
    moderator of: The Flea Market [ guidelines ]


  • Manfr3d Registered Users Posts: 2,008 Major grins
    edited June 29, 2011
    DoctorIt wrote: »
    That is a fun new trend, eh? I just noticed Canon's in on the game now too!

    Indeed. While I love Pentax (had an ME Super back in the day), I find their marketing absolutely hilarious... I mean, explain something like this to a sober person. If only Pentax had a FF DSLR and continued some of their legendary FA glass.
    “To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
    ― Edward Weston
  • ThatCanonGuy Registered Users Posts: 1,778 Major grins
    edited June 29, 2011
    I never thought I'd want a Rebel XS/T3... but the geeky side of me so wants a red Canon DSLR :).

    So many people criticize Pentax, and I agree with them about the Q. It doesn't seem to make much sense right now. BUT... the 645D, K-5, and K-7 are really good cameras. Add to that, the K-x and K-r (similar to Canon's xxD bodies) have pretty solid feature sets, which, along with the color options, help them sell.
  • Manfr3d Registered Users Posts: 2,008 Major grins
    edited June 29, 2011
    ThatCanonGuy wrote: »
    I never thought I'd want a Rebel XS/T3... but the geeky side of me so wants a red Canon DSLR :).

    So many people criticize Pentax, and I agree with them about the Q. It doesn't seem to make much sense right now. BUT... the 645D, K-5, and K-7 are really good cameras. Add to that, the K-x and K-r (similar to Canon's xxD bodies) have pretty solid feature sets, which, along with the color options, help them sell.

    The 645D seems to be amazing, but I wonder if they will keep the crop factor on it and produce a bunch of crop-factor 645D lenses instead of 645 full-frame lenses and a full-frame 645D camera. They did the same thing to the 35mm format... all those DFA lenses at the price of a full-frame lens. One has to wonder if they abandoned producing a full-frame 35mm camera from the start. Pentax could be so great if they did a few things differently (with respect to 35mm), imo.
    “To consult the rules of composition before making a picture is a little like consulting the law of gravitation before going for a walk.”
    ― Edward Weston
  • David_S85 Administrators Posts: 13,189 moderator
    edited July 26, 2011
    My Smugmug
    "You miss 100% of the shots you don't take" - Wayne Gretzky
  • NeilL Registered Users Posts: 4,201 Major grins
    edited July 26, 2011
    David_S85 wrote: »


    How come these examples are working *without* any of the tech and software on my machine??!!

    Neil
    "Snow. Ice. Slow!" "Half-winter. Half-moon. Half-asleep!"

    http://www.behance.net/brosepix
  • Richard Administrators, Vanilla Admin Posts: 19,928 moderator
    edited July 26, 2011
    NeilL wrote: »
    How come these examples are working *without* any of the tech and software on my machine??!!
    Neil
    Looks to me like it's just a demo using Flash to simulate what the camera does. Seems to have just two, maybe three focal planes.