Startup company Lytro promises to redefine photography with computational imaging

I just came across a startup called Lytro that is trying to remove "the physical constraints of camera and lens systems" by using computational photography:

"A Lytro camera can do things that have been considered impossible since the invention of photography. The ability to focus a picture after you take the shot is one striking example, and only the tip of the iceberg. Computational cameras will make photography dramatically simpler, higher performance, and much more fun."

You can play with the provided simulation to get an idea of what they are trying to achieve.


Adobe is working on a similar technology, a plenoptic camera with a microlens array that lets you select which part of the photo you want in focus. A plenoptic camera is already being produced by a German company. It seems that Lytro's solution is software-based only and does not require any special hardware.
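
To get an intuition for how focusing after the shot can work at all, here is a minimal sketch of the classic shift-and-add refocusing idea used with microlens (plenoptic) data. This is a toy illustration only, not Lytro's or Adobe's actual pipeline: each sub-aperture view is shifted in proportion to its position within the aperture and the shifted views are averaged, which moves the synthetic focal plane.

    # Toy shift-and-add refocusing over a 4D light field (illustrative only;
    # not Lytro's or Adobe's actual pipeline). views[u, v] is the grayscale
    # sub-aperture image seen through aperture position (u, v).
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(views, slope):
        U, V, H, W = views.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Translate each view in proportion to its aperture offset,
                # then average; the slope picks the synthetic focal plane.
                dy, dx = slope * (u - cu), slope * (v - cv)
                out += nd_shift(views[u, v].astype(float), (dy, dx), order=1)
        return out / (U * V)

    # Example with synthetic data: a 5x5 grid of 64x64 sub-aperture views.
    lf = np.random.rand(5, 5, 64, 64)
    plane_a = refocus(lf, slope=1.5)   # one synthetic focal plane
    plane_b = refocus(lf, slope=-1.5)  # another, on the opposite side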

  • Huggs

    Looks interesting! I can’t wait to see more.

  • Nathan

    By physical constraints they mean pixels, resolution, and ease of use.

    Yes, certainly it’s better to have a camera that is low in resolvable detail and has poor low-light performance, because look, you can focus AFTER you take the shot! Why, it’s foolproof! You can’t miss focus!

    Once you can get good focus, the plenoptic camera has almost no discernible value for photography. It can be good at creating 3D models.

  • Nobody Special

    Not like ‘traditional photography’ then, is it? I mean, it must have the ability to focus? Is it an actual camera or something that resembles one? Would there be interchangeable lenses, etc.?

    Perhaps in the ‘future’ the traditions of lens design for ‘bokeh’ and such will be replaced by an image processor? Will the ‘image’ be completely in focus and then we would select the focus zones strictly by computer? I’d have to see more of what this all means. I still like using a camera – both film and digital. I’m not sure I would want to give up a ‘traditional’ camera.

  • SSYang

    Computational photography can also be done with a coded aperture: a physical mask is placed in front of the aperture to block some of the light in a predefined pattern. An image captured by such a device is blurred by the modified optics, but because the deconvolution kernel is known in advance, the image can be restored. The EDOF cameras on Nokia phones use similar technology. One of the biggest drawbacks of this approach is that some light is actually blocked, so high-ISO performance is likely to be degraded.
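
    For illustration, here is a toy sketch of the restoration step described above, assuming the blur kernel is known in advance (a simple Wiener-style deconvolution; not Nokia's or any coded-aperture vendor's actual algorithm):

        # Toy deconvolution with a known kernel, as in coded-aperture imaging:
        # the capture is blurred, but because the kernel is known the scene can
        # largely be restored. Illustrative only.
        import numpy as np

        def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
            # Wiener filter: conj(K) * B / (|K|^2 + noise term)
            K = np.fft.fft2(kernel, s=blurred.shape)
            B = np.fft.fft2(blurred)
            return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + noise_power)))

        # Simulate a capture: a random "scene" convolved with a known 9x9 code.
        rng = np.random.default_rng(0)
        scene = rng.random((128, 128))
        kernel = np.zeros((128, 128))
        kernel[:9, :9] = 1.0 / 81.0                    # the known blur kernel
        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(kernel)))
        restored = wiener_deconvolve(blurred, kernel)  # ~scene, up to a small shift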

    • antonio

      Nokia has done a great and innovative job with their current generation of EDoF cameras; it’s a pity that many people reject this technology outright because they associate it with “cheap fixed-focus lenses”. The technology (and the concept of a very large depth of field) has some drawbacks and is not suited to everyone or to every situation, but it brings an interesting perspective!

  • http://twitter.com/#!/ZDP189 ZDP-189

    I gave this concept a bit of thought a couple of years ago and took it to an optical engineer, who was floored. I hadn’t seen it at the time, but it seems to me that the Adobe microlens system would lose a lot of resolution or sensitivity, though it makes more sense than my old idea. Yet another idea is to use a slightly smaller imaging sensor area and read the peripheral data to compute a depth map, so the imaging area forms a large depth of field image, and the depth map from the excluded periphery is recorded with the RAW data and can later be used to fake a certain focal point and depth of field.

    • http://www.flickr.com/genotypewriter genotypewriter

      “Yet another idea is to use a slightly smaller imaging sensor area and read the peripheral data to compute a depth map, so the imaging area forms a large depth of field image,”

      There are some problems with this idea:

      1. Depth-recording deep photosites will result in a honeycomb light-modifier effect in reverse. Light rays coming in at angles will get lost or will bleed into neighboring photosites through the walls, so you’ll get severe vignetting. Even current sensors are too deep, because light coming in at angles gets lost… so a sensor that relies on photosites being deep is more of a trap than a solution.

      2. Photosites will need to have their densities gradually vary with depth so that photons are captured at an equal rate along the depth (this is irrespective of the honeycomb effect mentioned above). Otherwise the front, most exposed part of the photosite will have more photons travelling through (and being captured) than the back, resulting in higher sensitivity towards the front. So when you’re adjusting focus in post, the noise levels will also vary on the fly.

      3. The amount by which you can change focus will become more limited as the focal length increases.

      4. To obtain the same level of sensitivity that we get from FF (36×24mm) sensors today, you’ll need to increase the sensor size drastically, because you’re only looking at the photons that fall on a “thin slice” of the sensor. So if you assume there are 50 depth layers (which would make the focus changes very coarse compared to the continuous focus changes of Adobe’s microlens demos) that capture photons equally, you’ll need to make the sensor 10×6.6… inches (a rough check of this arithmetic follows below)! I’d say forget that… just make a normal sensor that big and its ISO5000 will be like ISO100 on FF cameras now! :)

      I’m sure there are many other issues…
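
      For what it is worth, the sensor-size arithmetic in point 4 is easy to check under the stated assumption of 50 equal layers; a quick back-of-the-envelope calculation:

          # Check of the figure in point 4: if 50 depth layers each capture an
          # equal share of the photons, one layer needs ~50x the area of a
          # 36x24 mm full-frame sensor to match its sensitivity.
          layers = 50
          ff_w_mm, ff_h_mm = 36.0, 24.0

          scale = layers ** 0.5           # 50x the area means sqrt(50) ~ 7.07x per side
          print(ff_w_mm * scale / 25.4,   # ~10.0 inches wide
                ff_h_mm * scale / 25.4)   # ~6.7 inches tall

          # The same 50x light-gathering factor is roughly what separates
          # ISO 5000 on such a sensor from ISO 100 on today's full frame.
          print(100 * layers)             # 5000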

  • asdasd

    Once this catches on (lol), Sony will come out with cameras that shoot a 50fps burst through the whole focus range as a kind of AF bracketing.

    Actually, that’s a feature I would like to see in pro cameras…

  • http://twitter.com/#!/ZDP189 ZDP-189

    oh check this out: http://nikonrumors.com/2011/05/09/review-of-the-latest-nikon-patents.aspx

    Nikon’s Japanese patent number 2009-153736 covers enhancing the apparent depth-of-field effect on small-sensor cameras. The camera snaps two pictures in quick succession, one with shallow DoF and the other with deep DoF, and uses the comparison to push the out-of-focus parts even further out of focus with a Gaussian blur.

    This was basically the same as my original idea referred to above, except I used the comparison to create a full depth of field mask that could be used to put the area of focus anywhere I wanted it. The cat’s out of the bag I guess; if not it’s in the public domain as of this post!
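
    A rough sketch of the two-exposure comparison described above (an illustration only, not the patent's actual method): measure local sharpness in both frames, treat the regions where the shallow-DoF frame lost detail as background, and push those regions further out of focus with a Gaussian blur.

        # Rough sketch of the two-shot idea: compare a deep-DoF and a shallow-DoF
        # frame of the same scene, then further blur the regions that went soft
        # in the shallow frame. Illustrative only; not the patent's actual method.
        import numpy as np
        from scipy.ndimage import gaussian_filter, laplace

        def exaggerate_defocus(deep, shallow, sigma=5.0, threshold=0.5):
            # Local sharpness proxy: smoothed magnitude of the Laplacian.
            sharp_deep = gaussian_filter(np.abs(laplace(deep)), 3)
            sharp_shallow = gaussian_filter(np.abs(laplace(shallow)), 3)
            # Where the shallow-DoF frame is much softer, assume we are off the
            # focal plane and build a soft background mask from that ratio.
            ratio = sharp_shallow / (sharp_deep + 1e-6)
            mask = gaussian_filter((ratio < threshold).astype(float), 3)
            # Keep the deep frame in the focus zone, extra-blur everything else.
            return mask * gaussian_filter(deep, sigma) + (1.0 - mask) * deep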

    • http://www.flickr.com/genotypewriter genotypewriter

      Not sure what the vast benefits are in using two images for this purpose. Just like we adjust contrast curves on a single image, it’s possible to do a “blur curve” adjustment using a spatial information content map.

      Plus this patent is for small sensor cameras… and I don’t expect to see many differences in DOF between images except when shooting at very close distances. Then this will lead to focus shifts and the two images will not match perfectly.

      And it’d only work with perfect lenses. All other lenses have softness towards the corners that reduces when stopping down. So an increase in sharpness caused by a reduction of aberrations, rather than by the increase in DOF, is going to be (mis)interpreted by the algorithm as depth information too.

      Also, why just stop at two? Wouldn’t three or four be better if they can be done at the same speed?

      Isn’t it fun filing for patents? :)

      GTW

  • Baris

    The blur looks quite fake to me. Also note the Gaussian-type softness in the transitional areas.

  • Global

    They need a slogan:

    “f/8 is great, but f/22 will do!”

    =P

  • holkle

    How about a second (smaller) image sensor (maybe instead of the autofocus sensor) that is slightly out of focus? The computation could be based on the differences between the outputs of the two sensors.

    • http://twitter.com/#!/ZDP189 ZDP-189

      You mean sample a crop? It wouldn’t work unless you had two lenses. Hey, we’ve just invented the twin-lens non-reflex. LOL

      • Global

        No… I think she/he’s talking about a sensor behind the sensor (or in front of it, but I wouldn’t see the point of that), which would be used to compute natural blurring by recording an actually blurred image.

        Algorithms could merge the actual blur with the actual sharp image with less artificiality, because the result would be based on an actually blurred image.

        Not my idea, just interpreting what I think I’m hearing (unless I am completely wrong in my interpretation, in which case I guess it would be my idea! :-P).

  • Pikemann Urge

    It seems to me that if information is lost, you can’t retrieve it. This smacks of CSI ‘digital enhance’ where detail is miraculously created from an image of limited resolution. Obviously I’m missing something, though.

    Maybe a better idea would be to have large DOF on a 3D camera – and the DOF would be fine-tuned depending on the parallax differences between the left and right images.

    • http://twitter.com/#!/ZDP189 ZDP-189

      The only way to do it is to start with all the data and decide what to blur. It’s a nice gimmick, but at the end of the day you have a small sensor and a small aperture creating the image.

      Multi-sensor arrays have several advantages: (1) they can average out noise and dust; (2) they can do really cool stuff like multi-axis 3D; and (3) 3D modelling, as you were saying.
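
      On the parallax suggestion above: a disparity (depth) map can be estimated from a left/right pair with standard block matching, and that map could then drive the selective blurring. A minimal sketch using OpenCV, with a synthetic stereo pair so it runs standalone:

          # Estimate a disparity map from a stereo pair with OpenCV block matching;
          # larger disparity = closer object, so the map could drive selective blur.
          # A synthetic pair is used here; real captures would replace it.
          import cv2
          import numpy as np

          rng = np.random.default_rng(1)
          left = (rng.random((240, 320)) * 255).astype(np.uint8)
          right = np.roll(left, -8, axis=1)          # fake a uniform 8-pixel parallax

          # numDisparities must be a multiple of 16; blockSize must be odd.
          stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disparity = stereo.compute(left, right)

          # Scale to 0-255 for inspection; nearer regions come out brighter.
          disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)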

      • Global

        Exactly, people seem to think this is “creating sharp focus” where none exists, but that’s not the case at all. It’s taking an extremely in-focus image (f/11-f/32 in full-frame terms) and then selecting where to blur to pretend you were at f/1.4, etc.

        The reason you need to do this (with non-full-frame cameras) is that the smaller the sensor, the more of the image appears in focus. Furthermore, with ISO technology improving and weights decreasing, manufacturers will want smaller, lighter (but not necessarily brighter) lenses. So how do you achieve the look of f/1.8 if you’ve only got what appears to be an f/4 lens at best?

        You computationally “focus,” which really means de-focusing the other parts and perhaps sharpening up the area around the chosen focal point.
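
        A minimal sketch of that selective-defocus step, assuming an all-in-focus image and a per-pixel depth map are already available (an illustration only, not Lytro's or anyone's actual algorithm): pick a focal depth and blur each pixel more the farther its depth sits from that plane.

            # Sketch of computational "focus" as selective defocus: start from a
            # deep-DoF image plus a per-pixel depth map, choose a focal depth,
            # and blur pixels in proportion to their distance from that plane.
            # Illustrative only; image and depth are same-sized float arrays.
            import numpy as np
            from scipy.ndimage import gaussian_filter

            def synthetic_refocus(image, depth, focal_depth, max_sigma=8.0, levels=8):
                # Normalised defocus amount per pixel (0 at the focal plane).
                spread = np.abs(depth - focal_depth)
                spread = spread / (spread.max() + 1e-6)
                # A stack of progressively blurrier copies of the image.
                sigmas = np.linspace(0.0, max_sigma, levels)
                stack = [image] + [gaussian_filter(image, s) for s in sigmas[1:]]
                # For each pixel, take the blur level matching its defocus amount.
                idx = np.clip((spread * (levels - 1)).round().astype(int), 0, levels - 1)
                out = np.zeros_like(image)
                for i, layer in enumerate(stack):
                    out[idx == i] = layer[idx == i]
                return out

            # Example: a gradient "depth map", refocused on the middle distance.
            img = np.random.rand(120, 160)
            dep = np.tile(np.linspace(0.0, 1.0, 160), (120, 1))
            shallow_look = synthetic_refocus(img, dep, focal_depth=0.5)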

        • http://www.flickr.com/genotypewriter genotypewriter

          hehe and the next thing we know is these guys will re-invent smile detection :D

  • Peter

    YES! go for it nerds. Kill everything that is fun about photography. Soon you will be able to send out your robot drone that will photograph all angles, all focal lengths, with extreme dynamic range at 10,000 fps. Then with some post processing magic, you can be a true master of photography. Don’t worry about focus, don’t worry about exposure, don’t even worry about composition or even the moment. It will all be delivered to you, while you sit in the comfort of your own home. You won’t miss a thing.

  • http://www.jeremyrata.com Snaprat

    Peter, I could not agree with you more! Armchair photography for the hard of talent, on its way to a woulda/coulda near you! I give up. I’m going back to film and obscurity (which, by the way, I never emerged from!!)
