What is a Plenoptic camera?

Plenoptic technology is being developed by Adobe, and it lets you select which part of the image you want to be in focus after the shot is taken. A plenoptic camera contains an array of microlenses between the main lens and the sensor, so it captures a series of small images from slightly different viewpoints. Special software can later extract and combine these views to produce a chosen focal plane and depth of field during post-processing.
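To make the idea concrete, here is a minimal "shift and add" refocusing sketch (a standard light-field technique, not Adobe's actual software): each microlens sub-image is a view from a slightly different viewpoint, and shifting every view in proportion to its viewpoint offset before averaging focuses the composite on a different depth. The function name, the `(u, v)` offset convention, and the `alpha` parameter are illustrative assumptions.

```python
import numpy as np

def refocus(views, alpha):
    """Synthetic refocus by shift-and-add.

    views: dict mapping (u, v) viewpoint offsets to 2-D grayscale
           images of identical shape (one per microlens view).
    alpha: refocus parameter; 0 keeps the captured focal plane,
           other values move it nearer or farther.
    """
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    for (u, v), img in views.items():
        # Shift each view by alpha times its viewpoint offset,
        # then accumulate; objects at the matching depth line up
        # across views and come out sharp, everything else blurs.
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc += shifted
    return acc / len(views)
```

With `alpha = 0` this is just the average of all views, which reproduces the focus the camera captured; sweeping `alpha` re-renders the same exposure focused at different depths.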

Todor Georgiev is one of the Adobe scientists who worked on that project, and on his website I even found a picture of the first plenoptic camera on the market, produced by the German company Raytrix:


This entry was posted in Adobe, Future of Photo Equipment.
  • R.I.P. photography as we understand it at the moment…

  • woble

    So awesome! But it could still be a decade or so before it reaches mainstream.

  • Time to sell my Leica glass.. NOT! 😉

  • Ruslan

It looks like an implementation of 4-year-old research


    I’m not sure it was “developed by Adobe”.

  • muncher
  • John Bowen

    What a monumental waste of time.

    That’s not to say I don’t expect it to sell. If running a minilab has taught me anything, it’s that the hobby (and the profession, sadly) of photography is full of idiots who will buy the latest shiny gadget, whether it will do them a bit of good or not.

  • What will happen with bokeh? Does it mean the introduction of an artificial bokeh as many things in digital photography (grain, filters, etc)?

  • CL

    very cool!

  • kururu

    Now I know why those bugs have many micro eyes
    Bug Eyes technology implemented, bravo

  • Eugene

    It’s cool but I think it will just make photographers lazy.

    • Luddites

      For printed or static work (i.e. 90% of photo viewing at the moment) maybe, but for digital viewing where the viewer gets to choose the focus I’d say the opposite – you need to consider more of the composition and can’t hide it all in a narrow DoF. Plus you should still be showing the ‘home’ of the focus/DoF to give the initial view.

  • Camaman

Wouldn’t taking a number of pics at a fast fps, at different apertures and focus distances, produce a similar effect?
Like 30 pics at say 100fps? And make a dynamic composite..?

    • Shkacas

      I would like to see that in action… 🙂

      • oblet

Surely motion photography using this technique could reduce eye strain by changing focus in response to the user’s own focus – multiple viewers would require polarised screening systems to cope, but then two viewers on the same screen could each view a different focus.

It would mean the director loses some creative control, so the viewer could miss something, but rather than lazy creativity, I think that adds a new challenge for the director – it becomes a little more like stage direction, where the viewer chooses where to look, but the direction/performance leads them to follow the important action.

        All in all; awesomeness.

  • amien

it simply means that small camcorders will be able to provide extremely shallow depth of field & better dynamic range as well.
Then, applications in real cinematography post-prod could be monstrous. (no need for 7 cams for the Matrix effect, for ex)
But yes, it takes out all the challenge & the art of thinking about a nice shot before shooting it. Lazy photography or cinematography = very often, boring results!

  • ZoetMB

    That’s quite amazing. Old technology, new technology, what’s the difference? What matters is that with the processing power now available on newer computers, a practical implementation is becoming possible.

    I’m sure there are drawbacks, like the optical characteristics of each of the “bug eyes”, their small size, etc. But it seems to me that this opens a whole new world of possibilities. Although I thought that about holography also, and that has pretty much gone nowhere.
