The present application relates to methods and devices involving image processing. In particular, some embodiments relate to enhancing a three-dimensional appearance of a two-dimensional image.
With the development of image sensors, digital photography, i.e. the digital capturing of images, has become more and more popular and has, at least in the consumer sector, largely replaced analog photography using film. The possibility of capturing digital images is not only provided by dedicated camera equipment; digital cameras are also integrated in many mobile devices, for example mobile phones, laptop computers, tablet PCs or mobile gaming devices. Digital images give rise to the possibility of digital image processing, i.e. modifying captured images. Image processing techniques commonly include, e.g., white balance adjustment or sharpening of images.
Furthermore, in recent years three-dimensional imaging has become more and more popular. For three-dimensional images, two images of the same scene with different viewing angles are captured, and then the “three-dimensional picture” may be viewed with special viewing devices, for example headsets involving polarizers or shutters. However, still most viewing devices are only adapted for displaying two-dimensional images, e.g. simple display screens.
It would therefore be desirable to also enhance a three-dimensional appearance of two-dimensional images, or, in other words, to provide possibilities for adding or enhancing a three-dimensional impression also in conventional two-dimensional images.
According to an embodiment, a method as defined in claim 1 is provided. According to a further embodiment, a device as defined in claim 11 is provided. The dependent claims define further embodiments.
According to an embodiment, a method is provided, comprising:
According to an embodiment, modifying colors of the at least one intermediate image may comprise reducing colors of portions of the at least one intermediate image further away from a viewer relative to the colors of portions of the at least one intermediate image closer to a viewer.
According to an embodiment, modifying the colors may comprise enhancing colors of portions of the at least one intermediate image closer to a viewer relative to colors of portions of the at least one intermediate image farther away from a viewer.
According to an embodiment, providing depth information of the scene may comprise scanning the scene with a depth scanner.
According to an embodiment, providing at least one intermediate image of the scene and providing depth information of the scene may comprise capturing at least two intermediate images of the scene with different focus distances, the depth information comprising the focus distances.
According to an embodiment, providing the final image may comprise combining the at least two intermediate images with modified colors.
According to an embodiment, combining the at least two intermediate images may comprise focus stacking.
According to an embodiment, modifying the colors may comprise reducing the colors of an intermediate image of the at least two intermediate images with a greater focus distance relative to the colors of an intermediate image of the at least two intermediate images with a smaller focus distance.
According to an embodiment, modifying the colors may comprise enhancing the colors of an intermediate image of the at least two intermediate images with a greater focus distance relative to the colors of an intermediate image of the at least two intermediate images with a smaller focus distance.
According to an embodiment, capturing the at least two intermediate images may comprise capturing the at least two intermediate images with at least two different cameras (22, 23).
According to a further embodiment, a device is provided, comprising:
According to an embodiment, the device may further comprise a depth scanner configured to provide said depth information.
According to an embodiment, the device may be configured to capture at least two intermediate images of the scene with said camera with different focus distances, the depth information comprising the focus distances.
According to an embodiment, the device may be selected from the group consisting of a mobile phone, a digital camera, a laptop computer, a tablet PC, and a gaming device.
The device, in particular the processor unit thereof, may be configured to execute any of the above-explained methods, for example by programming the processor unit accordingly.
The above-described embodiments may be combined with each other unless noted otherwise.
In some embodiments, through modifying the colors a three-dimensional appearance of the final image may be enhanced.
Non-limiting embodiments of the invention will be described with reference to the attached drawings, wherein:
In the following, embodiments of the present invention will be described with reference to the attached drawings. It should be noted that these embodiments are merely given to illustrate possibilities for implementing the present invention and are not to be construed as limiting. Features of different embodiments described may be combined with each other unless specifically noted otherwise. On the other hand, describing an embodiment with a plurality of features is not to be construed as indicating that all those features are necessary for practicing the invention, as other embodiments may comprise fewer features or alternative features.
In general, embodiments described in the following relate to capturing an image. Capturing images may comprise capturing still images, capturing movies (which amount to a quick succession of images), or both.
Usually, digital cameras are used for capturing images, although images may also be obtained from other sources like film scanning. Digital cameras, as known in the art, comprise optics, in particular comprising lenses, for focusing light on an image sensor, which image sensor then captures the image. Image sensors may comprise CCD (Charge Coupled Device) sensors or CMOS sensors, both of which may have a color filter placed in front of the sensor to be able to capture colored images, or may also comprise image sensors having multiple layers for capturing different colors. The optic provided may be a fixed focus or a variable focus optic. Fixed focus optics have a fixed focus plane, which corresponds to the plane in the scene that appears “sharpest” in the image, while with variable focus optics the focus may be adjusted between different distances. The distance between the camera and the focus plane is referred to as the focus distance in the following. It should be noted that this term is not to be confused with the terms focal length or focal plane, which also depend on the optic used and which determine the angle of view of the optic and therefore of the camera. The optic may have a fixed focal length, for example be a so-called prime lens, or may also have a variable focal length, i.e. may comprise a so-called zoom lens.
Embodiments described in the following relate to modifying colors of images. This is to be construed as covering not only the modification of colors of colored images, but also the modification of colors of monochrome images, for example the greyscales of black-and-white images.
Turning now to the Figures, in
In the method of
At 11, depth information for the scene is provided. For example, information regarding distances between a viewer and certain portions of the scene may be provided. In some embodiments, as will also be explained further below, depth information may be obtained by a depth analyzing device, for example an infrared scanning device. In other embodiments, where two or more images are captured with different focus distances, the depth information may comprise or consist of the different focus distances, the focus distances indicating the distances between a viewer and a focus plane of the respective intermediate image.
As can be seen from the example where the focus distance is at least part of the depth information, the actions at 10 and 11 may be performed simultaneously, or consecutively in any desired order. For example, the depth information may be provided before or after providing the at least one intermediate image.
At 12, colors of the at least one intermediate image are modified based on the depth information. For example, in case the at least one intermediate image comprises a single image, portions of the image which according to the depth information are farther away from a viewer may have their color reduced, for example by decreasing a color intensity or a brightness, and/or portions of the image closer to a viewer may have their color enhanced, for example by enhancing the color intensity and/or enhancing the brightness. Through such a modification, in some embodiments a three-dimensional appearance may be created, as it corresponds to natural vision to perceive things farther away with less vivid colors.
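The depth-dependent color modification described at 12 may be sketched as follows. This is a minimal illustrative sketch, not a claimed implementation; the function name, the choice of a normalized per-pixel depth map, and the linear attenuation of both color vividness and brightness are assumptions introduced here for illustration only:

```python
import numpy as np

def modify_colors_by_depth(image, depth, max_reduction=0.6):
    """Reduce the colors of portions of an image that are farther from a viewer.

    image: H x W x 3 float array with values in [0, 1]
    depth: H x W float array, larger values meaning farther from the viewer
    max_reduction: fraction by which the farthest portions are attenuated
    """
    # Normalize depth to [0, 1]: 0 = closest portion, 1 = farthest portion.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-9)
    # Attenuation factor per pixel: 1.0 for the closest portions,
    # (1 - max_reduction) for the farthest portions.
    factor = (1.0 - max_reduction * d)[..., None]
    # Reduce color vividness by blending each pixel toward its grey value,
    # then reduce brightness, both weighted by distance.
    grey = image.mean(axis=2, keepdims=True)
    desaturated = grey + (image - grey) * factor
    return np.clip(desaturated * factor, 0.0, 1.0)
```

Enhancing the colors of closer portions instead of (or in addition to) reducing those of farther portions could be sketched analogously with factors above 1.0 for small depth values.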
In case the at least one intermediate image comprises a plurality of images, intermediate images with a greater focus distance may have their color reduced, and/or intermediate images with a smaller focus distance may have their color enhanced. The above approaches may also be combined, for example in cases where more than one intermediate image of a scene is taken and the depth information comprises both the focus distances and depth information provided by a further source, like an IR scanner.
Finally, at 13 a final image is provided based on the at least one intermediate image with modified colors. In case only one intermediate image is used, the final image may be identical to the at least one intermediate image with modified colors, or some image processing may be applied, for example a sharpening algorithm. In case the at least one intermediate image comprises two or more intermediate images captured at different focus distances, the final image may be based on a combination of the intermediate images. In particular, in some embodiments, the intermediate images may be combined with a technique known as focus stacking, a conventional technique for combining images taken at different focus distances which is conventionally used to provide a resulting image with a greater depth of field. Also in this case, when the plurality of intermediate images is combined with the colors modified as explained above, i.e. with the colors of images with greater focus distances reduced compared to the colors of images with smaller focus distances, a three-dimensional appearance of the final image may be enhanced. It should be noted that also in this case further conventional image processing techniques, like sharpening, may be applied in addition to the combination via focus stacking.
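The combination at 13 of color-modified intermediate images via focus stacking may be sketched as follows. This is an illustrative assumption: selecting, per pixel, the source image with the strongest local Laplacian response is one common focus-stacking heuristic, not the only possibility, and the linear color attenuation over the focus-distance order is likewise assumed for illustration:

```python
import numpy as np

def stack_with_depth_colors(images, focus_distances, max_reduction=0.5):
    """Combine intermediate images by simple focus stacking, after reducing
    the colors of images captured at greater focus distances.

    images: list of H x W x 3 float arrays with values in [0, 1]
    focus_distances: one focus distance per image
    """
    order = np.argsort(focus_distances)
    n = len(images)
    modified, sharpness = [], []
    for rank, i in enumerate(order):
        # Attenuate colors more strongly for greater focus distances.
        factor = 1.0 - max_reduction * (rank / max(n - 1, 1))
        modified.append(np.clip(images[i] * factor, 0.0, 1.0))
        # Per-pixel sharpness: magnitude of a discrete Laplacian of the
        # grey-value image (in-focus regions show strong local contrast).
        g = images[i].mean(axis=2)
        lap = np.abs(4.0 * g
                     - np.roll(g, 1, axis=0) - np.roll(g, -1, axis=0)
                     - np.roll(g, 1, axis=1) - np.roll(g, -1, axis=1))
        sharpness.append(lap)
    # For each pixel, take the color-modified image that is sharpest there.
    best = np.argmax(np.stack(sharpness), axis=0)
    stacked = np.stack(modified)  # n x H x W x 3
    rows = np.arange(best.shape[0])[:, None]
    cols = np.arange(best.shape[1])[None, :]
    return stacked[best, rows, cols]
</imports>

Production focus stacking would typically also smooth the per-pixel selection map and align the images first; both steps are omitted here for brevity.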
Embodiments of devices in which the method of
In
The device of
As a simple example of a scene, in
It should be noted that the focus planes 29, 27 and the focus distances 210, 28 shown in
Images captured by first camera 22 and second camera 23 are examples for intermediate images of the embodiment of
First camera 22 and second camera 23 are coupled with a processor unit 21. Processor unit 21 may comprise one or more microprocessors like general purpose microprocessors or digital signal processors configured, for example programmed, to process images captured by first camera 22 and second camera 23. Processor unit 21 is also coupled to a storage 24, for example a random access memory (RAM), a flash memory, a solid state disk, and/or a rewritable optical medium, and may store images captured by first camera 22 and second camera 23 in storage 24.
Processor unit 21 in the embodiment of
It should be noted that the device 20 shown in
A further device according to an embodiment is shown in
This may be achieved by reducing the colors of the portions farther away from the viewer, by enhancing the colors of the portions closer to the viewer, or both. Different distances or different zones of distances may be assigned different color enhancements/reductions. The thus modified image, possibly together with the original image captured, may be stored in storage 34.
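The assignment of different color enhancements/reductions to different zones of distances may be sketched as follows. The zone boundaries and factors below are purely illustrative assumptions; factors above 1.0 enhance the colors of near zones, factors below 1.0 reduce those of far zones:

```python
def zone_color_factor(distance,
                      zones=((0.0, 2.0, 1.2),     # near zone: enhance colors
                             (2.0, 5.0, 1.0),     # middle zone: unchanged
                             (5.0, float("inf"), 0.7))):  # far zone: reduce
    """Map a viewer distance to a color scaling factor by distance zone.

    zones: (min_distance, max_distance, factor) triples; the boundaries
    and factors given here are illustrative assumptions only.
    """
    for lo, hi, factor in zones:
        if lo <= distance < hi:
            return factor
    return 1.0  # outside all zones: leave colors unchanged
```

Such a per-zone factor could then be applied pixel-wise to the intensity or saturation of the captured image, based on the depth information for each portion.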
It should be noted that mobile devices 20 and 30 of
As already explained above, a plurality of variations and combinations are available with the above-described embodiments, which therefore are not to be construed as limiting the scope of the present application in any way.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/EP11/02674 | 5/30/2011 | WO | 00 | 5/25/2012