The field of the disclosure relates to light-field imaging. More particularly, the disclosure pertains to technologies for correcting aberration induced by the main lens of a camera.
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Conventional image capture devices render a three-dimensional scene onto a two-dimensional sensor. During operation, a conventional capture device captures a two-dimensional (2D) image representing the amount of light that reaches each point on a sensor (or photo-detector) within the device. However, this 2D image contains no information about the directional distribution of the light rays that reach the sensor (which may be referred to as the light-field). Depth, for example, is lost during the acquisition. Thus, a conventional capture device does not store most of the information about the light distribution from the scene.
Light-field capture devices (also referred to as “light-field data acquisition devices”) have been designed to measure a four-dimensional (4D) light-field of the scene by capturing the light from different viewpoints of that scene. Thus, by measuring the amount of light traveling along each beam of light that intersects the sensor, these devices can capture additional optical information (information about the directional distribution of the bundle of light rays) for providing new imaging applications by post-processing. The information acquired/obtained by a light-field capture device is referred to as the light-field data. Light-field capture devices are defined herein as any devices that are capable of capturing light-field data.
Among the several existing groups of light-field capture devices, the “plenoptic device” or “plenoptic camera” embodies a micro-lens array positioned in the image focal field of a main lens, in front of a photo-sensor on which one micro-image per micro-lens is projected. Plenoptic cameras are divided into two types depending on the distance d between the micro-lens array and the sensor. Regarding the “type 1 plenoptic cameras”, this distance d is equal to the micro-lenses' focal length f (as presented in the article “Light-field photography with a hand-held plenoptic camera” by R. Ng et al., CSTR, 2(11), 2005). Regarding the “type 2 plenoptic cameras”, this distance d differs from the micro-lenses' focal length f (as presented in the article “The focused plenoptic camera” by A. Lumsdaine and T. Georgiev, ICCP, 2009). For both type 1 and type 2 plenoptic cameras, the area of the photo-sensor under each micro-lens is referred to as a microimage. For type 1 plenoptic cameras, each microimage depicts a certain area of the captured scene and each pixel of this microimage depicts this certain area as seen from a certain sub-aperture location on the main lens exit pupil. For type 2 plenoptic cameras, adjacent microimages may partially overlap. A pixel located within such overlapping portions may therefore capture light rays refracted at different sub-aperture locations on the main lens exit pupil.
Light-field data processing comprises notably, but is not limited to, generating refocused images of a scene, generating perspective views of a scene, generating depth maps of a scene, generating extended depth of field (EDOF) images, generating stereoscopic images, and/or any combination of these.
It has been observed that light-field data are affected by the aberration induced by the plenoptic camera main lens. Such a light aberration phenomenon is defined as a defect in the image of an object viewed through an optical system (e.g. the main lens of a plenoptic camera) which prevents all the light rays depicting a same object dot from being brought into focus.
In order to compensate for the undesirable effects of the aberration phenomenon, it is well known from the prior art to introduce additional lenses within the optical system. Such additional lenses are designed and placed within the optical system so as to correct the aberration phenomenon generated by the main lens. Nevertheless, the implementation of these solutions has the drawback of significantly increasing the complexity, weight, and thickness of the optical system.
It would hence be desirable to provide an apparatus and a method that show improvements over the background art.
Notably, it would be desirable to provide an apparatus and a method, which would allow correcting the light aberration induced by the main lens of a plenoptic device, while limiting its thickness, weight and complexity.
References in the specification to “one embodiment”, “an embodiment”, or “an example embodiment” indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In one particular embodiment of the technique, a method for correcting aberration affecting light-field data acquired by a sensor of a plenoptic device is disclosed. The method comprises:
In the following description, the expression “aberration” refers to a defect in the image of an object dot viewed through an optical system (e.g. the main lens of a plenoptic camera) which prevents all the light rays depicting a same object dot from being brought into focus. As a consequence, these light rays converge at different focalization distances from the main lens to form images on different focalization planes, or convergence planes, of the plenoptic device sensor. Depending on the nature of the aberration (chromatic and/or geometric) induced by the optical system, light rays focus on different convergence planes when hitting the sensor, as a function of at least one physical and/or geometrical property of the light-field. Such a property is therefore considered as a discrimination criterion associated with the aberration induced by the optical system of the plenoptic device, which translates into the light-field data acquired by its sensor. The distance between two consecutive views of a same object dot is referred to as the “disparity”. Depending on the nature of the aberration (chromatic and/or geometric), this disparity varies as a function of the physical and/or geometrical properties of the light rays captured by the sensor. This disparity variation, referred to as the “disparity dispersion”, reflects within the acquired light-field data the intensity of the aberration induced by the optical system of the plenoptic device.
When implementing the method, the subsets of light-field data are determined as a function of a discrimination criterion associated with the aberration. At least some of these subsets of light-field data are then projected into a two-dimensional picture, also referred to as a “sub-picture”, which features a reduced aberration. Such a projection is performed as a function of both the disparity dispersion of the subsets of light-field data and spatial information about the focalization plane of the corrected picture to be obtained, so that the planes onto which the subsets of light-field data are respectively projected and the focalization plane of the corrected picture are as close as possible, and preferably coincident.
By taking advantage of the intrinsic properties of light-field data acquired by a plenoptic device, this post-capture method relies on a new and inventive approach that allows correcting aberration affecting light-field data after their acquisition and without requiring the implementation, within this plenoptic device, of an aberration-free optical system. Consequently, the thickness, weight and complexity of this optical system can be significantly reduced without impacting the quality of the rendered image obtained after correcting and refocusing the light-field data.
In one embodiment, the aberration induced by the main lens of the plenoptic device is a chromatic aberration, and the subsets of light-field data are determined as a function of the wavelength of the light acquired by the sensor.
According to this embodiment, the light rays getting through the main lens are refracted differently as a function of their respective wavelengths. Thus, the light rays emitted from a same object point of a scene hit the sensor of the plenoptic device at different locations, due to the chromatic aberration induced by the main lens. The wavelength of these light rays is therefore the distinctive physical property that is considered when determining the different subsets of the light-field data. In the following description, the expression “main lens” refers to an optical system which receives light from a scene to be captured in the object field of the optical system, and renders this light through the image field of the optical system. In one embodiment of the disclosure, this main lens only includes a single lens. In another embodiment of the disclosure, the main lens comprises a set of lenses mounted one after the other to refract the light of the scene to be captured into the image field.
A method according to this embodiment allows correcting chromatic aberration affecting light-field data.
In one embodiment, the aberration induced by the main lens of the plenoptic device is astigmatism, and the subsets of light-field data are determined as a function of the radial direction in the sensor plane along which light is captured.
According to this embodiment, the light rays getting through the main lens are refracted differently as a function of their radial direction, which is therefore the distinctive geometrical property that is considered when determining the different subsets of the light field data. A method according to this embodiment allows correcting astigmatism affecting light-field data.
In one embodiment, the method comprises determining, from calibration data of the plenoptic device, the disparity dispersion resulting from the light aberration.
Such calibration data are usually more accurate, and more specific to a given camera, than datasheets reporting the results of tests run by the manufacturer after assembling that camera or any other camera of the same model.
In one embodiment, the method comprises determining the disparity dispersion resulting from the aberration by analyzing a calibration picture.
In this way, the method autonomously determines the aberration affecting the light-field data. Thus, there is no need to provide information about the disparity dispersion other than that included in the light-field data themselves, and no calibration data are needed.
In one embodiment, light-field data are first focused and analyzed taking the green color as a reference.
Green light has the advantage of featuring a high luminance, while being the color to which the human eye is most sensitive. Alternatively, light-field data may also be focused taking another color as a reference.
In one embodiment, the wavelength of the light acquired by the sensor pertains to a color of a Bayer filter.
A method according to this embodiment is adapted to process light-field data acquired from a plenoptic camera embodying a Bayer filter.
In one embodiment, this method comprises determining 3 (three) subsets of light-field data, each of them corresponding to the captured light rays featuring the wavelength of one of the Bayer filter colors (blue, green, red).
It is therefore possible to rebuild all the colors of the visible spectrum when rendering the corrected image.
The method may also be implemented on light-field data acquired from a plenoptic device embodying another type of Color Filter Array, or whose sensor only detects one wavelength. In such embodiments, more or fewer subsets of light-field data may be determined, as a function of the discrimination ability of the plenoptic device sensor.
In one embodiment of the method for correcting aberration, the projecting step is performed for all of the subsets of light-field data.
In one particular embodiment of the technique, an apparatus for correcting aberration affecting light-field data acquired by the sensor of a plenoptic device is disclosed. Such an apparatus comprises a processor configured for:
A person skilled in the art will understand that the advantages mentioned in relation with the method described above also apply to an apparatus that comprises a processor configured for implementing such a method.
The disclosure also pertains to a method of rendering a picture obtained from light-field data acquired by a sensor of a plenoptic device, said method comprising:
In one embodiment of the method of rendering, the projecting step is performed for all of the subsets of light-field data.
The disclosure also pertains to a plenoptic device comprising a sensor for acquiring light-field data and a main lens inducing aberration on said light-field data, wherein it comprises a processor configured for:
Such a plenoptic device is therefore adapted to acquire light-field data and process them in order to display a refocused picture free of aberration. Because the method for correcting aberration is implemented after the acquisition of the light-field data, such a plenoptic camera does not need a main lens adapted to correct the aberration on its own. Thus, the thickness, weight and complexity of the plenoptic device main lens can be significantly reduced without impacting the quality of the rendered image obtained after correcting and refocusing the light-field data.
In one embodiment of the plenoptic device, the projecting step is performed for all of the subsets of light-field data.
In one particular embodiment of the technique, the present disclosure pertains to a computer program product downloadable from a communication network and/or recorded on a medium readable by a computer and/or executable by a processor. Such a computer program product comprises program code instructions for implementing at least one of the methods described here below.
In one particular embodiment of the technique, the present disclosure pertains to a non-transitory computer-readable carrier medium comprising a computer program product recorded thereon and capable of being run by a processor, including program code instructions for implementing at least one of the methods described here below.
While not explicitly described, the present embodiments may be employed in any combination or sub-combination.
The present disclosure can be better understood with reference to the following description and drawings, given by way of example and not limiting the scope of protection, and in which:
The components in the Figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure.
General concepts and specific details of certain embodiments of the disclosure are set forth in the following description and in the accompanying figures.
The invention relies on a new and inventive approach that takes advantage of the intrinsic properties of light-field data acquired by a plenoptic device to allow correcting aberration affecting these light-field data after their acquisition and without requiring the implementation, within this plenoptic device, of an aberration-free optical system. As a consequence, the thickness, weight and complexity of this optical system can be significantly reduced without impacting the quality of a rendered image obtained after correcting and refocusing the light-field.
In one embodiment, spacers are located between the microlens array 3, around each lens, and the sensor 4, to prevent light from one lens from overlapping with the light of other lenses on the sensor side.
The image captured on the sensor 4 is made of a collection of 2D small images arranged within a 2D image. Each small image is produced by the microlens (i, j) from the microlens array 3.
This formulation assumes that the microlens array 3 is arranged following a square lattice. However, the present disclosure is not limited to this lattice and applies equally to hexagonal or even non-regular lattices.
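As an illustration of how such a raw capture is organized, the following is a minimal sketch of extracting the micro-image produced by micro-lens (i, j), assuming a square lattice with an integer micro-image pitch of p pixels and a known offset of the first micro-lens center; the function and parameter names are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def extract_microimage(raw, i, j, p, cx0=0.0, cy0=0.0):
    """Return the p x p micro-image produced by micro-lens (i, j).

    raw      : 2D raw sensor image (grey level, or one color plane)
    p        : micro-image pitch in pixels (assumed integer here)
    cx0, cy0 : offset of the first micro-lens center, in pixels
    """
    x0 = int(round(cx0 + i * p))
    y0 = int(round(cy0 + j * p))
    return raw[y0:y0 + p, x0:x0 + p]
```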
Where r is the number of consecutive micro-lens images in one dimension. An object is visible in r² micro-lens images. Depending on the shape of the micro-lens image, some of the r² views of the object might be invisible.
The distances p and w introduced in the previous sub-section are given in units of pixels. They are converted into physical distances (in meters), respectively P and W, by multiplying them by the pixel size δ:
W=δw and P=δp
These distances depend on the light-field camera features.
The disparity W varies with the distance z of the object or scene point from the main lens (object depth). Mathematically from the thin lens equation:
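In standard notation, with F denoting the main-lens focal length and z′ the distance from the main lens to the conjugate image plane (these symbol names are assumptions, not taken from the original equations), the thin-lens equation reads:

1/z + 1/z′ = 1/F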
And from Thales' law:
From the two preceding equations, a relationship between the disparity W and depth z of the object in the scene may be deduced as follows:
The relation between the disparity W of the corresponding views and the depth z of the object in the scene is determined from geometrical considerations and does not assume that the micro-lens images are in focus.
The disparity of an object which is observed in focus is given as: Wfocus=ϕd/f
In practice micro-lens images may be tuned to be in focus by adjusting parameters d and D according to the thin lens equation:
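One plausible form of this focusing condition, assuming the main lens forms its image at distance z′ behind itself, i.e. at distance D − z′ in front of the micro-lens array (notation assumed here), is:

1/(D − z′) + 1/d = 1/f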
A micro-lens image of an object located at a distance z from the main lens, observed on the photo-sensor, appears in focus as long as the circle of confusion is smaller than the pixel size. In practice, the range [Zm, ZM] of distances z which enables micro-images to be observed in focus is large and may be optimized depending on the focal length f, the apertures of the main lens and of the micro-lenses, and the distances D and d.
From Thales' law, P may be derived:
The ratio e defines the enlargement between the micro-lens pitch and the pitch of the micro-lens images projected on the photosensor. This ratio is typically close to 1 since d is negligible compared to D.
A major property of the light-field camera is the ability to compute 2D re-focused images whose re-focalization distance is freely adjustable. The 4D light-field data are projected into a 2D image by simply shifting and zooming the micro-lens images and then summing them into a 2D image. The amount of shift controls the re-focalization distance. The projection of the 4D light-field pixel (x, y, i, j) into the re-focused 2D image coordinate (X, Y) is defined by:
Where s controls the size of the 2D re-focused image, and g controls the focalization distance of the re-focused image. The equation can be written as follows:
The parameter g can be expressed as a function of p and w. g is the zoom that must be performed on the micro-lens images, using their centers as reference, such that the various zoomed views of the same object get superimposed. One obtains:
The equation becomes:
Image refocusing consists in projecting the light-field pixels L(x, y, i, j) recorded by the sensor into a 2D refocused image of coordinates (X, Y). The projection is performed according to the equation (2). The value of the light-field pixel L(x, y, i, j) is added to the refocused image at coordinate (X, Y). If the projected coordinate is non-integer, the pixel is added using interpolation. To record the number of pixels projected into the refocused image, a weight-map image having the same size as the refocused image is created. This image is initially set to 0. For each light-field pixel projected onto the refocused image, the value of 1.0 is added to the weight-map at the coordinate (X, Y). If interpolation is used, the same interpolation kernel is used for both the refocused and the weight-map images. After all the light-field pixels have been projected, the refocused image is divided pixel by pixel by the weight-map image. This normalization step ensures the brightness consistency of the normalized refocused image.
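The following Python sketch illustrates this shift-and-zoom-and-sum refocusing with the weight-map normalization. It is only an approximation of the scheme described above: nearest-neighbour splatting replaces interpolation, and the function and parameter names (refocus, micro_images, centers, g, s) are assumptions rather than the notation of equation (2).

```python
import numpy as np

def refocus(micro_images, centers, g, s=1.0, out_shape=(600, 600)):
    """Shift-and-zoom-and-sum refocusing of plenoptic data (illustrative sketch).

    micro_images : dict (i, j) -> 2D micro-image (float array)
    centers      : dict (i, j) -> (cx, cy) center of that micro-image on the sensor, in pixels
    g            : zoom applied to each micro-image about its center
                   (controls the focalization distance of the refocused image)
    s            : global scale controlling the size of the refocused image
    """
    refocused = np.zeros(out_shape)
    weights = np.zeros(out_shape)                   # the weight-map described in the text
    for (i, j), micro in micro_images.items():
        cx, cy = centers[(i, j)]
        height, width = micro.shape
        ys, xs = np.mgrid[0:height, 0:width]
        xs = xs - (width - 1) / 2.0                 # pixel offsets from the micro-image center
        ys = ys - (height - 1) / 2.0
        X = s * (g * xs + cx)                       # zoom about the center, then global scale
        Y = s * (g * ys + cy)
        Xi = np.round(X).astype(int)                # nearest-neighbour splat (no interpolation)
        Yi = np.round(Y).astype(int)
        ok = (Xi >= 0) & (Xi < out_shape[1]) & (Yi >= 0) & (Yi < out_shape[0])
        np.add.at(refocused, (Yi[ok], Xi[ok]), micro[ok])
        np.add.at(weights, (Yi[ok], Xi[ok]), 1.0)
    return refocused / np.maximum(weights, 1e-9)    # division by the weight-map
```

Using the same interpolation kernel for the refocused image and for the weight-map, as described above, would amount to splatting fractional weights instead of the hard 1.0 increments used in this sketch.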
As illustrated by
When studying the impact of chromatic aberration in a plenoptic camera system as illustrated by
As illustrated by
When studying the impact of astigmatism in a plenoptic camera system, as illustrated by
After an initialization step, a plurality of data, comprising at least the following, is inputted (step INPUT (S1)):
This step INPUT (S1) may be conducted either automatically or by an operator.
The light-field data (LF) may be inputted in any readable format. Similarly, spatial information about the focalization plane of the picture (Cor_Pict) to be obtained from the light-field data (LF) may be expressed in any spatial reference system. In one embodiment, this focalization plane is determined manually, or semi-manually, following the selection by an operator of one or several objects of interest within the scene depicted by the light-field. In another embodiment, the focalization plane is determined automatically following the detection of objects of interest within the inputted light-field.
In one embodiment of the invention, the disparity dispersion (D(W)) of the plenoptic device main lens 2 is inputted in the form of datasheets listing the variation of disparity W as a function of the wavelength of the light ray captured by the sensor 4, or providing any other information from which such variations can be deduced. In one embodiment, as illustrated by
In another embodiment, the disparity dispersion (D(W)) is determined based on the analysis of a calibration picture, as illustrated by
According to equation (1), for a given focalization distance z, the disparity w is assumed constant. In equation (2) the term w is multiplied linearly by
Since the disparity w is not a linear function when chromatic aberrations are considered, the term w from equation (2) is replaced by wc(i, j), which indicates the shift in pixels (2D coordinates) associated with micro-image (i, j), where c is the color index captured by the sensor and ranges from one to a number Nc (for a Bayer color pattern made of red, green and blue, Nc equals three). The disparity wc(i, j) is the 2D pixel shift to perform to match a detail observed on micro-lens (0, 0) for a reference color with the same detail observed at micro-lens (i, j) for color c. If no chromatic aberrations are considered, then:
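A plausible form of this condition, using the notation above (the exact expression is an assumption): wc(i, j) = w for every color c and every micro-image (i, j).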
Considering the disparity wc(i, j), equation (1) becomes:
For a given focalization distance z, one can estimate an average disparity waverage using a calibration image made of a single dark dot on a white board. That dot is observed on several consecutive micro-images. The distance between two observations of the dark dot in two consecutive horizontal micro-images gives an indication of the average waverage.
For a given focalization distance z one can estimate the disparity wc(i, j) using a calibration image, as for instance a chessboard located at a focalization distance z from the camera, illustrated by
di = wc(i+1, j) − wref(i, j)
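As an illustration of how such per-color disparities wc(i, j) could be estimated from a calibration capture, the following sketch reduces the localization of the calibration detail (a dark dot, or a chessboard corner) to a darkest-pixel search; a real implementation would rely on sub-pixel corner detection. All names (per_color_disparity, micro_images, ref) are illustrative assumptions.

```python
import numpy as np

def per_color_disparity(micro_images, color_planes=("R", "G", "B"), ref="G"):
    """Estimate the per-color disparity w_c(i, j) from a calibration capture (sketch).

    micro_images : dict (i, j) -> dict color -> 2D micro-image of the calibration pattern
    Returns a dict (i, j) -> dict color -> 2D shift (dx, dy) of the calibration detail
    relative to its position in micro-image (0, 0) for the reference color.
    """
    def detail_position(img):
        # crude localization of the calibration detail: darkest pixel of the patch
        y, x = np.unravel_index(np.argmin(img), img.shape)
        return np.array([x, y], dtype=float)

    origin = detail_position(micro_images[(0, 0)][ref])
    w = {}
    for (i, j), planes in micro_images.items():
        w[(i, j)] = {c: detail_position(planes[c]) - origin for c in color_planes}
    return w
```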
Following the step INPUT (S1), a plurality of subsets of light-field data (sub_LF) is determined (step S2), as a function of the wavelength of the captured light rays. In this embodiment, a Bayer Color Filter Array is mounted on the sensor 4 of the plenoptic device 1 used to acquire the processed light-field data (LF). Therefore, 3 (three) subsets of light-field data (sub_LF) are determined (S2), each of them corresponding to the captured light rays featuring the wavelength of one of the Bayer filter colors (blue, green, red). The wavelength of these light rays is therefore the distinctive physical property considered when determining the different subsets of the light-field data. In another embodiment, the method may also be implemented on light-field data acquired from a plenoptic device embodying another type of Color Filter Array, or whose sensor only detects one wavelength. In such embodiments, more or fewer subsets of light-field data (sub_LF) may be determined (S2), as a function of the detection ability of the plenoptic device sensor 4.
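As an illustration of this determination (S2) for a Bayer-filtered sensor, the following sketch splits the raw mosaic into three sparse color planes; the RGGB layout, the function name and the NaN convention for missing samples are all assumptions.

```python
import numpy as np

def split_bayer(raw, pattern="RGGB"):
    """Split a raw Bayer mosaic into three sparse color subsets (illustrative sketch).

    raw     : 2D raw sensor image
    pattern : Bayer pattern of the 2x2 cell, assumed to be "RGGB" here
    Returns a dict color -> image of the same size, with NaN where that color
    was not sampled (no demosaicing is performed at this stage).
    """
    assert pattern == "RGGB", "only the RGGB layout is sketched here"
    subsets = {c: np.full(raw.shape, np.nan) for c in "RGB"}
    subsets["R"][0::2, 0::2] = raw[0::2, 0::2]
    subsets["G"][0::2, 1::2] = raw[0::2, 1::2]
    subsets["G"][1::2, 0::2] = raw[1::2, 0::2]
    subsets["B"][1::2, 1::2] = raw[1::2, 1::2]
    return subsets
```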
At the step Projecting (S3), at least some of the determined subsets of light-field data (sub_LF) are selected. Each of these selected subsets of light-field data (sub_LF) is then projected into a respective two-dimensional sub-picture (sub_Pict) with corrected chromatic aberrations, using the equation (2) described here before in paragraph 5.1.4. In particular, when applying this equation (2), the focalization distance of the sub-picture, controlled by g, is determined as a function of the focalization distance of the corrected picture (Cor_Pict) to be obtained, so that these two focalization distances are as close as possible, and preferably equal to each other. In parallel, the disparity w to apply in the equation (2) is determined as a function of the disparity dispersion D(W) of the subsets of light-field data (sub_LF), as described in paragraph 5.2.1.
Following the projection step (S3), the 3 (three) sub-pictures (sub_Pict) are summed (S4), thereby producing a colored two-dimensional picture (Cor_Pict). In a preferential embodiment of the invention, each of the two-dimensional sub-pictures (sub_Pict) is included in the focalization plane of the two-dimensional picture (Cor_Pict) to be obtained. Thus, this colored picture (Cor_Pict) is free of chromatic aberration, since all the light rays of the light-field converge onto the focalization plane of the corrected picture (Cor_Pict) regardless of their wavelength.
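Steps S3 and S4 can be summarized by the following sketch, which reuses the refocus() function outlined earlier. It assumes that each color subset has already been reorganized into per-micro-image patches (with missing Bayer samples filled or masked) and that the per-color zoom factors have been derived from the disparity dispersion so that every sub-picture falls on, or as close as possible to, the focalization plane of the corrected picture; all names are assumptions.

```python
def correct_chromatic_aberration(subsets, centers, g_per_color):
    """Project each color subset into its own sub-picture, then sum them (steps S3 and S4).

    subsets     : dict color -> dict (i, j) -> micro-image for that color
    centers     : dict (i, j) -> micro-image center on the sensor, in pixels
    g_per_color : dict color -> zoom factor compensating that color's disparity
    """
    sub_pictures = [refocus(subsets[c], centers, g=g_per_color[c]) for c in subsets]
    return sum(sub_pictures)
```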
In another embodiment of the invention, at least one of the sub-pictures (sub_Pict) is misaligned with the focalization plane of the corrected picture (Cor_Pict). As a consequence, a chromatic aberration of the color depicted by said sub-picture (sub_Pict) might remain on the corrected picture (Cor_Pict), the intensity of the aberration decreasing as the sub-picture (sub_Pict) gets closer to the focalization plane of the corrected picture (Cor_Pict).
In another embodiment of the disclosure, the plenoptic camera does not comprise a color filter array (such as a Bayer filter) that would require a demosaicing method for generating monochromatic subsets of light-field data. For example, in one embodiment, the plenoptic camera uses an array of Foveon X3 sensors (such sensors are described in the article entitled “Comparison of color demosaicing methods” by Olivier Losson et al.), or other sensors able to record red, green, and blue light at each point in an image during a single exposure. In such a case, no demosaicing method for generating monochromatic subsets of light-field data is implemented.
The method for correcting chromatic aberration, as described here above in paragraph 5.2, may be applied to the correction of geometrical aberration, and especially astigmatism, with the only differences being that:
In one embodiment, 16 (sixteen) subsets of light-field data (sub_LF) are determined (S2), based on the assumption that only the microimages 5 located in a close neighborhood of the central microimage are affected by astigmatism.
Nevertheless, in other embodiments, fewer or more subsets of light-field data (for example, 8 (eight) or 32 (thirty-two)) may be projected, depending on the desired accuracy of the astigmatism correction and on the available computation resources.
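As an illustration of the discrimination criterion used for astigmatism, the following sketch assigns a sensor pixel to one of the angular subsets according to its radial direction around the optical axis; the position of the optical axis and the number of sectors are assumed inputs.

```python
import numpy as np

def radial_sector(x, y, cx, cy, n_sectors=16):
    """Assign a sensor pixel to one of n_sectors angular sectors around the optical axis.

    (x, y)   : pixel coordinates on the sensor
    (cx, cy) : assumed position of the optical axis on the sensor, in pixels
    """
    angle = np.arctan2(y - cy, x - cx) % (2 * np.pi)   # radial direction in [0, 2*pi)
    return int(angle // (2 * np.pi / n_sectors))
```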
In one embodiment, the method comprises interpolating the subsets of light-field data (sub_LF) in order to rebuild the missing data.
In one embodiment, the method comprises rendering (S5) the corrected picture (Cor_Pict).
The processor 7 controls operations of the apparatus 6. The storage unit 8 stores at least one program to be executed by the processor 7, and various data, including light-field data, parameters used by computations performed by the processor 7, intermediate data of computations performed by the processor 7, and so on. The processor 7 may be formed by any known and suitable hardware, or software, or by a combination of hardware and software. For example, the processor 7 may be formed by dedicated hardware such as a processing circuit, or by a programmable processing unit such as a CPU (Central Processing Unit) that executes a program stored in a memory thereof.
The storage unit 8 may be formed by any suitable storage or means capable of storing the program, data, or the like in a computer-readable manner. Examples of the storage unit 8 include non-transitory computer-readable storage media such as semiconductor memory devices, and magnetic, optical, or magneto-optical recording media loaded into a read and write unit. The program causes the processor 7 to perform a process for correcting aberration affecting light-field data, according to an embodiment of the present disclosure as described above with reference to
The interface unit 9 provides an interface between the apparatus 6 and an external apparatus. The interface unit 9 may be in communication with the external apparatus via cable or wireless communication. In this embodiment, the external apparatus may be a plenoptic camera 1. In this case, light-field data can be input from the plenoptic camera 1 to the apparatus 6 through the interface unit 9, and then stored in the storage unit 8.
The apparatus 6 and the plenoptic camera 1 may communicate with each other via cable or wireless communication.
Alternatively, the apparatus 6 may be integrated into a plenoptic camera 1 comprising a display for displaying the corrected picture (Cor_Pict).
Although only one processor 7 is shown on
These modules may also be embodied in several processors communicating and co-operating with each other.
As will be appreciated by one skilled in the art, aspects of the present principles can be embodied as a system, method or computer readable medium. Accordingly, aspects of the present principles can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, and so forth), or an embodiment combining software and hardware aspects.
When the present principles are implemented by one or several hardware components, it can be noted that a hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor. Moreover, the hardware component can also comprise a baseband processor (comprising for example memory units, and a firmware) and/or radio electronic circuits (that can comprise antennas), which receive or transmit radio signals. In one embodiment, the hardware component is compliant with one or more standards such as ISO/IEC 18092/ECMA-340, ISO/IEC 21481/ECMA-352, GSMA, StoLPaN, ETSI/SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element). In a variant, the hardware component is a Radio-frequency identification (RFID) tag. In one embodiment, a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-Fi communications, and/or Zigbee communications, and/or USB communications and/or Firewire communications and/or NFC (Near Field Communication) communications.
Furthermore, aspects of the present principles can take the form of a computer readable storage medium. Any combination of one or more computer readable storage medium(s) may be utilized.
Thus for example, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable storage media and so executed by a computer or a processor, whether or not such computer or processor is explicitly shown.
Although the present disclosure has been described with reference to one or more examples, a skilled person will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.