A claim of priority under 35 U.S.C. §119 is made to Korean Patent Application No. 10-2012-0019833, filed on Feb. 27, 2012, in the Korean Intellectual Property Office, the entire contents of which are hereby incorporated by reference.
The inventive concept relates to a photographing apparatus, and more particularly, to an apparatus and method for generating depth information and a photographing apparatus including the same.
Generally, image sensors are devices that convert optical signals, including image or distance information, into electrical signals. Image sensors capable of precisely and accurately providing desired information are being actively researched. The research includes three-dimensional (3D) image sensors for providing distance information, as well as image information.
Embodiments of the inventive concept provide an apparatus and a method for generating depth information that is corrected to be more accurate than the depth data provided by a depth sensor. Embodiments also provide a photographing apparatus including such an apparatus for generating depth information.
According to an aspect of the inventive concept, there is provided an apparatus for generating depth information, the apparatus including a sensing unit and a final depth providing unit. The sensing unit is configured to sense light received from multiple subjects, and to provide initial depth data having distance information about the subjects and two-dimensional (2D) image data having 2D image information about an image obtained from the subjects. The final depth providing unit is configured to generate estimated depth data having estimated distance information about the subjects by transforming the 2D image data into three-dimensional (3D) data, and to provide final depth data based on the initial depth data and the estimated depth data.
The final depth providing unit may further divide the image into a first area and a second area, and provide the final depth data by combining the initial depth data of the first area with the estimated depth data of the second area. The first area may include a foreground of at least one main subject from among the multiple subjects. The second area may include a background excluding the at least one main subject from among the multiple subjects.
The final depth providing unit may include a first segmentation unit, a transformation unit, an extraction unit, and a combining unit. The first segmentation unit may be configured to divide the image into multiple segments, and to classify the segments into a first area and a second area based on the initial depth data. The transformation unit may be configured to generate the estimated depth data by transforming the 2D image data into the 3D data. The extraction unit may be configured to extract first data corresponding to the first area from the initial depth data, and to extract second data corresponding to the second area from the estimated depth data. The combining unit may be configured to provide the final depth data by combining the first data with the second data.
The transformation unit may include a second segmentation unit, an indexing unit, and a depth map generating unit. The second segmentation unit may be configured to divide the image into multiple segments based on a depth cue in the image. The indexing unit may be configured to index depths of the segments based on the initial depth data. The depth map generating unit may be configured to generate a depth map from the indexed depths of the segments. The transformation unit may further include an estimated depth providing unit configured to provide the estimated depth data, that is, the 3D data, based on the depth map.
The 2D image data may include at least one of intensity data and color data.
The sensing unit may include a depth sensor configured to generate the initial depth data and intensity data based on reflected light received from the subjects. Alternatively, the sensing unit may include a depth sensor configured to generate the initial depth data and intensity data based on reflected light received from the subjects, and a color sensor configured to generate color data based on visible light received from the subjects. Alternatively, the sensing unit may include a depth/color sensor configured to simultaneously generate the initial depth data, intensity data, and color data based on reflected light and visible light received from the subjects. The sensing unit may include a time-of-flight (ToF) sensor for providing the initial depth data.
According to another aspect of the inventive concept, there is provided a photographing apparatus, including an image sensor and a processor. The image sensor includes a sensing unit and a final depth providing unit. The sensing unit is configured to sense light received from multiple subjects, and to provide initial depth data having distance information about the subjects and two-dimensional (2D) image data having 2D image information about an image obtained from the subjects. The final depth providing unit is configured to generate estimated depth data having estimated distance information about the subjects by transforming the 2D image data into three-dimensional (3D) data, and to provide final depth data based on the initial depth data and the estimated depth data.
The final depth providing unit may be further configured to divide the image into a first area and a second area, and to provide the final depth data by combining the initial depth data of the first area with the estimated depth data of the second area.
The final depth providing unit may include a first segmentation unit, a transformation unit, an extracting unit, and a combining unit. The first segmentation unit may be configured to divide the image into multiple segments, and to classify the segments into a first area and a second area based on the initial depth data. The transformation unit may be configured to generate the estimated depth data by transforming the 2D image data into 3D data. The extracting unit may be configured to extract first data corresponding to the first area from the initial depth data, and to extract second data corresponding to the second area from the estimated depth data. The combining unit may be configured to provide the final depth data by combining the first data with the second data.
The 2D image data may include at least one of intensity data and color data.
According to another aspect of the inventive concept, there is provided a method of generating depth information about multiple subjects in an image. The method includes sensing light received from the subjects at a sensing unit; providing initial depth data and two-dimensional (2D) image data based on the sensed light received from the subjects, the initial depth data including distance information; dividing the image into segments, and classifying the segments into a first area and a second area based on the initial depth data; generating estimated depth data based on the 2D image data; extracting first data corresponding to the first area from the initial depth data; extracting second data corresponding to the second area from the estimated depth data; and combining the first data and the second data to provide final depth data.
The 2D image data may include intensity data, and generating the estimated depth data may include transforming the intensity data into 3D data. The intensity data may include two-dimensional black-and-white image information about the subjects. The 2D image data may include color data from the received light, and generating the estimated depth data may include transforming the color data into 3D data.
The first area and the second area may include a foreground and a background of the image, respectively.
Illustrative embodiments will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
Embodiments will be described in detail with reference to the accompanying drawings. The inventive concept, however, may be embodied in various different forms, and should not be construed as being limited only to the illustrated embodiments. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the concept of the inventive concept to those skilled in the art. Accordingly, known processes, elements, and techniques are not described with respect to some of the embodiments of the inventive concept. Unless otherwise noted, like reference numerals denote like elements throughout the attached drawings and written description, and thus descriptions will not be repeated. In the attached drawings, sizes of structures may be exaggerated for clarity.
As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Also, the term “exemplary” is intended to refer to an example or illustration.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments of the inventive concept. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of exemplary embodiments.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which exemplary embodiments belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring to
The light source 30 generates light EL having a predetermined wavelength, for example, infrared light or near-infrared light, and emits the light EL to the first through third subjects SUB1, SUB2 and SUB3. The light source 30 may be a light-emitting diode (LED) or a laser diode, for example. The light source 30 may be implemented as a device separate from the sensing unit 10, or alternatively, the light source 30 may be implemented such that at least a portion of the light source 30 is included in the sensing unit 10.
The light EL may be controlled by, for example, a control unit (not shown) included in the sensing unit 10, so that its intensity (the number of photons per unit area) changes periodically. For example, the intensity of the light EL may be controlled to have a waveform such as a sine wave, a cosine wave, a pulse wave having continuous pulses, or the like.
The lens unit 40 may include at least one lens (not shown), and concentrate light received from the first through third subjects SUB1, SUB2 and SUB3, particularly including reflected light RL and/or visible light VL, onto light-receiving areas of the sensing unit 10. For example, distance pixels and/or color pixels formed in pixel arrays (not shown) may be included in the sensing unit 10. In various embodiments, the lens unit 40 may include multiple lenses, and the number of lenses may correspond to the number of sensors (not shown) included in the sensing unit 10. In this case, the lenses may be arranged in various shapes on the same plane. For example, the lenses may be aligned in a horizontal direction or a vertical direction, or arranged in a matrix of rows and columns. Alternatively, the lens unit 40 may include one lens and one or more prisms (not shown), and the number of prisms may correspond to the number of sensors included in the sensing unit 10.
Referring to
The sensing unit 10 senses the reflected light RL and/or the visible light VL concentrated by the lens unit 40. In particular, the sensor 11 included in the sensing unit 10 senses the reflected light RL and/or the visible light VL, and provides initial depth data IZD and two-dimensional (2D) image data 2DD.
The initial depth data IZD includes distance information about the first through third subjects SUB1, SUB2 and SUB3, and the 2D image data 2DD includes 2D image information about an image obtained from the first through third subjects SUB1, SUB2 and SUB3. The initial depth data IZD varies according to distances between the sensing unit 10 and the first through third subjects SUB1, SUB2 and SUB3, whereas the 2D image data 2DD is not related to the distances between the sensing unit 10 and the first through third subjects SUB1, SUB2 and SUB3.
A maximum distance that the sensing unit 10 is able to measure may be determined according to a modulation frequency of the light EL emitted by the light source 30. For example, when the modulation frequency of the light EL is 30 MHz, the sensing unit 10 may measure a maximum distance of 5 m from the sensing unit 10. However, at least one of the first through third subjects SUB1, SUB2, and SUB3 may be located beyond the maximum distance of 5 m from the sensing unit 10. For example, a distance between the sensing unit 10 and the first subject SUB1 from among the first through third subjects SUB1, SUB2, and SUB3 may be 7 m, in which case the sensing unit 10 may measure a distance between the first subject SUB1 and the sensing unit 10 as 2 m, due to a phenomenon referred to as depth folding.
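The relationship between the modulation frequency and the measurable range, and the depth-folding example above, can be illustrated with a short sketch. It is not part of the original disclosure; it assumes the common continuous-wave ToF model in which the unambiguous range is c/(2f) and a farther subject is reported at the true distance modulo that range.

```python
# Minimal sketch of depth folding in a continuous-wave ToF sensor (illustrative only).
# Assumes the standard unambiguous-range model R_max = c / (2 * f_mod).

C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance measurable without folding."""
    return C / (2.0 * f_mod_hz)

def folded_distance(true_distance_m: float, f_mod_hz: float) -> float:
    """Distance the sensor would report for a subject beyond the unambiguous range."""
    return true_distance_m % max_unambiguous_range(f_mod_hz)

if __name__ == "__main__":
    f_mod = 30e6  # 30 MHz modulation frequency, as in the example above
    print(round(max_unambiguous_range(f_mod), 2))   # ~5.0 m
    print(round(folded_distance(7.0, f_mod), 2))    # ~2.0 m, the depth-folding result
```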
As such, when the distance between the sensing unit 10 and any of the first through third subjects SUB1, SUB2, and SUB3 is relatively large, for example, greater than the maximum distance that the sensing unit 10 is able to measure, the sensing unit 10 may not provide accurate distance information. In other words, the initial depth data IZD may have inaccurate values for relatively distant portions of the scene, such as the background of the image.
In an embodiment, the final depth providing unit 20 generates estimated depth data EZD by transforming the 2D image data 2DD into 3D data, and provides final depth data FZD based on the initial depth data IZD and the estimated depth data EZD. The estimated depth data EZD may have estimated distance information about the first through third subjects SUB1, SUB2, and SUB3. Also, the final depth data FZD may have corrected depth information about the first through third subjects SUB1, SUB2, and SUB3, that is, depth information that is more accurate and visually realistic than the initial depth data IZD.
The first segmentation unit 21 of the final depth providing unit 20 divides an image into multiple segments, and classifies the segments into a first area AREA1 and a second area AREA2 based on the initial depth data IZD. For example, the first area AREA1 may include a foreground of at least one main subject to be focused from among the first through third subjects SUB1, SUB2, and SUB3, and the second area AREA2 may include a background excluding the at least one main subject from among the first through third subjects SUB1, SUB2, and SUB3.
Referring to
The first segmentation unit 21 may classify the 16 segments into two areas, for example, the first area AREA1 and the second area AREA2, based on the initial depth data IZD. The first segmentation unit 21 may classify an area having a relatively small distance between the sensing unit 10 and the first through third subjects SUB1, SUB2, and SUB3 as the first area AREA1, and an area having a relatively large distance between the sensing unit 10 and the first through third subjects SUB1, SUB2, and SUB3 as the second area AREA2.
More particularly, in the depicted example, the first segmentation unit 21 determines that the segments X22, X23, X32 and X33, each of which has initial depth data IZD lower than a threshold value (e.g., 3), are to be included in the first area AREA1. The first segmentation unit 21 further determines that the segments X11-X14, X21, X24, X31, X34 and X41-X44, each of which has initial depth data IZD greater than the threshold value (e.g., 3), are to be included in the second area AREA2. The determinations are made based on the initial depth data IZD of the 16 segments X11-X14, X21-X24, X31-X34 and X41-X44.
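As an illustrative sketch of this thresholding step (not from the original disclosure; the 4×4 grid values below are hypothetical, and only the threshold of 3 and the resulting segment classification follow the example above):

```python
import numpy as np

# Hypothetical per-segment initial depth data (IZD) for the 16 segments X11..X44.
izd_segments = np.array([
    [5, 5, 5, 5],   # X11 X12 X13 X14
    [5, 2, 2, 5],   # X21 X22 X23 X24
    [5, 2, 2, 5],   # X31 X32 X33 X34
    [5, 5, 5, 5],   # X41 X42 X43 X44
])

THRESHOLD = 3  # example threshold value from the text

# Segments whose IZD is below the threshold form the first area AREA1 (foreground);
# all remaining segments form the second area AREA2 (background).
area1_mask = izd_segments < THRESHOLD
area2_mask = ~area1_mask

print(np.argwhere(area1_mask) + 1)  # 1-based indices -> X22, X23, X32, X33
```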
Referring again to
The extraction unit 23 includes first and second extraction units 231 and 232. The first extraction unit 231 extracts first data ZD1 corresponding to the first area AREA1 from the initial depth data IZD, and the second extraction unit 232 extracts second data ZD2 corresponding to the second area AREA2 from the estimated depth data EZD. For example, the first extraction unit 231 may extract the first data ZD1 corresponding to a foreground from the initial depth data IZD, and the second extraction unit 232 may extract the second data ZD2 corresponding to a background from the estimated depth data EZD.
The combining unit 24 provides the final depth data FZD by combining the first data ZD1 and the second data ZD2. When reflectivity of a subject included in the foreground is relatively low, for example, the initial depth data IZD of the subject provided by the sensing unit 10 may not be accurate. In this case, the final depth data FZD, which is corrected to be more accurate than the initial depth data IZD, may be generated based on the 2D image data 2DD generated by the sensing unit 10. Thus, according to the present embodiment, since the initial depth data IZD provided by the sensing unit 10 is combined with the estimated depth data EZD generated from the 2D image data 2DD provided by the sensing unit 10, the final depth data FZD is more accurate and visually realistic than the initial depth data IZD.
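A minimal sketch of the extract-and-combine step follows. It assumes, beyond what the text states, that the initial depth data, the estimated depth data, and the area classification are available as equally sized per-pixel arrays named izd, ezd, and area1_mask; the function name is hypothetical.

```python
import numpy as np

def combine_depth(izd: np.ndarray, ezd: np.ndarray, area1_mask: np.ndarray) -> np.ndarray:
    """Form final depth data FZD from initial depth data IZD and estimated depth data EZD.

    area1_mask is True where a pixel belongs to the first area (foreground); those
    pixels keep the sensed IZD values, while the remaining (background) pixels take
    the estimated EZD values.
    """
    first_data = np.where(area1_mask, izd, 0.0)    # ZD1: IZD restricted to AREA1
    second_data = np.where(~area1_mask, ezd, 0.0)  # ZD2: EZD restricted to AREA2
    return first_data + second_data                # FZD = ZD1 combined with ZD2
```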
Referring to
The initial depth data IZD indicates a distance between the apparatus 1A and any of the first through third subjects SUB1, SUB2, and SUB3, providing perspective. Since the intensity data INT is measured using an intensity of light reflected and/or refracted from the first through third subjects SUB1, SUB2, and SUB3, the first through third subjects SUB1, SUB2, and SUB3 may be distinguished from one another using the intensity data INT. For example, the intensity data INT may have 2D black-and-white image information, such as offset or amplitude, about the first through third subjects SUB1, SUB2, and SUB3.
Referring to
A light-receiving lens 41 concentrates the reflected light RL onto the depth pixel array 111a. The reflected light RL is obtained after the light EL emitted by the light source 30 is reflected from the subject group 2, for example.
The depth pixel array 111a may include depth pixels (not shown) that convert the reflected light RL concentrated by the light-receiving lens 41 into electrical signals. The depth pixel array 111a may provide distance information between the depth sensor 11a and the subject group 2 and 2D black-and-white image information, such as offset or amplitude, about the subject group 2.
The row scanning circuit 112 controls row address and row scanning of the depth pixel array 111a by receiving control signals from the control unit 115. In order to select a corresponding row line from among multiple row lines, the row scanning circuit 112 may apply a signal for activating the corresponding row line to the depth pixel array 111a. The row scanning circuit 112 may include a row decoder that selects a row line in the depth pixel array 111a and a row driver that applies a signal for activating the selected row line.
The ADC unit 113 provides the initial depth data IZD and the intensity data INT by converting an analog signal, such as distance information and 2D black-and-white image information output from the depth pixel array 111a, into a digital signal. The ADC unit 113 may perform column ADC that converts analog signals in parallel using an analog-to-digital converter connected to each of the column lines. Alternatively, the ADC unit 113 may perform single ADC that sequentially converts analog signals using a single analog-to-digital converter.
According to embodiments, the ADC unit 113 may include a correlated double sampling (CDS) unit (not shown) for extracting an effective signal component. The CDS unit may perform analog double sampling that extracts an effective signal component based on a difference between an analog reset signal that represents a reset component and an analog data signal that represents a signal component. Alternatively, the CDS unit may perform digital double sampling that converts an analog reset signal and an analog data signal into two digital signals and then extracts a difference between the two digital signals as an effective signal component. Alternatively, the CDS unit may perform dual correlated double sampling that performs both analog double sampling and digital double sampling.
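As a small numeric illustration of the double-sampling idea (digital variant only; the sample values and array names are hypothetical and not from the original):

```python
import numpy as np

# Hypothetical digitized reset and data samples for four pixels of one column.
reset_samples = np.array([102, 98, 101, 100])  # reset component per pixel
data_samples = np.array([240, 180, 131, 100])  # reset + signal component per pixel

# Digital double sampling: the effective signal component is the difference
# between the data sample and the reset sample.
effective_signal = data_samples - reset_samples
print(effective_signal)  # -> [138  82  30   0]
```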
The column scanning circuit 114 controls column address and column scanning of the depth pixel array 111a by receiving control signals from the control unit 115. The column scanning circuit 114 may output a digital output signal output from the ADC unit 113 to a digital signal processing circuit (not shown) or an external host (not shown). For example, the column scanning circuit 114 may sequentially select multiple analog-to-digital converters in the ADC unit 113 by outputting a horizontal scanning control signal to the ADC unit 113. The column scanning circuit 114 may include a column decoder that selects one from among the multiple analog-to-digital converters and a column driver that applies an output of the selected analog-to-digital converter to a horizontal transmission line. In this case, the horizontal transmission line may have a bit width for outputting the digital output signal.
The control unit 115 is configured to control the row scanning circuit 112, the ADC unit 113, the column scanning circuit 114, and the light source 30. More particularly, the control unit 115 may apply control signals, such as a clock signal and a timing control signal, to operate the row scanning circuit 112, the ADC unit 113, the column scanning circuit 114, and the light source 30. The control unit 115 may include a logic control circuit, a phase lock loop (PLL) circuit, a timing control circuit, and a communication interface circuit, for example. Alternatively, a function of the control unit 115 may be performed in a processor, such as a separate engine unit.
Referring to
Referring to
The light EL emitted by the light source 30 may be incident on the depth pixel array 111a included in the depth sensor 11a as the reflected light RL by being reflected by the subject group 2. The depth pixel array 111a may periodically sample the reflected light RL. According to various embodiments, the depth pixel array 111a may sample the reflected light RL at two sampling points having a phase difference of 180 degrees therebetween in each cycle of the reflected light RL (that is, every cycle of the light EL), at four sampling points having a phase difference of 90 degrees therebetween, or at more sampling points. For example, the depth pixel array 111a may extract samples of the reflected light RL at phases of 90, 180, 270, and 360 degrees of the light EL in each cycle.
The reflected light RL has an offset B different from an offset B′ of the light EL emitted by the light source 30 due to additional background light or noise. The offset B of the reflected light RL may be calculated by using Equation 1, in which A0 indicates an intensity of the reflected light RL sampled at a phase of 90 degrees of the light EL, A1 indicates an intensity of the reflected light RL sampled at a phase of 180 degrees of the light EL, A2 indicates an intensity of the reflected light RL sampled at a phase of 270 degrees of the light EL, and A3 indicates an intensity of the reflected light RL sampled at a phase of 360 degrees of the light EL.
Also, the reflected light RL has an amplitude A less than an amplitude A′ of the light EL emitted by the light source 30 due to light loss. The amplitude A of the reflected light RL may be calculated by using Equation 2.
Two-dimensional (2D) black-and-white image information about the subject group 2 may be provided based on the amplitude A of the reflected light RL for each of the distance pixels included in the depth pixel array 111a.
The reflected light RL is delayed from the light EL by a phase difference φ that corresponds to the round trip, that is, two times the distance between the depth sensor 11a and the subject group 2. The phase difference φ between the reflected light RL and the light EL may be calculated by using Equation 3.
The phase difference φ between the reflected light RL and the light EL corresponds to a TOF of light. The distance between the depth sensor 11a and the subject group 2 may be calculated by using Equation 4, in which R indicates a distance between the depth sensor 11a and the subject group 2 and c indicates a speed of light.
R=c*TOF/2 (4)
Also, the distance R between the depth sensor 11a and the subject group 2 may be calculated using Equation 5 based on the phase difference φ of the reflected light RL, in which f indicates a modulation frequency, that is, the frequency of the light EL (or the reflected light RL).
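The bodies of Equations 1, 2, 3, and 5 are not reproduced in this text; the sketch below therefore uses the widely cited four-phase demodulation formulas for offset, amplitude, phase, and distance, which should be read as an assumption rather than as the exact equations of the disclosure.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_demodulate(a0: float, a1: float, a2: float, a3: float, f_mod: float):
    """Four-phase ToF demodulation using commonly cited formulas (assumed forms).

    a0..a3 are the intensities of the reflected light RL sampled at four equally
    spaced phases of the emitted light EL; f_mod is the modulation frequency.
    The returned offset, amplitude, phase, and distance play the roles of
    Equations 1, 2, 3, and 5 referenced in the text.
    """
    offset = (a0 + a1 + a2 + a3) / 4.0                            # offset B
    amplitude = math.sqrt((a0 - a2) ** 2 + (a1 - a3) ** 2) / 2.0  # amplitude A
    phase = math.atan2(a1 - a3, a0 - a2)                          # phase difference phi
    distance = C * phase / (4.0 * math.pi * f_mod)                # R = c*phi / (4*pi*f)
    return offset, amplitude, phase, distance
```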
Although the depth sensor 11a uses the light EL modulated to have a waveform like a sine wave in
Referring to
When reflectivity of a subject included in a foreground is relatively low, the initial depth data IZD of the subject provided by the depth sensor 11a may not be accurate. In this case, according to the inventive concept, the final depth data FZD, which is corrected to be more accurate than the initial depth data IZD, may be generated based on the color data CD generated by the color sensor 11b.
Referring to
A light-receiving lens 42 concentrates the visible light VL received from the subject group 2 onto the color pixel array 111b. The color pixel array 111b may include color pixels (not shown) that convert the visible light VL concentrated by the light-receiving lens 42 into electrical signals. The color pixel array 111b may provide 2D color image information, such as RGB, about the first through third subjects SUB1, SUB2, and SUB3.
Referring to
When reflectivity of a subject included in a foreground is relatively low, the initial depth data IZD of the subject provided by the depth/color sensor 11c may not be accurate. In this case, according to the inventive concept, the final depth data FZD, which is corrected to be more accurate than the initial depth data IZD, may be generated based on the color data CD generated by the depth/color sensor 11c.
Referring to
A light-receiving lens 43 concentrates the reflected light RL and the visible light VL received from the subject group 2 onto the depth/color pixel array 111c. The reflected light RL is obtained after the light EL emitted by the light source is reflected from the subject group 2.
The depth/color pixel array 111c may include multiple depth pixels that convert the reflected light RL concentrated by the light-receiving lens 43 into electrical signals, and multiple color pixels that convert the visible light VL concentrated by the light-receiving lens 43 into electrical signals. The depth/color pixel array 111c provides distance information between the depth/color sensor 11c and the subject group 2, 2D black-and-white image information (e.g., offset or amplitude) about the subject group 2, and 2D color image information (e.g., RGB) about the subject group 2.
The color pixel selection circuits 112b and 114b and the color pixel converter 113b provide the color data CD by controlling color pixels in the pixel array 111c, and the depth pixel selection circuits 112a and 114a and the depth pixel converter 113a provide depth information ZD by controlling distance pixels in the pixel array 111c. The control unit 115 is configured to control the color pixel selection circuits 112b and 114b, the depth pixel selection circuits 112a and 114a, the color pixel converter 113b, and the depth pixel converter 113a.
As such, in order to provide the color data CD, the initial depth data IZD, and the intensity data INT of an image, the depth/color sensor 11c may include elements for controlling color pixels and elements for controlling distance pixels, which are separately provided and independently operated.
Referring to
The second segmentation unit 221 divides an image, that is, the 2D image data 2DD, into multiple segments based on a depth cue in the image. The second segmentation unit 221 may classify an image into two or more areas based on a depth cue in the image and divide one of the areas into multiple segments. For example, the second segmentation unit 221 may classify an image into two areas, e.g., a foreground and a background, through subject segmentation, and divide the foreground into multiple foreground segments.
The term depth cue refers to any of various types of information indicating a depth. Relative positions of objects in a visible space may be perceived by using the depth cue. For example, a depth cue may include at least one selected from the group consisting of: a defocus using a second Gaussian derivative; a linear perspective using vanishing line detection and gradient plane assignment; atmospheric scattering using a light scattering model; shading using energy minimization; a patterned texture using a frontal texel (texture element); symmetric patterns using a combination of photometric and geometric constraints; an occlusion, including a curvature using a smoothing curvature and an isophote line, and a single transform using a shortest path; and statistical patterns using color-based heuristics and statistical estimators.
The indexing unit 222 indexes depths of the segments based on the initial depth data IZD. More particularly, the indexing unit 222 may index relative depths of the segments using the initial depth data IZD as a reference value. As such, the indexing unit 222 may index relative depths of the segments based on a subject having a known initial depth from among the first through third subjects SUB1, SUB2, and SUB3 in the image.
The DM generating unit 223 generates a depth map (DM) from the indexed depths of the segments. In computer graphics, the term depth map refers to image information indicating, at each pixel, the 3D distance between a surface of an object and a viewpoint.
The estimated depth providing unit 224 provides the estimated depth data EZD that is 3D data based on the DM. More particularly, the estimated depth providing unit 224 may provide image information in the DM as the estimated depth data EZD of the first through third subjects SUB1, SUB2, and SUB3.
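One way to picture the indexing and depth-map steps is sketched below. It is not the disclosed algorithm; the per-segment cue values, the reference segment, and the scaling rule are all assumptions used only to show how a known initial depth can anchor relative depths.

```python
import numpy as np

def index_segment_depths(relative_cues: np.ndarray,
                         reference_segment: int,
                         reference_depth: float) -> np.ndarray:
    """Index relative segment depths against a segment whose initial depth is known.

    relative_cues holds per-segment relative depth values derived from a 2D depth
    cue (larger value = farther). reference_segment contains a subject whose depth
    from the initial depth data IZD is trusted; all segments are scaled so that
    this segment's indexed depth equals reference_depth.
    """
    scale = reference_depth / relative_cues[reference_segment]
    return relative_cues * scale  # indexed depths used to build the depth map

# Example: cue values for four segments; segment 0 is known to be 2 m away.
cues = np.array([1.0, 2.5, 3.0, 4.0])
print(index_segment_depths(cues, reference_segment=0, reference_depth=2.0))
# -> [2.  5.  6.  8.]
```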
Referring to
Referring to
Referring to
Accordingly, the transformation unit 22 provides the estimated depth data EZD by transforming the 2D image data 2DD (or the color data CD) into 3D data using a patterned texture as a depth cue. However, although the transformation unit 22 exemplarily transforms 2D image data into 3D data using a patterned texture as a depth cue, the inventive concept is not limited thereto, and the transformation unit 22 may transform 2D image data into 3D data using any of the various depth cues described above.
Referring to
The first segmentation unit 21 classifies an image obtained from the subject group 2 into first and second areas, for example, a foreground FORE and a background BACK. The transformation unit 22 provides the estimated depth data EZD as shown in
The first extraction unit 231 extracts a portion corresponding to the foreground FORE in the initial depth data IZD, that is, the image as shown in
When the initial depth data IZD illustrated in
Referring to
In operation S100, the light source 30 emits the light EL to multiple subjects, such as the first through third subjects SUB1, SUB2, and SUB3. In operation S110, the lens unit 40 concentrates the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3 onto the sensing unit 10. In operation S120, the depth sensor 11a provides the initial depth data IZD having distance information about the first through third subjects SUB1, SUB2, and SUB3 and the intensity data INT of the first through third subjects SUB1, SUB2, and SUB3 by sensing the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3.
In operation S130, the first segmentation unit 21 divides an image into multiple segments, and classifies the segments into the first area AREA1 and the second area AREA2 based on the initial depth data IZD. In operation S140, the transformation unit 22a generates the estimated depth data EZD by transforming the intensity data INT into 3D data.
In operation S150, the first extraction unit 231 extracts the first data ZD1 corresponding to the first area AREA1 from the initial depth data IZD. In operation S160, the second extraction unit 232 extracts the second data ZD2 corresponding to the second area AREA2 from the estimated depth data EZD. In operation S170, the combining unit 24 provides the final depth data FZD by combining the first data ZD1 with the second data ZD2.
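Collecting operations S130 through S170 into a single per-pixel sketch (the function name, the placeholder transform, and the threshold are assumptions; the sensing and lens operations S100-S120 are treated as already completed):

```python
import numpy as np

def generate_final_depth(izd: np.ndarray,
                         intensity: np.ndarray,
                         threshold: float,
                         estimate_depth_from_2d) -> np.ndarray:
    """Illustrative flow of operations S130-S170.

    izd: initial depth data from the depth sensor.
    intensity: 2D intensity data from the same sensor.
    estimate_depth_from_2d: callable transforming 2D image data into estimated
    depth data EZD (placeholder for the transformation unit).
    """
    area1_mask = izd < threshold                 # S130: near segments -> first area
    ezd = estimate_depth_from_2d(intensity)      # S140: estimated depth data
    return np.where(area1_mask, izd, ezd)        # S150-S170: extract and combine -> FZD

# Usage with a trivial placeholder transform (purely illustrative).
izd = np.array([[2.0, 2.0, 9.9], [2.1, 2.0, 9.9]])
intensity = np.array([[200, 190, 40], [210, 195, 35]])
fzd = generate_final_depth(izd, intensity, threshold=3.0,
                           estimate_depth_from_2d=lambda img: 10.0 - img / 25.0)
print(fzd)
```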
Referring to
In operation S200, the light source 30 emits the light EL to multiple subjects, such as the first through third subjects SUB1, SUB2, and SUB3. In operation S210, the lens unit 40 concentrates the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3 onto the sensing unit 10.
In operation S220, the depth sensor 11a provides the initial depth data IZD having distance information about the first through third subjects SUB1, SUB2, and SUB3 and the intensity data INT of the first through third subjects SUB1, SUB2, and SUB3 by sensing the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3. In operation S230, the color sensor 11b provides the color data CD of the first through third subjects SUB1, SUB2, and SUB3 by sensing the visible light VL received from the first through third subjects SUB1, SUB2, and SUB3.
In operation S240, the first segmentation unit 21 divides an image into multiple segments, and classifies the segments into a first area and a second area based on the initial depth data IZD. In operation S250, the transformation unit 22b generates the estimated depth data EZD by transforming the color data CD into 3D data.
In operation S260, the first extraction unit 231 extracts first data corresponding to the first area from the initial depth data IZD. In operation S270, the second extraction unit 232 extracts second data corresponding to the second area from the estimated depth data EZD. In operation S280, the combining unit 24 provides the final depth data FZD by combining the first data with the second data.
Referring to
In operation S300, the light source 30 emits the light EL to multiple subjects, such as the first through third subjects SUB1, SUB2, and SUB3. In operation S310, the lens unit 40 concentrates the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3 onto the sensing unit 10. In operation S320, the depth/color sensor 11c provides the initial depth data IZD having distance information about the first through third subjects SUB1, SUB2, and SUB3 and the intensity data INT of the first through third subjects SUB1, SUB2, and SUB3 by sensing the reflected light RL received from the first through third subjects SUB1, SUB2, and SUB3. The depth/color sensor 11c also provides the color data CD of the first through third subjects SUB1, SUB2, and SUB3 by sensing the visible light VL received from the first through third subjects SUB1, SUB2, and SUB3.
In operation S330, the first segmentation unit 21 divides an image into multiple segments, and classifies the segments into a first area and a second area based on the initial depth data IZD. In operation S340, the transformation unit 22b generates the estimated depth data EZD by transforming the color data CD into 3D data.
In operation S350, the first extraction unit 231 extracts first data corresponding to the first area from the initial depth data IZD. In operation S360, the second extraction unit 232 extracts second data corresponding to the second area from the estimated depth data EZD. In operation S370, the combining unit 24 provides the final depth data FZD by combining the first data with the second data.
Referring to
The image sensor 1100, which is a semiconductor device for converting an optical image into an electrical signal, may include any of the apparatuses 1, 1A, 1B, and 1C as described above with reference to
The processor 1200 includes an image signal processing (ISP) unit 1210, a control unit 1220, and an interface unit 1230. The ISP unit 1210 performs signal processing of received image data, including the final depth data output from the image sensor 1100. The control unit 1220 outputs a control signal to the image sensor 1100. The interface unit 1230 may transmit the processed image data to a display 1500 to be reproduced by the display 1500.
In
Referring to
The processor 2010 may perform specific arithmetic operations or tasks. According to various embodiments, the processor 2010 may be a microprocessor or a central processing unit (CPU), for example. The processor 2010 communicates with the memory device 2020, the storage device 2030, and the I/O device 2040 via a bus 2060, such as an address bus, a control bus, or a data bus. According to various embodiments, the processor 2010 may be connected to an extended bus, such as a peripheral component interconnect (PCI) bus, for example.
The memory device 2020 may store data needed to operate the computing system 2000. For example, the memory device 2020 may be a dynamic random-access memory (DRAM), a mobile DRAM, a static random-access memory (SRAM), a phase-change random-access memory (PRAM), a ferroelectric random-access memory (FRAM), a resistive random-access memory (RRAM), and/or a magnetoresistive random-access memory (MRAM), for example. Examples of the storage device 2030 include a solid state drive, a hard disk drive, and a compact disk read-only memory (CD-ROM).
The I/O device 2040 may include an input unit, such as a keyboard, a keypad, or a mouse, and an output unit, such as a printer or a display. The power supply 2050 may apply a voltage needed to operate the computing system 2000.
The camera 1000 may be connected to the processor 2010 via the bus 2060 or another communication link to communicate with the processor 2010. As described above, the camera 1000 may provide initial depth data having distance information about multiple subjects and 2D image data having 2D image information about an image obtained from the subjects by sensing reflected light received from the subjects. The camera 1000 may then generate estimated depth data having estimated distance information about the subjects by transforming the 2D image data into 3D data, and provide final depth data based on the initial depth data and the estimated depth data.
The camera 1000 may be packaged in any of various types of packages. For example, at least some elements of the camera 1000 may be packaged in package on package (PoP), ball grid arrays (BGAs), chip scale packages (CSPs), plastic leaded chip carrier (PLCC), plastic dual in-line package (PDIP), die in waffle pack, die in wafer form, chip on board (COB), ceramic dual in-line package (CERDIP), plastic metric quad flat pack (MQFP), thin quad flatpack (TQFP), small outline integrated circuit (SOIC), shrink small outline package (SSOP), thin small outline package (TSOP), system in package (SIP), multi chip package (MCP), wafer-level fabricated package (WFP), or wafer-level processed stack package (WSP).
Meanwhile, the computing system 2000 may be any computing system using a photographing apparatus. Examples of the computing system 2000 include a digital camera, a mobile phone, a personal digital assistant (PDA), a portable multimedia player (PMP), and a smartphone.
Referring to
The CSI host 3112 includes a deserializer DES, and the CSI device 3141 includes a serializer SER. A display serial interface (DSI) host 3111 of the application processor 3110 performs serial communication with a DSI device 3151 of the display 3150 via a DSI.
The DSI host 3111 includes a serializer SER, and the DSI device 3151 may include a deserializer DES. Furthermore, the computing system 3000 further includes a radio-frequency (RF) chip 3160 that communicates with the application processor 3110. A physical layer (PHY) 3113 of the computing system 3000 and a PHY 3161 of the RF chip 3160 may transmit and receive data therebetween according to MIPI DigRF. Also, the application processor 3110 further includes a DigRF master 3114 that controls data transmission/reception according to MIPI DigRF of the PHY 3161.
The computing system 3000 may include a global positioning system (GPS) 3120, a storage unit 3170, a microphone 3180, a DRAM 3185, and a speaker 3190. Also, the computing system 3000 may perform communication by using an ultra-wideband (UWB) 3210, a wireless local area network (WLAN) 3220, and worldwide interoperability for microwave access (WiMAX) 3230. However, the interface and structure of the computing system 3000 shown are merely exemplary, and embodiments of the inventive concept are not limited thereto.
According to embodiments of an apparatus for generating depth information, since the final depth data is generated based on both the initial depth data and estimated depth data generated from 2D image data, the final depth data is corrected to be more accurate and visually realistic than the initial depth data. More particularly, first data corresponding to an area in which the distance to the subjects is relatively small is extracted from the initial depth data, second data corresponding to an area in which the distance to the subjects is relatively large is extracted from the estimated depth data, and the first data and the second data are combined with each other in order to generate the final depth data.
While the inventive concept has been described with reference to exemplary embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the present inventive concept. Therefore, it should be understood that the above embodiments are not limiting, but illustrative.