The present disclosure relates to an image processing apparatus, an image processing method, a program, and an electronic apparatus, and particularly to an image processing apparatus, an image processing method, a program, and an electronic apparatus that enable speed-up of spectroscopic correction processing on a multispectral image and a reduction in the amount of stored data.
There has been conventionally proposed an image processing apparatus capable of speeding up image compression encoding processing that uses frequency conversion, by downsizing an image and performing color conversion processing on it (see Patent Document 1, for example).
There has been conventionally proposed an imaging device for detecting a light in a predetermined narrow wavelength band (narrowband) (also denoted as narrowband light below) by use of a plasmon filter (see Patent Document 2, for example).
Incidentally, color processing is generally performed after Raw data output from an imaging device is demosaiced. Thus, spectroscopic correction processing performed on an image shot by use of the plasmon filter requires an enormous number of computations, proportional to the number of colors (the number of detected narrowband lights) multiplied by the number of pixels, and therefore requires much time. Further, the number of colors increases after the spectroscopic correction processing, and thus the amount of stored data increases.
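For an illustrative sense of scale (the numbers here are assumptions for illustration, not taken from any measurement): if the correction converts M = 16 detected narrowbands into N = 32 corrected bands by a per-pixel matrix operation, as in the spectroscopic correction described later, over an 8-megapixel image, on the order of 16 × 32 × 8×10^6 ≈ 4×10^9 multiply-accumulate operations are required, and 32 full-size planes have to be stored. Performing the same correction on images reduced to 1/4 of the original size in each dimension divides both the computation and the storage by a factor of 16.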
The present disclosure has been made in view of such a situation, and is directed to achieving speed-up of the spectroscopic correction processing on a multispectral image and a reduction in the amount of stored data.
An image processing apparatus according to one aspect of the present disclosure includes an image reduction part configured to reduce a multispectral image in which an object is shot with light dispersed into many wavelength bands, and to generate reduced images for each of the wavelength bands, and a spectroscopic correction processing part configured to perform spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands generated by the image reduction part.
An image processing method or program according to one aspect of the present disclosure includes steps of reducing a multispectral image in which an object is shot with light dispersed into many wavelength bands, generating reduced images for each of the wavelength bands, and performing spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands.
An electronic apparatus according to one aspect of the present disclosure includes an imaging device including a metallic thin film filter which is provided closer to a light incident side than a photoelectric conversion device in at least some pixels and in which the film thickness of a conductive thin film differs per pixel, an image reduction part configured to reduce a multispectral image obtained by shooting an object with light dispersed into many wavelength bands by the imaging device, and to generate reduced images for each of the wavelength bands, and a spectroscopic correction processing part configured to perform spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands generated by the image reduction part.
According to one aspect of the present disclosure, a multispectral image in which an object is shot with light dispersed into many wavelength bands is reduced thereby to generate reduced images for each of the wavelength bands, and spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands is performed.
According to one aspect of the present disclosure, it is possible to achieve speed-up of spectroscopic correction processing on a multispectral image and a reduction in the amount of stored data.
Modes for carrying out the invention (denoted as embodiments below) will be described below with reference to the accompanying drawings. Additionally, the description will be made in the following order.
1. Embodiments of shooting apparatus
2. Spectroscopic correction processing and data storage
3. Variants
4. Applications
An embodiment of a shooting apparatus of the present technology will be first described with reference to
A shooting apparatus 10 of
The shooting apparatus 10 includes an optical system 11, an imaging device 12, a memory 13, a signal processing part 14, an output part 15, and a control part 16.
The optical system 11 includes a zoom lens, a focus lens, a diaphragm, and the like (not illustrated), for example, and guides a light from the outside into the imaging device 12. Further, the optical system 11 is provided with various filters such as a polarization filter as needed.
The imaging device 12 is configured of a complementary metal oxide semiconductor (CMOS) image sensor, for example. The imaging device 12 receives an incident light from the optical system 11, photoelectrically converts it, and outputs image data corresponding to the incident light.
The memory 13 temporarily stores the image data output by the imaging device 12.
The signal processing part 14 performs signal processing (processing such as noise cancellation, white balance adjustment, and the like) on the image data stored in the memory 13, and supplies the processed image data to the output part 15.
The output part 15 outputs the image data from the signal processing part 14. For example, the output part 15 has a display (not illustrated) configured of liquid crystal or the like, and displays a spectrum (image) corresponding to the image data from the signal processing part 14 as a through image. For example, the output part 15 includes a driver (not illustrated) for driving a recording medium such as semiconductor memory, magnetic disc, or optical disc, and records the image data from the signal processing part 14 into the recording medium. For example, the output part 15 functions as a communication interface for making communication with an external apparatus (not illustrated), and transmits the image data from the signal processing part 14 to an external apparatus in a wireless or wired manner.
The control part 16 controls each part in the shooting apparatus 10 in response to a user operation or the like.
The imaging device 12 includes a pixel array 31, a row scanning circuit 32, a phase locked loop (PLL) 33, a digital analog converter (DAC) 34, a column analog digital converter (ADC) circuit 35, a column scanning circuit 36, and a sense amplifier 37.
A plurality of pixels 51 are two-dimensionally arranged in the pixel array 31.
The pixels 51 are arranged at the points where horizontal signal lines H connected to the row scanning circuit 32 and vertical signal lines V connected to the column ADC circuit 35 cross each other, and each of them includes a photodiode 61 for performing photoelectric conversion and some kinds of transistors for reading an accumulated signal. That is, a pixel 51 includes the photodiode 61, a transfer transistor 62, floating diffusion 63, an amplification transistor 64, a selection transistor 65, and a reset transistor 66 as enlarged on the right of
Charges accumulated in the photodiode 61 are transferred to the floating diffusion 63 via the transfer transistor 62. The floating diffusion 63 is connected to a gate of the amplification transistor 64. When a signal of a pixel 51 is to be read, the selection transistor 65 is turned on from the row scanning circuit 32 via the horizontal signal line H, and the amplification transistor 64 is source follower driven so that the signal of the selected pixel 51 is read to the vertical signal line V as a pixel signal corresponding to the amount of accumulated charges in the photodiode 61. Further, the reset transistor 66 is turned on so that the pixel signal is reset.
The row scanning circuit 32 sequentially outputs a drive signal for driving (transferring, selecting, resetting, and the like, for example) the pixels 51 in the pixel array 31 per row.
The PLL 33 generates and outputs a clock signal with a predetermined frequency required for driving each part in the imaging device 12 on the basis of a clock signal supplied from the outside.
The DAC 34 generates and outputs a ramp signal with a waveform (a substantially sawtooth waveform) in which a voltage falls from a predetermined voltage value at a constant slope and then returns to the predetermined voltage value.
The column ADC circuit 35 has as many comparators 71 and counters 72 as there are columns of pixels 51 in the pixel array 31, extracts the signal levels of pixel signals output from the pixels 51 by a correlated double sampling (CDS) operation, and outputs image data. That is, the comparator 71 compares the ramp signal supplied from the DAC 34 with the pixel signal (luminance value) output from the pixel 51, and supplies the counter 72 with a resultant comparison result signal. The counter 72 then counts a counter clock signal with a predetermined frequency in response to the comparison result signal output from the comparator 71, so that the pixel signal is A/D converted.
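The following is a minimal sketch of this single-slope conversion with CDS (an illustrative model; the function names, voltages, and counter parameters are assumptions for illustration, not taken from the present disclosure):

```python
def single_slope_adc(level, ramp_start=1.0, ramp_step=0.001, max_count=4095):
    """Count clock cycles until the falling ramp crosses the input level,
    modeling the comparator 71 (ramp vs. pixel signal) and the counter 72."""
    count, ramp = 0, ramp_start
    while ramp > level and count < max_count:
        ramp -= ramp_step  # the DAC 34 lowers the ramp at a constant slope
        count += 1         # the counter 72 counts a fixed-frequency clock
    return count

def cds_read(reset_level, signal_level):
    """Correlated double sampling: the difference between the conversion of
    the signal level and that of the reset level cancels fixed offsets."""
    return single_slope_adc(signal_level) - single_slope_adc(reset_level)

# Example: a pixel whose signal level sits 0.3 V below its 0.95 V reset level.
print(cds_read(reset_level=0.95, signal_level=0.65))  # -> about 300 counts
```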
The column scanning circuit 36 supplies the counters 72 in the column ADC circuit 35 with signals for sequentially outputting pixel data at predetermined timings.
The sense amplifier 37 amplifies the pixel data supplied from the column ADC circuit 35, and outputs it outside the imaging device 12.
An on-chip microlens 101, an interlayer film 102, a narrowband filter layer 103, an interlayer film 104, a photoelectric conversion device layer 105, and a signal wiring layer 106 are laminated from the top in each pixel 51. That is, the imaging device 12 is configured of a CMOS image sensor of backside irradiation type in which the photoelectric conversion device layer 105 is arranged closer to the light incident side than the signal wiring layer 106.
The on-chip microlens 101 is an optical device for condensing a light to the photoelectric conversion device layer 105 in each pixel 51.
The interlayer film 102 and the interlayer film 104 include dielectric such as SiO2. As described below, it is desirable that the dielectric constants of the interlayer film 102 and the interlayer film 104 are as low as possible.
The narrowband filter layer 103 is provided with a narrowband filter NB as an optical filter for transmitting a narrowband light in a predetermined narrow wavelength band (narrowband) in each pixel 51. For example, a plasmon filter using surface plasmon, which is a kind of metallic thin film filter using a thin film including metal such as aluminum, is used for the narrowband filter NB. Further, the transmission band of the narrowband filter NB is set per pixel 51. The kinds of transmission bands (the number of bands) of the narrowband filter NB are arbitrary, and are set at four or more, for example.
Here, the narrowband is a wavelength band narrower than the transmission bands of the conventional color filters of R (red), G (green), and B (blue), or Y (yellow), M (magenta), and C (cyan) based on the three primary colors or the color-matching function, for example. Further, a pixel which receives a narrowband light transmitting through the narrowband filter NB will be denoted as multispectral pixel or MS pixel below.
The photoelectric conversion device layer 105 includes the photodiode 61 and the like of
The signal wiring layer 106 is provided with a wiring or the like for reading the charges accumulated in the photoelectric conversion device layer 105.
A plasmon filter capable of being used for the narrowband filter NB will be described below with reference to
The plasmon filter 121A is configured of a plasmon resonator in which holes 132A are arranged in a metallic thin film (denoted as conductive thin film below) 131A in a honeycomb shape.
Each hole 132A penetrates through the conductive thin film 131A and operates as a waveguide. Generally, a waveguide has a cutoff frequency and a cutoff wavelength determined by its shape, such as the lengths of its sides or its diameter, and has the nature that it does not propagate a light with a frequency lower than the cutoff frequency (a wavelength longer than the cutoff wavelength). The cutoff wavelength of the hole 132A mainly depends on an opening diameter D1, and as the opening diameter D1 is smaller, the cutoff wavelength is also shorter. Additionally, the opening diameter D1 is set to be smaller than the wavelength of a light to be transmitted.
On the other hand, when a light is incident into the conductive thin film 131A in which the holes 132A are periodically formed with a period shorter than the wavelength of the light, there occurs a phenomenon in which a light with a longer wavelength than the cutoff wavelength of the holes 132A is transmitted. The phenomenon is denoted as plasmon abnormal transmission phenomenon. The phenomenon occurs when surface plasmon is excited on the border between the conductive thin film 131A and the interlayer film 102 arranged thereon.
The conditions under which the plasmon abnormal transmission phenomenon (surface plasmon resonance) occurs will be described herein with reference to
εd indicates a dielectric constant of the dielectric configuring the interlayer film 102.
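For reference, the surface plasma frequency ωsp at the interface between a conductive thin film and a dielectric is commonly given by the following standard Drude-model relation, offered here as a reconstruction of Equation (1):

ωsp = ωp/√(1+εd) (1)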
In Equation (1), the surface plasma frequency ωsp is higher as the plasma frequency ωp is higher. Further, the surface plasma frequency ωsp is higher as the dielectric constant εd is lower.
The line L1 indicates a dispersion relationship of light (light line), and is expressed in the following Equation (2).
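A standard reconstruction of the light line is the linear relation below (assuming propagation at the light speed c; a factor of 1/√εd would appear if the line is drawn for propagation inside the dielectric):

ω = c·k (2)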
c indicates a light speed.
The line L2 indicates a dispersion relationship of surface plasmon, and is expressed in the following Equation (3).
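A standard reconstruction of this dispersion relationship of surface plasmon is:

k = (ω/c)·√(εm·εd/(εm+εd)) (3)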
εm indicates a dielectric constant of the conductive thin film 131A.
The dispersion relationship of surface plasmon indicated by the line L2 asymptotically approaches the light line indicated by the line L1 in a range where the angular wave number vector k is small, and asymptotically approaches the surface plasma frequency ωsp as the angular wave number vector k is larger.
Then, when the following Equation (4) is established, the plasmon abnormal transmission phenomenon occurs.
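A standard statement of this grating-coupling condition, offered as a reconstruction with i and j being integers, is:

Re[(2π/λ)·√(εm·εd/(εm+εd))] = |(2π/λ)·sinθ + i·Gx + j·Gy| (4)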
λ indicates a wavelength of an incident light. θ indicates an incident angle of an incident light. Gx and Gy are expressed in the following Equation (5).
|Gx| = |Gy| = 2π/a0 (5)
a0 indicates a lattice constant of the hole array structure configured of the holes 132A in the conductive thin film 131A.
The left side in Equation (4) indicates the angular wave number vector of surface plasmon, and the right side indicates the angular wave number vector of the hole array cycle of the conductive thin film 131A. Thus, when the angular wave number vector of surface plasmon is equal to the angular wave number vector of the hole array cycle of the conductive thin film 131A, the plasmon abnormal transmission phenomenon occurs. Then, the value of λ at this time is the plasmon resonance wavelength (the transmission wavelength of the plasmon filter 121A).
Additionally, the angular wave number vector of surface plasmon in the left side of Equation (4) is determined by the dielectric constant εm of the conductive thin film 131A and the dielectric constant εd of the interlayer film 102. On the other hand, the angular wave number vector of the hole array cycle in the right side is determined by the light incident angle θ, and a pitch (hole pitch) P1 between adjacent holes 132A in the conductive thin film 131A. Thus, the resonance wavelength and the resonance frequency of plasmon are determined by the dielectric constant εm of the conductive thin film 131A, the dielectric constant εd of the interlayer film 102, the light incident angle θ, and the hole pitch P1. Additionally, in a case where the light incident angle is 0°, the resonance wavelength and the resonance frequency of plasmon are determined by the dielectric constant εm of the conductive thin film 131A, the dielectric constant εd of the interlayer film 102, and the hole pitch P1.
Therefore, the transmission band of the plasmon filter 121A (the resonance wavelength of plasmon) changes due to the material and film thickness of the conductive thin film 131A, the material and film thickness of the interlayer film 102, the pattern cycle of the hole array (the opening diameter D1 and the hole pitch P1 of the holes 132A, for example), and the like. Particularly in a case where the materials and film thicknesses of the conductive thin film 131A and the interlayer film 102 are determined, the transmission band of the plasmon filter 121A changes due to the pattern cycle of the hole array, particularly the hole pitch P1. That is, the transmission band of the plasmon filter 121A shifts toward the shorter wavelength side as the hole pitch P1 is narrower, and the transmission band of the plasmon filter 121A shifts toward the longer wavelength side as the hole pitch P1 is wider.
In a case where the hole pitch P1 is set at 250 nm, the plasmon filter 121A mainly transmits a light in the blue wavelength band. In a case where the hole pitch P1 is set at 325 nm, the plasmon filter 121A mainly transmits a light in the green wavelength band. In a case where the hole pitch P1 is set at 500 nm, the plasmon filter 121A mainly transmits a light in the red wavelength band. In a case where the hole pitch P1 is set at 500 nm, however, the plasmon filter 121A also transmits lights in shorter wavelength bands than red due to the waveguide mode described below.
Additionally, the transmissivity of the plasmon filter 121A is mainly determined by the opening diameter D1 of the holes 132A. As the opening diameter D1 is larger, the transmissivity is higher, while color mixture more easily occurs. Generally, it is desirable that the opening diameter D1 is set at 50% to 60% of the hole pitch P1.
Further, as described above, each hole 132A in the plasmon filter 121A operates as a waveguide. Thus, not only the wavelength component transmitted due to surface plasmon resonance (the wavelength component in the plasmon mode) but also the wavelength component transmitting through the holes 132A themselves (the wavelength component in the waveguide mode) may become large in the spectroscopic characteristics, depending on the pattern of the hole array of the plasmon filter 121A.
As described above, the cutoff wavelength mainly depends on the opening diameter D1 of the holes 132A, and the cutoff wavelength is shorter as the opening diameter D1 is smaller. Then, as the difference between the cutoff wavelength and the peak wavelength in the plasmon mode is made larger, the wavelength resolution characteristics of the plasmon filter 121A are enhanced.
Further, as described above, as the plasma frequency ωp of the conductive thin film 131A is higher, the surface plasma frequency ωsp of the conductive thin film 131A is higher. Further, as the dielectric constant εd of the interlayer film 102 is lower, the surface plasma frequency ωsp is higher. Then, as the surface plasma frequency ωsp is higher, the plasmon resonance frequency can be set to be higher, and the transmission band (plasmon resonance wavelength) of the plasmon filter 121A can be set in a shorter wavelength band.
Thus, when a metal with a higher plasma frequency ωp is used for the conductive thin film 131A, the transmission band of the plasmon filter 121A can be set in a shorter wavelength band. For example, aluminum, silver, gold, and the like are preferable. However, in a case where the transmission band is set in a longer wavelength band such as infrared rays, copper and the like can be used.
Further, when a dielectric with the lower dielectric constant εd is used for the interlayer film 102, the transmission band of the plasmon filter 121A can be set in a shorter wavelength band. For example, SiO2, Low-K, and the like are preferable.
Further,
A propagation distance ΛSPP(λ) in the depth direction of the surface plasmon is expressed in the following Equation (6).
kSPP indicates the absorption coefficient of the material through which the surface plasmon propagates. εm(λ) indicates a dielectric constant of the conductive thin film 131A relative to a light with a wavelength λ. εd(λ) indicates a dielectric constant of the interlayer film 102 relative to the light with the wavelength λ.
Thus, as illustrated in
Further, the surface plasmon relative to the light with a wavelength of 400 nm propagates down to about 10 nm in the depth direction from the surface of the conductive thin film 131A including aluminum. Thus, the thickness of the conductive thin film 131A is set at 10 nm or more so that the interlayer film 104 is prevented from influencing the surface plasmon on the interface between the interlayer film 102 and the conductive thin film 131A.
Other exemplary plasmon filters will be described below with reference to
A plasmon filter 121B in A of
Further, not all the holes need to penetrate through the conductive thin film in the plasmon resonator, and the plasmon resonator functions as a filter even if some holes are configured as non-through holes which do not penetrate through the conductive thin film.
For example, B of
Further, the plasmon filter basically uses a single-layer plasmon resonator, but may be configured of a double-layer plasmon resonator, for example.
For example, a plasmon filter 121D illustrated in
Further, an interval D2 between the plasmon filter 121D-1 and the plasmon filter 121D-2 is preferably set at about ¼ of the peak wavelength of the transmission band. Further, the interval D2 is more preferably at ½ or less of the peak wavelength of the transmission band in consideration of a degree of freedom of design.
Additionally, as in the plasmon filter 121D, the holes may be arranged in the same pattern in the plasmon filter 121D-1 and the plasmon filter 121D-2, and additionally the holes may be arranged in the mutually similar patterns in the double-layer plasmon resonator structure, for example. Further, in the double-layer plasmon resonator structure, holes and dots may be arranged in a pattern in which the hole array structure and the dot array structure (described below) are inverted. Furthermore, the plasmon filter 121D is in the double-layer structure, but may be multilayered in three or more layers.
Further, the exemplary configurations of the plasmon filters configured of a plasmon resonator in a hole array structure have been described above, but a plasmon resonator in a dot array structure may be employed for the plasmon filter.
A plasmon filter in a dot array structure will be described with reference to
A plasmon filter 121A′ in A of
The plasmon filter 121A′ absorbs a light in a predetermined wavelength band, and thus is used as a complementary color-based filter. The wavelength band (denoted as absorption band below) of a light absorbed by the plasmon filter 121A′ changes due to a pitch (denoted as dot pitch below) P3 between adjacent dots 133A, or the like. Further, a diameter D3 of the dots 133A is adjusted according to the dot pitch P3.
A plasmon filter 121B′ in B of
The absorption band of the plasmon filter 121B′ changes due to a dot pitch P4 between adjacent dots 133B, or the like. Further, the diameter D3 of the dots 133B is adjusted according to the dot pitch P4.
As illustrated, as the dot pitch P3 is narrower, the absorption band of the plasmon filter 121A′ shifts toward the shorter wavelength side, and as the dot pitch P3 is wider, the absorption band of the plasmon filter 121A′ shifts toward the longer wavelength side.
Additionally, the transmission band or the absorption band can be adjusted only by adjusting a pitch of holes or dots in the plane direction also in any plasmon filter in the hole array structure and in the dot array structure. Thus, for example, the transmission band or the absorption band can be individually set per pixel only by adjusting a pitch of holes or dots in a lithography step, thereby achieving more colors of the filter in less steps.
Further, the thickness of the plasmon filter is about 100 to 500 nm, which is approximately the same as that of an organic material-based color filter, and the plasmon filter is excellent in process affinity.
Further, the narrowband filter NB can employ a plasmon filter 151 using guided mode resonance (GMR) illustrated in
A conductive layer 161, an SiO2 film 162, an SiN film 163, and an SiO2 substrate 164 are laminated from the top in the plasmon filter 151. The conductive layer 161 is included in the narrowband filter layer 103 of
Rectangular conductive thin films 161A including aluminum, for example, are arranged at a predetermined pitch P5 on the conductive layer 161 such that the long sides of the conductive thin films 161A are adjacent. Then, the transmission band of the plasmon filter 151 changes due to the pitch P5 or the like.
The plasmon filter 151 using GMR is also excellent in affinity with an organic material-based color filter similarly to the plasmon filters in the hole array structure and in the dot array structure.
A second embodiment of the imaging device 12 of
The imaging device 12B is different from the imaging device 12A in that a color filter layer 107 is laminated between the on-chip microlens 101 and the interlayer film 102.
The narrowband filters NB are provided not in all the pixels 51 but in some pixels 51 in the narrowband filter layer 103 in the imaging device 12B. The kinds of transmission bands (the number of bands) of the narrowband filter NB are arbitrary, and are set at 1 or more, for example.
Each pixel 51 is provided with a color filter in the color filter layer 107. For example, any of a general red filter R, green filter G, and blue filter B (not illustrated) is provided in a pixel 51 which is not provided with the narrowband filter NB. Thereby, for example, R pixels provided with the red filter R, G pixels provided with the green filter G, B pixels provided with the blue filter B, and MS pixels provided with the narrowband filter NB are arranged in the pixel array 31.
Further, a transmission filter P is provided in the color filter layer 107 in a pixel 51 provided with the narrowband filter NB. The transmission filter P is configured of an optical filter (low-pass filter, high-pass filter, or bandpass filter) for transmitting a light in a wavelength band including the transmission band of the narrowband filter NB of the same pixel 51.
Additionally, the color filter provided in the color filter layer 107 may be organic material based or inorganic material based.
Organic material-based color filters include, for example, dyeing-based filters using a synthetic resin or natural protein, and pigment-containing filters using a pigment or dye.
An inorganic material-based color filter employs a material such as TiO2, ZnS, SiN, MgF2, SiO2, or Low-k. Further, an inorganic material-based color filter is formed by a method such as deposition, sputtering, or chemical vapor deposition (CVD) film formation.
Further, the interlayer film 102 is set at a film thickness capable of preventing the color filter layer 107 from influencing the surface plasmon on the interface between the interlayer film 102 and the narrowband filter layer 103 as described above with reference to
An occurrence of flares is restricted by the transmission filter P provided in the color filter layer 107. This point will be described with reference to
In this example, the imaging device 12A is provided in a semiconductor chip 203. Specifically, the semiconductor chip 203 is mounted on a substrate 213, and its surrounding is covered with seal glass 211 and resin 212. Then, a light transmitting through a lens 201 and an IR cut filter 202 provided in the optical system 11 of
Here, in a case where the narrowband filter NB in the narrowband filter layer 103 in the imaging device 12A is configured of a plasmon filter, a metallic conductive thin film is formed in the plasmon filter. The conductive thin film has high reflectivity, and thus easily reflects a light with a wavelength outside the transmission band. Then, part of the light reflected on the conductive thin film is reflected on the seal glass 211, the IR cut filter 202, or the lens 201, and is incident into the imaging device 12A again as illustrated in
The use of an antireflective film including a metal different from the conductive thin film or a high-dielectric material, for example, is conceivable in order to prevent such reflected light. However, the plasmon filter uses surface plasmon resonance, and if such an antireflective film contacts the surface of the conductive thin film, the characteristics of the plasmon filter may deteriorate, or desired characteristics may be difficult to obtain.
On the other hand,
The example of
As described above, the transmission filter P is provided above the narrowband filter NB (toward the light incident side) in the imaging device 12B. Thus, a light incident into the imaging device 12B is cut off in its predetermined wavelength band by the transmission filter P and is then incident into the narrowband filter NB, and thus the amount of the incident light into the narrowband filter NB is restricted. Consequently, the amount of reflected light on the conductive thin film in the narrowband filter NB (plasmon filter) also reduces, and thus flares reduce.
The line L41 in
The line L51 of
The line L61 in
Additionally, in a case where the transmission band of the red filter R, the green filter G, or the blue filter B includes the transmission band of the narrowband filter NB in the lower layer, the filter may be used for the transmission filter P.
Further, the example of
Further, combinations of colors of the color filters in the color filter layer 107 are not limited to the above examples, and can be arbitrarily changed.
Further, in a case where a solution for flares is not required, for example, the transmission filter P may not be provided above the narrowband filter NB, or a dummy filter for transmitting lights with all the wavelengths may be provided.
A third embodiment of the imaging device 12 of
The imaging device 12C is different from the imaging device 12A in that a filter layer 108 is provided instead of the narrowband filter layer 103. Further, the imaging device 12C is different from the imaging device 12B of
Thereby, in a case where R pixels, G pixels, B pixels, and MS pixels are arranged in the pixel array 31 in the imaging device 12C, the color filter layer 107 can be omitted.
Additionally, in a case where an organic material-based color filter is used, the narrowband filter NB is formed first and high-temperature final thermal processing such as sinter processing is performed, and then the color filter is formed, in order to prevent damage to the color filter due to heat, or the like, for example. On the other hand, in a case where an inorganic material-based color filter is used, the above formation order is basically not required.
Further, in a case where a solution for flares is made as in the imaging device 12B of
Spectroscopic correction processing and data storage for an image (multispectral image) output from the imaging device 12 of
The shooting apparatus 10 of
An image processing part 301 illustrated in
The luminance image extraction part 311 extracts a luminance image having the same size as shot by the imaging device 12 from Raw data of the multispectral image output from the imaging device 12, and stores it in the storage 302. Further, the luminance image extraction part 311 reads, from the storage 302, the luminance image corresponding to the multispectral image to be output from the shooting apparatus 10, and supplies it to the resolution increase processing part 315.
The image reduction part 312 performs processing of reducing, per wavelength band, the image configured of the Raw data of the multispectral image output from the imaging device 12, thereby generating reduced images per wavelength band. Here, the image reduction part 312 performs the processing of reducing an image such that the pixels in each wavelength band configuring the multispectral image are arranged in the same spatial phase, as described below with reference to
The spectroscopic correction processing part 313 performs spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images per wavelength band supplied from the image reduction part 312. Here, in the spectroscopic correction processing, as described below in
The image enlargement part 314 reads the reduced images per wavelength band corresponding to the multispectral image to be output from the shooting apparatus 10 from the storage 302 via the spectroscopic correction processing part 313. The image enlargement part 314 then performs processing of enlarging the reduced images to the same size as shot by the imaging device 12, thereby generating enlarged images per wavelength band. The image enlargement part 314 then supplies the enlarged images per wavelength band to the resolution increase processing part 315.
The resolution increase processing part 315 performs a resolution increase processing on the enlarged images per wavelength band supplied from the image enlargement part 314 by use of the luminance image supplied from the luminance image extraction part 311. Thereby, the resolution increase processing part 315 generates high-resolution images per wavelength band at a higher resolution than the enlarged images, and outputs them to the outside of the image processing part 301.
The image processing part 301 is configured in this way, and the spectroscopic correction processing part 313 performs the spectroscopic correction processing on the reduced images per wavelength band generated by the image reduction part 312, and stores the reduced images per wavelength band subjected to the spectroscopic correction processing in the storage 302. Thereby, the image processing part 301 can further speed up the spectroscopic correction processing and further reduce the amount of stored data, compared with a configuration in which the non-reduced multispectral image is subjected to the spectroscopic correction processing and is stored, for example.
Further, the luminance image is stored in association with the reduced images per wavelength band in the image processing part 301, and thus the resolution increase processing part 315 can perform the resolution increase processing by use of the luminance image. Thereby, the image processing part 301 can restrict an adverse impact on the resolution due to the stored reduced images per wavelength band, or a deterioration in image quality of the multispectral image.
Here, the image processing part 301 may perform the image enlargement by the image enlargement part 314 and the resolution increase by the resolution increase processing part 315 at the same time. For example, the calculation processing using the following Equation (7) is performed to multiply the reduced images by the edge component of the luminance image, thereby performing the image enlargement and the resolution increase at the same time.
Outλ(x, y) = inλ(x, y) × luma(x, y)/lpf(luma(x, y)) (7)
For example, as indicated in Equation (7), the pixel value Outλ(x, y) of the high-resolution image output from the image processing part 301 can be calculated by use of the pixel value inλ(x, y) of the reduced images read from the storage 302, the pixel value luma(x, y) of the luminance image, and the low-pass filter lpf( ).
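A minimal sketch of Equation (7) in Python follows (assuming numpy/scipy, bilinear enlargement, and a simple box filter for lpf( ); the actual interpolation and low-pass filter of the apparatus are not specified here):

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def enlarge_and_restore(reduced, luma, scale, lpf_size=5):
    """Equation (7): Out(x, y) = in(x, y) * luma(x, y) / lpf(luma(x, y)).
    'reduced' is one reduced per-wavelength image and 'luma' the stored
    full-size luminance image; 'scale' must bring them to the same size."""
    enlarged = zoom(reduced, scale, order=1)             # bilinear enlargement
    lpf_luma = uniform_filter(luma, size=lpf_size)       # lpf(luma(x, y))
    return enlarged * luma / np.maximum(lpf_luma, 1e-6)  # multiply by edges
```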
Alternatively, the image processing part 301 may use a guided filter capable of performing high-accuracy image interpolation by use of the pixel value of a reference image, for example, for the image enlargement by the image enlargement part 314 and the resolution increase by the resolution increase processing part 315. Additionally, the guided filter is described in detail in Non-Patent Document 1.
The processing of reducing an image such that pixels in each wavelength band configuring a multispectral image are arranged in the same spatial phase by the image reduction part 312 will be described with reference to
As illustrated, the center of the 16 pixels arranged in a 4×4 matrix is assumed as the spatial phase after the image is reduced. Then, the pixel value of each color at the spatial phase is found by multiplying each pixel value by a coefficient depending on the reciprocal of the number of pixels of the same color included in the 16 pixels and on a weight depending on the distance from the center, and by integrating the respective pixel values per color. Here, in the example illustrated in
With the calculation using the coefficients, the image reduction part 312 can reduce the 16 pixels arranged in 4×4 so as to be arranged at their center spatial phase, and can generate the reduced images per color (wavelength band). Then, the reduced images in which the pixels of every wavelength band are arranged in the same spatial phase are used, thereby preventing an occurrence of false color, for example.
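The following is a minimal sketch of this reduction for one 4×4 block (assuming numpy; the concrete distance weighting and the color assignment 'colors' are placeholders, since the exact coefficients depend on the filter array):

```python
import numpy as np

def reduce_block(mosaic, colors, num_bands):
    """Reduce one 4x4 block of Raw mosaic data to a single pixel per band,
    placed at the common spatial phase: the block center at (1.5, 1.5)."""
    ys, xs = np.mgrid[0:4, 0:4]
    dist = np.hypot(ys - 1.5, xs - 1.5)  # distance from the block center
    w = 1.0 / (1.0 + dist)               # assumed distance-based weight
    out = np.zeros(num_bands)
    for band in range(num_bands):
        mask = colors == band
        # Weighted average over the same-color pixels; the normalization by
        # sum(w[mask]) plays the role of the reciprocal of the pixel count.
        out[band] = np.sum(mosaic[mask] * w[mask]) / np.sum(w[mask])
    return out
```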
The vertical axes indicate an intensity of light and the horizontal axes indicate a wavelength (λ) in the upper part and the lower part of
The spectroscopic correction processing part 313 performs the spectroscopic correction processing on the multispectral image shot by the imaging device 12 (here, the reduced images generated by the image reduction part 312) by use of the matrix with the transmission characteristics TMN illustrated in the middle part of
That is, a spectroscopic distribution of lights incident into the imaging device 12 is not necessarily correctly reproduced for each wavelength signal output from the imaging device 12 employing a plasmon filter. Thus, the spectroscopic correction processing part 313 makes matrix calculation (see the middle part of
Further, the number of colors (the number of wavelengths) output from the imaging device 12 is limited to the number of colors of the filters arranged in the imaging device 12. Thus, it is possible to increase the number of colors obtained by the spectroscopic correction processing part 313 (for example, to increase the number of colors from M in the upper part of
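A minimal sketch of this matrix calculation follows (assuming numpy; the N×M correction matrix T would in practice be derived from the measured transmission characteristics TMN, and its values here are placeholders):

```python
import numpy as np

def spectroscopic_correction(reduced_stack, T):
    """Map M detected bands to N corrected bands for every reduced pixel.
    reduced_stack: (M, H, W) stack of reduced images; T: (N, M) matrix."""
    M, H, W = reduced_stack.shape
    corrected = T @ reduced_stack.reshape(M, H * W)  # one matrix product
    return corrected.reshape(T.shape[0], H, W)

# Because the correction runs on the reduced images, its cost scales with
# the reduced pixel count; N may exceed M to increase the number of colors.
```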
For example, the processing is started when Raw data of a multispectral image shot by the imaging device 12 of
In step S12, the image reduction part 312 performs the processing of reducing an image per wavelength band configured of the Raw data of the multispectral image, generates reduced images per wavelength band, and supplies them to the spectroscopic correction processing part 313.
In step S13, the spectroscopic correction processing part 313 performs the spectroscopic correction processing on the reduced images per wavelength band supplied from the image reduction part 312 in step S12.
In step S14, the luminance image extracted from the Raw data of the multispectral image in step S11 and the reduced images per wavelength band subjected to the spectroscopic correction processing in step S13 are associated with each other and are stored in the storage 302, and then the processing is terminated.
As described above, the image processing part 301 performs the spectroscopic correction processing not on the Raw data itself of the multispectral image but on the reduced images per wavelength band, thereby speeding up the spectroscopic correction processing and achieving a reduction in the amount of stored data.
For example, the processing is started when the operation part (not illustrated) is operated to instruct to output the multispectral image stored in the storage 302 from the shooting apparatus 10. In step S21, the image enlargement part 314 obtains, from the storage 302, the reduced images per wavelength band corresponding to the multispectral image to be output from the shooting apparatus 10. The image enlargement part 314 then performs the processing of enlarging the reduced images per wavelength band, generates enlarged images per wavelength, and supplies them to the resolution increase processing part 315.
In step S22, the resolution increase processing part 315 obtains, from the storage 302, the luminance image corresponding to the multispectral image to be output from the shooting apparatus 10. The resolution increase processing part 315 then performs the resolution increase processing, by use of the luminance image, on the enlarged images per wavelength band supplied from the image enlargement part 314 in step S21, and generates and outputs high-resolution images per wavelength band.
As described above, the image processing part 301 performs the resolution increase processing on the enlarged images per wavelength band by use of the luminance image, thereby preventing an adverse impact on the resolution due to the stored reduced images per wavelength band for the multispectral image output from the shooting apparatus 10.
Further, a spectroscopic distribution of lights incident into the imaging device 12 is not necessarily correctly reproduced in the configuration in which the plasmon filter as described above is employed for the imaging device 12 in the shooting apparatus 10. It is therefore advantageous that the spectroscopic correction processing part 313 in the image processing part 301 performs the spectroscopic correction processing thereby to enhance the reproducibility of the spectroscopic distribution. Particularly, the spatial resolution lowers with an increase in the number of colors of the imaging device 12, and it is therefore advantageous that the number of colors is increased by the spectroscopic correction processing in the image processing part 301 thereby to avoid a reduction in the spatial resolution. Then, the image processing part 301 can achieve speed-up of the processing by use of the reduced images as described above, which is advantageous to be applied to the processing on the multispectral image output from the imaging device 12 employing a plasmon filter.
A central processing unit (CPU) 401, a read only memory (ROM) 402, a random access memory (RAM) 403, and an electronically erasable and programmable read only memory (EEPROM) 404 are mutually connected via a bus 405 in the computer. The bus 405 is further connected with an I/O interface 406, and the I/O interface 406 is connected to the outside (such as the memory 13, the output part 15, or the like).
In the thus-configured computer, the CPU 401 loads the programs stored in the ROM 402 and the EEPROM 404, for example, into the RAM 403 via the bus 405 and executes them, so that the processing is performed. Further, the programs executed by the computer (the CPU 401) can be previously written in the ROM 402, and additionally can be installed in the EEPROM 404 from the outside via the I/O interface 406 or updated.
Additionally, the image processing part 301 can be mounted on the signal processing part 14 in the shooting apparatus 10 of
Variants of the embodiments of the present technology described above will be described below.
For example, three or more kinds of film thicknesses of the conductive thin film may be set depending on a hole pitch (transmission band).
Further, the thickness of the conductive thin film (dot) may be changed depending on a dot pitch (absorption band) also in a plasmon filter in a dot array structure.
Specifically, as illustrated in
Further, as the conductive thin film configuring the dots is thinner, basically the absorption rate is lower and the peak width and the half bandwidth of the absorption band are smaller. To the contrary, as the conductive thin film configuring the dots is thicker, basically the peak width and the half bandwidth of the absorption band are larger and the absorption rate is higher.
Therefore, for example, in a case where the dot pitch of a plasmon filter is narrower and the absorption band is on the shorter wavelength side, it is desirable that the conductive thin film is thickened to increase the absorption rate, even if the peak width and the half bandwidth of the absorption band become slightly larger. On the other hand, in a case where the dot pitch of a plasmon filter is wider and the absorption band is on the longer wavelength side, it is desirable that the conductive thin film is thinned to reduce the peak width and the half bandwidth of the absorption band, even if the absorption rate becomes slightly lower.
Furthermore, for example, the film thickness of the conductive thin film may be changed per pixel in a plasmon filter with the same transmission band (hole pitch) or absorption band (dot pitch). Thereby, it is possible to provide pixels with the same transmission band or absorption band and different sensitivities and absorption rates. Therefore, for example, an accuracy of detecting narrowband lights of some pixels can be enhanced.
Further, the present technology can be applied to not only the CMOS image sensor of backside irradiation type described above but also other imaging devices using a plasmon filter. For example, the present technology can be applied to CMOS image sensors of surface irradiation type; charge coupled device (CCD) image sensors; image sensors in a photoconductor structure including organic photoelectric conversion film, quantum dot structure, or the like; and the like.
Further, the present technology can be applied to a solid-state shooting apparatus of laminated type illustrated in
A of
B of
In B of
In C of
Further, the present technology can be applied to a metallic thin film filter using a metallic thin film other than a plasmon filter, and applications such as to a photonic crystal using a semiconductor material are also conceivable.
Applications of the present technology will be described below.
The present technology can be applied in various cases for sensing a ray such as visible ray, infrared ray, ultraviolet ray, or X ray as illustrated in
Apparatuses for shooting images to be viewed such as digital camera or portable apparatus with camera function
Traffic apparatuses for safe driving such as automatic stop or recognition of a driver's state, for example, a vehicle-mounted sensor for shooting the front, rear, surroundings, or interior of an automobile, a monitoring camera for monitoring traveling vehicles and roads, or a distance measuring sensor for measuring an inter-vehicle distance or the like
Home electronics such as TV, refrigerator, and air conditioner for shooting user's gesture and performing a device operation according to the gesture
Medical-care or healthcare apparatuses such as an endoscope, or an apparatus for performing angiography by receiving infrared rays
Security apparatuses such as a monitoring camera for crime prevention, or a camera for person authentication
Beauty care apparatuses such as skin measurement device for shooting the skin, or microscope for shooting the scalp
Sports apparatuses such as action camera or wearable camera for sports or the like
Agricultural apparatuses such as a camera for monitoring the state of fields or crops
More specific applications will be described below.
For example, the transmission band of the narrowband filter NB of each pixel 51 in the shooting apparatus 10 of
For example,
For example, the peak wavelength of the detection band is in a range of 580 to 630 nm and the half bandwidth is in a range of 30 to 50 nm in a case where myoglobin indicating umami of tuna, beef, or the like is detected. The peak wavelength of the detection band is 980 nm and the half bandwidth is in a range of 50 to 100 nm in a case where oleic acid indicating freshness of tuna, beef, or the like is detected. The peak wavelength of the detection band is in a range of 650 to 700 nm and the half bandwidth is in a range of 50 to 100 nm in a case where chlorophyll indicating freshness of leaf vegetable such as brassica rapa is detected.
For example, the peak wavelength of the detection band is 880 nm and the half bandwidth is in a range of 20 to 30 nm in a case where a pulp optical path length indicating sugar content of raiden as a kind of melon is detected. The peak wavelength of the detection band is 910 nm and the half bandwidth is in a range of 40 to 50 nm in a case where sucrose indicating sugar content of raiden is detected. The peak wavelength of the detection band is 915 nm and the half bandwidth is in a range of 40 to 50 nm in a case where sucrose indicating sugar content of raiden red as another kind of melon is detected. The peak wavelength of the detection band is 955 nm and the half bandwidth is in a range of 20 to 30 nm in a case where water indicating sugar content of raiden red is detected.
The peak wavelength of the detection band is 912 nm and the half bandwidth is in a range of 40 to 50 nm in a case where sucrose indicating sugar content of apple is detected. The peak wavelength of the detection band is 844 nm and the half bandwidth is 30 nm in a case where water of mandarin orange is detected. The peak wavelength of the detection band is 914 nm and the half bandwidth is in a range of 40 to 50 nm in a case where sucrose indicating sugar content of mandarin orange is detected.
For example, the peak wavelength of the detection band is 1669 nm and the half bandwidth is in a range of 30 to 50 nm in a case where polyethylene terephthalate (PET) is detected. The peak wavelength of the detection band is 1688 nm and the half bandwidth is in a range of 30 to 50 nm in a case where polystyrene (PS) is detected. The peak wavelength of the detection band is 1735 nm and the half bandwidth is in a range of 30 to 50 nm in a case where polyethylene (PE) is detected. The peak wavelength of the detection band is in a range of 1716 to 1726 nm and the half bandwidth is in a range of 30 to 50 nm in a case where polyvinyl chloride (PVC) is detected. The peak wavelength of the detection band is in a range of 1716 to 1735 nm and the half bandwidth is in a range of 30 to 50 nm in a case where polypropylene (PP) is detected.
Further, the present technology can be applied to manage freshness of cut flowers, for example.
Further, the present technology can be applied to inspect a foreign material mixed into food, for example. The present technology can be applied to detect a foreign material such as peel, shell, stone, leaf, branch, and piece of wood mixed into nuts such as almond, blueberry, and walnut, fruits, and the like, for example. Further, the present technology can be applied to detect a foreign material such as plastic piece mixed into processed food, beverage, and the like, for example.
Further, the present technology can be applied to detect normalized difference vegetation index (NDVI) as an index of vegetation, for example.
Further, the present technology can be applied to detect a person on the basis of one or both of a spectroscopic shape at a wavelength of around 580 nm derived from hemoglobin of human skin and a spectroscopic shape at a wavelength of around 960 nm derived from melanin pigment included in human skin, for example.
Further, the present technology can be applied for biometric sensing (biometric authentication), user interface, prevention of falsification of sign and the like, monitoring, and the like, for example.
Further, the technology according to the present disclosure (the present technology) may be applied to an endoscopic surgery system, for example.
The endoscope 11100 is configured of a lens tube 11101, a region of which at a predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base of the lens tube 11101. In the illustrated example, the endoscope 11100 is configured as a rigid scope having the hard lens tube 11101, but the endoscope 11100 may be configured as a flexible scope having a flexible lens tube.
An opening with an objective lens fitted therein is provided at the tip of the lens tube 11101. A light source apparatus 11203 is connected to the endoscope 11100, and a light generated by the light source apparatus 11203 is guided to the tip of the lens tube by a light guide extending into the lens tube 11101, and is irradiated toward an object to be observed in the body cavity of the patient 11132 via the objective lens. Additionally, the endoscope 11100 may be a direct-viewing endoscope, or may be an oblique-viewing or side-viewing endoscope.
An optical system and an imaging device are provided inside the camera head 11102, and a reflected light (observation light) from an object to be observed is condensed on the imaging device via the optical system. The observation light is photoelectrically converted by the imaging device, and an electric signal corresponding to the observation light, or an image signal corresponding to the observed image is generated. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
The CCU 11201 is configured of a central processing unit (CPU), a graphics processing unit (GPU), or the like, and totally controls the operations of the endoscope 11100 and a display apparatus 11202. Further, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing for displaying an image based on the image signal, such as development processing (demosaic processing) or the like, on the image signal.
The display apparatus 11202 displays the image based on the image signal subjected to the image processing by the CCU 11201 under control of the CCU 11201.
The light source apparatus 11203 is configured of a light source such as light emitting diode (LED) or the like, and supplies an irradiation light to the endoscope 11100 when shooting a surgical site or the like.
An input apparatus 11204 is an input interface for the endoscopic surgery system 11000. A user can input various items of information or instructions into the endoscopic surgery system 11000 via the input apparatus 11204. For example, the user inputs an instruction or the like to change shooting conditions (such as kind of irradiation light, magnification, and focal distance) of the endoscope 11100.
A processing tool control apparatus 11205 controls to drive the energy treatment tool 11112 for cauterizing or cutting a tissue, sealing a blood vessel, and the like. A pneumoperitoneum apparatus 11206 feeds gas into the body cavity via the pneumoperitoneum tube 11111 to expand the body cavity of the patient 11132 in order to secure the field of view of the endoscope 11100 and to secure a working space of the operator. A recorder 11207 is an apparatus capable of recording various items of information regarding a surgery. A printer 11208 is an apparatus capable of printing various items of information regarding a surgery in various forms such as text, image, or graph.
Additionally, the light source apparatus 11203 for supplying an irradiation light to the endoscope 11100 when shooting a surgical site can be configured of a white light source including an LED, a laser light source, or a combination thereof, for example. In a case where the white light source is configured in a combination of RGB laser light sources, the output intensity and the output timing of each color (each wavelength) can be controlled with high accuracy, and thus the white balance of a shot image can be adjusted in the light source apparatus 11203. Further, in this case, the laser lights from the respective RGB laser light sources are irradiated on an object to be observed in a time division manner, and the imaging device in the camera head 11102 is controlled to be driven in synchronization with the irradiation timings, thereby shooting the images corresponding to RGB in a time division manner. According to the method, a color image can be obtained without a color filter in the imaging device.
Further, the light source apparatus 11203 may be controlled to be driven so as to change the intensity of a light to be output every predetermined time. The imaging device in the camera head 11102 is controlled to be driven in synchronization with the timings of changing the light intensity thereby to obtain images in a time division manner, and the images are combined thereby to generate an image with a high dynamic range without blocked-up shadows or blown-out highlights.
Further, the light source apparatus 11203 may be configured to supply a light in a predetermined wavelength band corresponding to special light observation. Under the special light observation, a light in a narrower band than an irradiation light (or white light) during normal observation is irradiated by use of the wavelength dependency of absorption of a light in a body tissue, thereby performing narrow band imaging for shooting a predetermined tissue such as a blood vessel in the superficial portion of the mucous membrane at high contrast, for example. Alternatively, under the special light observation, fluorescent observation for obtaining an image by fluorescence caused by irradiating an excitation light may be performed. Under the fluorescent observation, an excitation light can be irradiated on a body tissue thereby to observe fluorescence from the body tissue (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into a body tissue and an excitation light corresponding to the fluorescent wavelength of the reagent can be irradiated on the body tissue thereby to obtain a fluorescent image, for example. The light source apparatus 11203 can be configured to supply a narrowband light and/or excitation light corresponding to the special light observation.
The camera head 11102 has a lens unit 11401, a shooting part 11402, a driving part 11403, a communication part 11404, and a camera head control part 11405. The CCU 11201 has a communication part 11411, an image processing part 11412, and a control part 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other via a transmission cable 11400.
The lens unit 11401 is an optical system provided at the connection part to the lens tube 11101. An observation light taken in from the tip of the lens tube 11101 is guided to the camera head 11102, and is incident on the lens unit 11401. The lens unit 11401 is configured in a combination of a plurality of lenses including a zoom lens and a focus lens.
The shooting part 11402 may be configured of one imaging device (or single plate) or may be configured of a plurality of imaging devices (or multiplate). In a case where the shooting part 11402 is configured in multiplate, the image signals corresponding to RGB are generated by the respective imaging devices, and are combined thereby to obtain a color image, for example. Alternatively, the shooting part 11402 may have a pair of imaging devices for obtaining right-eye and left-eye image signals for 3-dimensional (3D) display. 3D display is performed so that the operator 11131 can more accurately grasp the depth of a body tissue at a surgical site. Additionally, in a case where the shooting part 11402 is configured in multiplate, a plurality of lens units 11401 can be provided in correspondence with the respective imaging devices.
Further, the shooting part 11402 may not necessarily be provided in the camera head 11102. For example, the shooting part 11402 may be provided immediately behind the objective lens inside the lens tube 11101.
The driving part 11403 is configured of an actuator, and moves the zoom lens and the focus lens in the lens unit 11401 by a predetermined distance along the optical axis under control of the camera head control part 11405. Thereby, the magnification and the focal point of an image shot by the shooting part 11402 can be adjusted as needed.
The communication part 11404 is configured of a communication apparatus for exchanging various items of information with the CCU 11201. The communication part 11404 transmits an image signal obtained from the shooting part 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
Further, the communication part 11404 receives, from the CCU 11201, a control signal for controlling the driving of the camera head 11102, and supplies it to the camera head control part 11405. The control signal includes information regarding shooting conditions such as information for designating a frame rate of a shot image, information for designating an exposure value on shooting, and/or information for designating the magnification and the focal point of a shot image, for example.
Additionally, the shooting conditions such as frame rate, exposure value, magnification, and focal point may be designated by the user as needed, or may be automatically set by the control part 11413 in the CCU 11201 on the basis of the obtained image signal. In the latter case, the auto exposure (AE) function, the auto focus (AF) function, and the auto white balance (AWB) function are mounted on the endoscope 11100.
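For instance, an auto exposure function of the kind mentioned above could adjust the exposure value from the obtained image signal along the following lines (a minimal sketch; the target level, gain, and function name are assumptions, not the control law of the CCU 11201):

```python
import numpy as np

def update_exposure(exposure_ev, image, target_mean=118.0, gain=0.5):
    """Nudge the exposure value (in EV steps) so that the mean luminance of
    the obtained image approaches a target level."""
    mean = float(image.mean())
    # Work in log2 space: one EV step roughly doubles the mean luminance.
    error_ev = float(np.log2(target_mean / max(mean, 1e-3)))
    return exposure_ev + gain * error_ev
```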
The camera head control part 11405 controls the driving of the camera head 11102 on the basis of the control signal from the CCU 11201 received via the communication part 11404.
The communication part 11411 is configured of a communication apparatus for exchanging various items of information with the camera head 11102. The communication part 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
Further, the communication part 11411 transmits the control signal for controlling the driving of the camera head 11102 to the camera head 11102. The image signal or control signal can be transmitted via electric communication, optical communication, or the like.
The image processing part 11412 performs various image processing on the image signal as RAW data transmitted from the camera head 11102.
The control part 11413 performs various controls regarding shooting of a surgical site or the like by the endoscope 11100 and display of a shot image obtained by the shooting. For example, the control part 11413 generates the control signal for controlling the driving of the camera head 11102.
Further, the control part 11413 causes the display apparatus 11202 to display a shot image of a surgical site or the like on the basis of the image signal subjected to the image processing by the image processing part 11412. At this time, the control part 11413 may recognize various objects in the shot image by use of various image recognition technologies. For example, the control part 11413 detects the shapes, colors, and the like of the edges of the objects included in the shot image, thereby recognizing a surgical tool such as forceps, a specific living body site, bleeding, mist during the use of the energy treatment tool 11112, and the like. When causing the display apparatus 11202 to display a shot image, the control part 11413 may superimpose various items of surgery support information on the image of the surgical site by use of the recognition result. The surgery support information is superimposed, displayed, and presented to the operator 11131, so that the burden on the operator 11131 can be alleviated and the operator 11131 can perform the surgery accurately.
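A minimal sketch of such superimposed display, assuming the recognition results are already available as bounding boxes (the box format and function name are hypothetical):

```python
import numpy as np

def overlay_support_info(image, boxes, color=(0, 255, 0)):
    """Draw rectangular outlines around recognized objects (forceps, bleeding,
    and the like) on a copy of the shot image as surgery support information.

    boxes: iterable of (top, left, bottom, right) pixel coordinates.
    """
    out = image.copy()
    for top, left, bottom, right in boxes:
        out[top, left:right] = color          # top edge
        out[bottom - 1, left:right] = color   # bottom edge
        out[top:bottom, left] = color         # left edge
        out[top:bottom, right - 1] = color    # right edge
    return out
```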
The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electric signal cable for communication of electric signals, an optical fiber for optical communication, or a composite cable thereof.
Here, wired communication is made by use of the transmission cable 11400 in the illustrated example, but wireless communication may be made between the camera head 11102 and the CCU 11201.
An exemplary endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the camera head 11102 or the shooting part 11402 in the camera head 11102 among the above-described components, for example. Specifically, the imaging device 12 described above can be applied to the shooting part 11402, for example.
Additionally, the endoscopic surgery system has been described herein by way of example, but the technology according to the present disclosure may be additionally applied to a microscopic surgery system and the like, for example.
Further, the technology according to the present disclosure may be realized as an apparatus mounted on any kind of moving object such as vehicle, electric vehicle, hybrid vehicle, motorcycle, bicycle, personal mobility, airplane, drone, ship, or robot, for example.
A vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the illustrated example, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an exterior information detection unit 12030, an interior information detection unit 12040, a microcomputer 12051, an audio/image output part 12052, and the like.
The drive system control unit 12010 controls the operations of the apparatuses for the vehicle drive system according to various programs. For example, the drive system control unit 12010 functions as a control apparatus for a driving force generation apparatus such as an internal combustion engine or a drive motor for generating a driving force of the vehicle, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking apparatus for generating a braking force of the vehicle, and the like.
The body system control unit 12020 controls the operations of various apparatuses equipped in the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control apparatus for a keyless entry system, a smart key system, a power window apparatus, or various lights such as head lights, back lights, brake light, directional signals, or fog light. In this case, a radio wave transmitted from a portable machine substituting for a key, or signals of various switches, can be input into the body system control unit 12020. The body system control unit 12020 receives the input of the radio wave or signals, and controls the door lock apparatus, the power window apparatus, the lights, and the like of the vehicle.
The exterior information detection unit 12030 detects the information indicating the exterior of the vehicle mounting the vehicle control system 12000 thereon. For example, the exterior information detection unit 12030 is connected with a shooting part 12031. The exterior information detection unit 12030 causes the shooting part 12031 to shoot an image of the exterior of the vehicle, and receives the shot image. The exterior information detection unit 12030 may perform processing of detecting an object such as person, vehicle, obstacle, road sign, or character on the road, or a distance detection processing on the basis of the received image.
The shooting part 12031 is a light sensor for receiving a light and outputting an electric signal depending on the amount of received light. The shooting part 12031 can output the electric signal as an image, or can output it as distance measurement information. Further, a light received by the shooting part 12031 may be a visible ray or a non-visible ray such as infrared ray.
The interior information detection unit 12040 detects the information indicating the interior of the vehicle. The interior information detection unit 12040 is connected with a driver's state detection part 12041 for detecting a driver's state, for example. The driver's state detection part 12041 includes a camera for shooting the driver, for example, and the interior information detection unit 12040 may calculate a degree of fatigue or a degree of concentration of the driver or may determine whether the driver is asleep at the wheel on the basis of the detection information input from the driver's state detection part 12041.
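For example, a dozing determination of the kind mentioned above might be sketched as follows, assuming the driver's state detection part 12041 supplies a per-frame eyes-closed flag (a hypothetical input; the actual determination method is not specified in the present disclosure):

```python
from collections import deque

class DozingDetector:
    """Flag possible dozing when the eyes stay closed in most recent frames."""

    def __init__(self, window_frames=90, closed_ratio=0.8):
        self.history = deque(maxlen=window_frames)
        self.closed_ratio = closed_ratio

    def update(self, eyes_closed: bool) -> bool:
        """Feed one per-frame observation; return True when dozing is suspected."""
        self.history.append(eyes_closed)
        if len(self.history) < self.history.maxlen:
            return False  # not enough observations yet
        return sum(self.history) / len(self.history) >= self.closed_ratio
```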
The microcomputer 12051 can calculate a control target value of the driving force generation apparatus, the steering mechanism, or the braking apparatus on the basis of the information indicating the exterior or interior of the vehicle obtained by the exterior information detection unit 12030 or the interior information detection unit 12040, and can output a control instruction to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control for realizing the advanced driver assistance system (ADAS) functions including collision avoidance or collision alleviation of the vehicle, follow-up traveling based on inter-vehicle distance, constant-speed traveling, collision alarm of the vehicle, lane deviation alarm of the vehicle, and the like.
Further, the microcomputer 12051 controls the driving force generation apparatus, the steering mechanism, the braking apparatus, or the like on the basis of the information indicating the surroundings of the vehicle obtained by the exterior information detection unit 12030 or the interior information detection unit 12040, thereby performing cooperative control for automatic driving in which the vehicle travels autonomously irrespective of the driver's operation, and the like.
Further, the microcomputer 12051 can output a control instruction to the body system control unit 12020 on the basis of the information indicating the exterior of the vehicle obtained by the exterior information detection unit 12030. For example, the microcomputer 12051 can control the head lights depending on the position of a leading vehicle or an oncoming vehicle detected by the exterior information detection unit 12030, and can perform cooperative control in order to achieve anti-glare such as switching from high beam to low beam.
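One elementary form of such anti-glare control can be pictured as follows (the cutoff distance and function name are assumptions for illustration, not the disclosed control logic):

```python
def select_beam(detected_vehicle_distances_m, cutoff_m=400.0):
    """Switch to low beam when a leading or oncoming vehicle is detected
    within the cutoff distance; otherwise allow high beam."""
    if any(d < cutoff_m for d in detected_vehicle_distances_m):
        return "low_beam"
    return "high_beam"
```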
The audio/image output part 12052 transmits an output signal of at least one of audio or image to an output apparatus capable of visually or aurally notifying information to the passengers in the vehicle or the outside of the vehicle. In the illustrated example, an audio speaker 12061 and a display part 12062 are provided as such output apparatuses.
In the illustrated example, the vehicle 12100 includes shooting parts 12101, 12102, 12103, 12104, and 12105 as the shooting part 12031.
The shooting parts 12101, 12102, 12103, 12104, and 12105 are provided at the front nose, the side mirrors, the rear bumper or back door of the vehicle 12100, at the top part of the windshield inside the vehicle, and the like, respectively, for example. The shooting part 12101 provided at the front nose and the shooting part 12105 provided at the top part of the windshield inside the vehicle mainly obtain images in front of the vehicle 12100. The shooting parts 12102 and 12103 provided at the side mirrors mainly obtain images on both sides of the vehicle 12100. The shooting part 12104 provided at the rear bumper or back door mainly obtains an image behind the vehicle 12100. The shooting part 12105 provided at the top part of the windshield inside the vehicle is mainly used to detect a leading vehicle, a pedestrian, an obstacle, a traffic light, a road sign, a traffic lane, or the like.
Additionally, the shooting parts 12101 to 12104 have the shooting ranges 12111 to 12114, respectively.
At least one of the shooting parts 12101 to 12104 may have a function of obtaining distance information. For example, at least one of the shooting parts 12101 to 12104 may be a stereo camera configured of a plurality of imaging devices, or may be an imaging device having pixels for phase difference detection.
For example, the microcomputer 12051 finds a distance to each stereoscopic object in the shooting ranges 12111 to 12114 and a temporal change in the distance (relative speed to the vehicle 12100) on the basis of the distance information obtained from the shooting parts 12101 to 12104, thereby extracting, as a leading vehicle, the stereoscopic object closest to the vehicle 12100 on the road that travels at a predetermined speed (0 km/h or more, for example) substantially in the same direction as the vehicle 12100. Further, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance from the leading vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), or the like. In this way, it is possible to perform cooperative control for automatic driving in which the vehicle travels autonomously irrespective of the driver's operation, and the like.
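A minimal sketch of such leading-vehicle extraction and follow-up control, assuming each stereoscopic object is reported with a distance, a same-direction speed, and an on-path flag (all hypothetical field names):

```python
def extract_leading_vehicle(objects):
    """Among stereoscopic objects on the own path traveling substantially in
    the same direction at 0 km/h or more, pick the one closest to the vehicle.

    objects: list of dicts with 'distance_m', 'same_direction_speed_kmh',
             and 'on_own_path' entries.
    """
    candidates = [o for o in objects
                  if o["on_own_path"] and o["same_direction_speed_kmh"] >= 0.0]
    return min(candidates, key=lambda o: o["distance_m"], default=None)

def follow_up_command(leading, secured_gap_m=30.0):
    """Rough accelerate/brake decision for keeping the secured inter-vehicle gap."""
    if leading is None:
        return "cruise"
    return "brake" if leading["distance_m"] < secured_gap_m else "accelerate"
```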
For example, the microcomputer 12051 can classify stereoscopic object data regarding stereoscopic objects into two-wheel vehicles, standard-sized vehicles, large-sized vehicles, pedestrians, power poles, and other stereoscopic objects on the basis of the distance information obtained from the shooting parts 12101 to 12104, extract the data, and use it for automatic obstacle avoidance. For example, the microcomputer 12051 discriminates the obstacles around the vehicle 12100 as obstacles the driver of the vehicle 12100 can visually confirm or obstacles difficult to visually confirm. The microcomputer 12051 then determines a collision risk indicating a degree of risk of collision with each obstacle, and, when the collision risk is equal to or higher than a set value and there is a collision possibility, outputs an alarm to the driver via the audio speaker 12061 or the display part 12062 or performs forcible deceleration or avoidance steering via the drive system control unit 12010, thereby performing driving support for collision avoidance.
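One common way to grade such a collision risk is by time-to-collision; the following minimal sketch (thresholds and function name are illustrative assumptions, not the disclosed criteria) shows the idea:

```python
def collision_support(distance_m, closing_speed_mps,
                      alarm_ttc_s=3.0, brake_ttc_s=1.5):
    """Grade the collision risk by time-to-collision (TTC) and choose a
    driving support action: none, alarm, or forcible deceleration."""
    if closing_speed_mps <= 0.0:
        return "none"   # the obstacle is not being approached
    ttc = distance_m / closing_speed_mps
    if ttc < brake_ttc_s:
        return "brake"  # forcible deceleration or avoidance steering
    if ttc < alarm_ttc_s:
        return "alarm"  # warn via the audio speaker 12061 / display part 12062
    return "none"
```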
At least one of the shooting parts 12101 to 12104 may be an infrared camera for detecting an infrared ray. For example, the microcomputer 12051 determines whether a pedestrian is present in the images shot by the shooting parts 12101 to 12104, thereby recognizing the pedestrian. The pedestrian is recognized in a procedure of extracting the characteristic points in the images shot by the shooting parts 12101 to 12104 as infrared cameras and a procedure of performing pattern matching processing on a series of characteristic points indicating the contour of an object and determining whether the object is a pedestrian, for example. When the microcomputer 12051 determines that a pedestrian is present in the images shot by the shooting parts 12101 to 12104 and recognizes the pedestrian, the audio/image output part 12052 controls the display part 12062 to superimpose a square contour line on the recognized pedestrian for emphasis. Further, the audio/image output part 12052 may control the display part 12062 to display an icon or the like indicating a pedestrian at a desired position.
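An elementary stand-in for the pattern matching procedure above is normalized cross-correlation against a pedestrian template (a naive, brute-force sketch for illustration; a practical system would use the characteristic-point based matching described above):

```python
import numpy as np

def match_template(image, template, threshold=0.8):
    """Brute-force normalized cross-correlation of a template over an image;
    returns the (top, left) positions whose correlation exceeds the threshold."""
    ih, iw = image.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-6)
    hits = []
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-6)
            if float((p * t).mean()) >= threshold:
                hits.append((y, x))
    return hits
```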
An exemplary vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the shooting part 12031 among the above-described components, for example. Specifically, for example, the shooting apparatus 10 described above can be applied to the shooting part 12031.
Additionally, embodiments of the present technology are not limited to the above embodiments, and can be variously changed without departing from the scope of the present technology.
Additionally, the present technology can take the following configurations, for example.
(1)
An image processing apparatus including:
an image reduction part configured to reduce a multispectral image in which an object is shot by a light dispersed in many wavelength bands, and to generate reduced images for each of the wavelength bands; and
a spectroscopic correction processing part configured to perform spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands generated by the image reduction part.
(2)
The image processing apparatus according to (1),
in which the image reduction part reduces the multispectral image obtained by shooting the object by an imaging device including a metallic thin film filter which is provided closer to a light incident side than a photoelectric conversion device in at least some pixels and which is different in film thickness of a conductive thin film per pixel.
(3)
The image processing apparatus according to (1) or (2),
in which the image reduction part generates the reduced images for each of the wavelength bands from the multispectral image such that pixels in each wavelength band configuring the multispectral image are arranged in the same spatial phase.
(4)
The image processing apparatus according to any of (1) to (3), further including:
a luminance image extraction part configured to extract a single luminance image from the multispectral image,
in which the luminance image and the reduced images for each of the wavelength bands are associated with each other and are stored in a storage.
(5)
The image processing apparatus according to (4), further including:
an image enlargement part configured to enlarge the reduced images for each of the wavelength bands read from the storage to generate enlarged images for each of the wavelength bands; and
a resolution increase processing part configured to increase the resolution of the enlarged images enlarged by the image enlargement part by use of the luminance image corresponding to the reduced images for each of the wavelength bands read from the storage.
(6)
The image processing apparatus according to (5),
in which processing of generating the enlarged images and processing of increasing the resolution of the enlarged images are performed at the same time.
(7)
An image processing method including steps of:
reducing a multispectral image in which an object is shot by a light dispersed in many wavelength bands, and generating reduced images for each of the wavelength bands; and
performing spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands.
(8)
A program for causing a computer to perform an image processing including steps of:
reducing a multispectral image in which an object is shot by a light dispersed in many wavelength bands, and generating reduced images for each of the wavelength bands; and
performing spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands.
(9)
An electronic apparatus including:
an imaging device including a metallic thin film filter which is provided closer to a light incident side than a photoelectric conversion device in at least some pixels and which is different in film thickness of a conductive thin film per pixel;
an image reduction part configured to reduce a multispectral image obtained by shooting an object by a light dispersed in many wavelength bands by the imaging device, and generate reduced images for each of the wavelength bands; and
a spectroscopic correction processing part configured to perform spectroscopic correction processing of correcting a spectroscopic distribution of the reduced images for each of the wavelength bands generated by the image reduction part.
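Although the disclosure specifies no particular implementation, the flow of configurations (1) and (3) to (6) above can be pictured with the following minimal Python sketch (the 4 x 4 filter pattern, the identity correction matrix, the pixel-replication enlargement, and the luminance re-modulation step are all illustrative assumptions):

```python
import numpy as np

def reduce_multispectral(mosaic, ph, pw):
    """Configurations (1)/(3): split a mosaic multispectral image into one
    reduced image per wavelength band, each taken from the same spatial phase
    of the repeating ph x pw filter pattern."""
    return {(i, j): mosaic[i::ph, j::pw] for i in range(ph) for j in range(pw)}

def spectroscopic_correction(reduced, correction_matrix):
    """Configuration (1): correct the spectroscopic distribution with one
    linear matrix applied to the band vector of every reduced-image pixel."""
    keys = sorted(reduced)
    stack = np.stack([reduced[k] for k in keys], axis=-1)  # (h, w, n_bands)
    return stack @ correction_matrix.T                     # (h, w, n_out)

def enlarge_with_luminance(band, luminance, eps=1e-6):
    """Configurations (5)/(6): enlarge one reduced band to full resolution and
    increase its resolution by re-modulating with the stored luminance image."""
    sy = luminance.shape[0] // band.shape[0]
    sx = luminance.shape[1] // band.shape[1]
    up = np.kron(band, np.ones((sy, sx)))  # nearest-neighbour enlargement
    lum_small = luminance.reshape(band.shape[0], sy,
                                  band.shape[1], sx).mean(axis=(1, 3))
    lum_up = np.kron(lum_small, np.ones((sy, sx)))
    return up * (luminance / (lum_up + eps))

# Hypothetical 4 x 4 filter pattern, i.e., 16 wavelength bands.
mosaic = np.random.rand(256, 256).astype(np.float32)  # multispectral Raw data
luminance = mosaic.copy()                  # configuration (4): stored luminance
reduced = reduce_multispectral(mosaic, 4, 4)          # 16 images of 64 x 64
corrected = spectroscopic_correction(reduced, np.eye(16, dtype=np.float32))
restored = enlarge_with_luminance(reduced[(0, 0)], luminance)
print(corrected.shape, restored.shape)                # (64, 64, 16) (256, 256)
```

Because the spectroscopic correction operates on the reduced images rather than the full-resolution mosaic, the number of per-pixel matrix multiplications, and the amount of corrected data to be stored, falls by the pattern area (a factor of 16 in this sketch).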
Number | Date | Country | Kind
---|---|---|---
2016-241354 | Dec 2016 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2017/044630 | 12/12/2017 | WO | 00