The present invention relates to an imaging apparatus and an endoscope apparatus.
Imaging devices having primary-color filters of R (red), G (green), and B (blue) have been widely used in imaging apparatuses in recent years. When the passband of a color filter is wide, the amount of transmitted light increases and imaging sensitivity improves. For this reason, in a typical imaging device, a method of causing the transmittance characteristics of the R, G, and B color filters to intentionally overlap is used.
In phase-difference autofocus (AF) and the like, phase-difference detection using a parallax between two pupils is performed. For example, in Japanese Unexamined Patent Application, First Publication No. 2013-044806, an imaging apparatus including a pupil division optical system having a first pupil area transmitting R and G light and a second pupil area transmitting G and B light is disclosed. A phase difference is detected on the basis of a positional deviation between an R image and a B image acquired by a color imaging device mounted on this imaging apparatus.
According to a first aspect of the present invention, an imaging apparatus includes a pupil division optical system, an imaging device and a processor. The pupil division optical system includes a first pupil transmitting light of a first wavelength band and a second pupil transmitting light of a second wavelength band different from the first wavelength band. The imaging device is configured to capture an image of light transmitted through the pupil division optical system and a first color filter having a first transmittance characteristic and light transmitted through the pupil division optical system and a second color filter having a second transmittance characteristic partially overlapping the first transmittance characteristic, and output the captured image. The processor is configured to generate at least one of a first monochrome correction image and a second monochrome correction image as a monochrome correction image. The first monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the first transmittance characteristic. The second monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the second transmittance characteristic. The processor is configured to generate point information that represents a point on the monochrome correction image in accordance with an instruction from a user. The processor is configured to generate a mark. 
The processor is configured to superimpose the mark on the monochrome correction image or a processed image generated by processing the monochrome correction image on the basis of the point information and output the monochrome correction image or the processed image on which the mark is superimposed to a display unit.
According to a second aspect of the present invention, in the first aspect, the processor may be configured to generate the first monochrome correction image and the second monochrome correction image. The processor is configured to select at least one of the first monochrome correction image and the second monochrome correction image and output the selected image as the monochrome correction image.
According to a third aspect of the present invention, in the second aspect, the processor may be configured to select an image having a higher signal-to-noise ratio (SNR) out of the first monochrome correction image and the second monochrome correction image.
According to a fourth aspect of the present invention, in the second aspect, the processor may be configured to select at least one of the first monochrome correction image and the second monochrome correction image in accordance with an instruction from a user.
According to a fifth aspect of the present invention, in the second aspect, the processor is configured to calculate a phase difference between the first monochrome correction image and the second monochrome correction image. The point information may represent a measurement point that is a position at which the phase difference is calculated.
According to a sixth aspect of the present invention, in the second aspect, the processor may be configured to generate a third monochrome correction image and a fourth monochrome correction image. The third monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the first transmittance characteristic. The fourth monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the second transmittance characteristic. The processor may be configured to calculate a phase difference between the third monochrome correction image and the fourth monochrome correction image. The point information may represent a measurement point that is a position at which the phase difference is calculated.
According to a seventh aspect of the present invention, in the second aspect, the processor may be configured to designate at least one mode included in a plurality of modes in accordance with an instruction from a user. The processor may be configured to generate a processed image by performing image processing corresponding to the mode on at least part of the selected monochrome correction image and output the generated processed image to the display unit.
According to an eighth aspect of the present invention, in the seventh aspect, the processor may be configured to generate the processed image by performing at least one of enlargement processing, edge extraction processing, edge enhancement processing, and noise reduction processing on at least part of the monochrome correction image.
According to a ninth aspect of the present invention, in the seventh aspect, the processor may be configured to generate the processed image by performing enlargement processing and at least one of edge extraction processing, edge enhancement processing, and noise reduction processing on at least part of the monochrome correction image.
According to a tenth aspect of the present invention, an imaging apparatus includes a pupil division optical system, an imaging device, a correction unit, a user instruction unit, a mark generation unit, and a superimposition unit. The pupil division optical system includes a first pupil transmitting light of a first wavelength band and a second pupil transmitting light of a second wavelength band different from the first wavelength band. The imaging device is configured to capture an image of light transmitted through the pupil division optical system and a first color filter having a first transmittance characteristic and light transmitted through the pupil division optical system and a second color filter having a second transmittance characteristic partially overlapping the first transmittance characteristic, and output the captured image. The correction unit is configured to output at least one of a first monochrome correction image and a second monochrome correction image as a monochrome correction image. The first monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the first transmittance characteristic. The second monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the second transmittance characteristic. The user instruction unit is configured to output point information that represents a point on the monochrome correction image in accordance with an instruction from a user. The mark generation unit is configured to generate a mark. 
The superimposition unit is configured to superimpose the mark on the monochrome correction image or a processed image generated by processing the monochrome correction image on the basis of the point information and output the monochrome correction image or the processed image on which the mark is superimposed to a display unit.
According to an eleventh aspect of the present invention, in the tenth aspect, the correction unit may be configured to output the first monochrome correction image and the second monochrome correction image. The imaging apparatus may further include a selection unit configured to select at least one of the first monochrome correction image and the second monochrome correction image output from the correction unit and output the selected image as the selected monochrome correction image.
According to a twelfth aspect of the present invention, in the eleventh aspect, the imaging apparatus may further include a selection instruction unit configured to instruct the selection unit to select at least one of the first monochrome correction image and the second monochrome correction image. The selection unit may be configured to select at least one of the first monochrome correction image and the second monochrome correction image in accordance with an instruction from the selection instruction unit.
According to a thirteenth aspect of the present invention, an endoscope apparatus includes the imaging apparatus according to the first aspect.
When an imaging apparatus disclosed in Japanese Unexamined Patent Application, First Publication No. 2013-044806 captures an image of a subject at a position away from the focusing position, color shift in an image occurs. The imaging apparatus including a pupil division optical system disclosed in Japanese Unexamined Patent Application, First Publication No. 2013-044806 approximates a shape and a centroid position of blur in an R image and a B image to a shape and a centroid position of blur in a G image so as to display an image in which double images due to color shift are suppressed.
In the imaging apparatus disclosed in Japanese Unexamined Patent Application, First Publication No. 2013-044806, correction of an R image and a B image is performed on the basis of a shape of blur in a G image. For this reason, the premise is that a waveform of a G image has no distortion (no double images). However, there are cases in which a waveform of a G image has distortion. Hereinafter, distortion of a waveform of a G image will be described with reference to
There are cases in which a user performs pointing, i.e., designation of a point on a displayed image. For example, in an industrial endoscope apparatus, it is possible to perform measurement on the basis of a measurement point designated by a user and to inspect damage and the like on the basis of the measurement result. However, when an image including the above-described double images is displayed, there is an issue in that it is difficult for a user to perform pointing with high accuracy.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
A schematic configuration of the imaging apparatus 10 will be described. The pupil division optical system 100 includes a first pupil 101 transmitting light of a first wavelength band and a second pupil 102 transmitting light of a second wavelength band different from the first wavelength band. The imaging device 110 captures an image of light transmitted through the pupil division optical system 100 and a first color filter having a first transmittance characteristic, captures an image of light transmitted through the pupil division optical system 100 and a second color filter having a second transmittance characteristic partially overlapping the first transmittance characteristic, and outputs a captured image. The correction unit 130 outputs at least one of a first monochrome correction image and a second monochrome correction image as a monochrome correction image. The first monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the first transmittance characteristic. The second monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the second transmittance characteristic. The user instruction unit 140 outputs point information that represents a point on the monochrome correction image in accordance with an instruction from a user. The mark generation unit 150 generates a mark. The superimposition unit 160 superimposes the mark on the monochrome correction image on the basis of the point information and outputs the monochrome correction image on which the mark is superimposed to the display unit 170. 
The display unit 170 displays the monochrome correction image on which the mark is superimposed.
A detailed configuration of the imaging apparatus 10 will be described. The first pupil 101 of the pupil division optical system 100 includes an RG filter transmitting light of wavelengths of R (red) and G (green). The second pupil 102 of the pupil division optical system 100 includes a BG filter transmitting light of wavelengths of B (blue) and G (green).
The imaging device 110 is a photoelectric conversion element such as a charge-coupled device (CCD) sensor or an XY-address-scanning complementary metal oxide semiconductor (CMOS) sensor. Configurations of the imaging device 110 include a single-plate type with a primary-color Bayer array and a three-plate type using three sensors. Hereinafter, an embodiment of the present invention will be described with reference to examples in which a single-plate primary-color Bayer-array CMOS sensor (500×500 pixels and a depth of 10 bits) is used.
The imaging device 110 includes a plurality of pixels. In addition, the imaging device 110 includes color filters including a first color filter, a second color filter, and a third color filter. The color filters are disposed in each pixel of the imaging device 110. For example, the first color filter is an R filter, the second color filter is a B filter, and the third color filter is a G filter. Light transmitted through the pupil division optical system 100 and the color filters is incident on each pixel of the imaging device 110. Light transmitted through the pupil division optical system 100 contains light transmitted through the first pupil 101 and light transmitted through the second pupil 102. The imaging device 110 acquires and outputs a captured image including a pixel value of a first pixel on which light transmitted through the first color filter is incident, a pixel value of a second pixel on which light transmitted through the second color filter is incident, and a pixel value of a third pixel on which light transmitted through the third color filter is incident.
Analog front-end (AFE) processing such as correlated double sampling (CDS), analog gain control (AGC), and analog-to-digital conversion (ADC) is performed by the imaging device 110 on an analog captured-image signal generated through photoelectric conversion in the CMOS sensor. A circuit outside the imaging device 110 may perform the AFE processing. A captured image (Bayer image) acquired by the imaging device 110 is transferred to the demosaic processing unit 120.
The demosaic processing unit 120 converts the Bayer image into an RGB image, thereby generating a color image.
The demosaic processing unit 120 performs black-level correction (optical-black (OB) subtraction) on pixel values of a Bayer image. In addition, the demosaic processing unit 120 generates pixel values of adjacent pixels by copying pixel values of pixels. In this way, an RGB image having pixel values of each color in all the pixels is generated. For example, after the demosaic processing unit 120 performs OB subtraction on an R pixel value (R_00), the demosaic processing unit 120 copies a pixel value (R_00−OB). In this way, R pixel values in Gr, Gb, and B pixels adjacent to an R pixel are interpolated.
Similarly, after the demosaic processing unit 120 performs OB subtraction on a Gr pixel value (Gr_01), the demosaic processing unit 120 copies a pixel value (Gr_01−OB). In addition, after the demosaic processing unit 120 performs OB subtraction on a Gb pixel value (Gb_10), the demosaic processing unit 120 copies a pixel value (Gb_10−OB). In this way, G pixel values in an R pixel adjacent to a Gr pixel and in a B pixel adjacent to a Gb pixel are interpolated.
Similarly, after the demosaic processing unit 120 performs OB subtraction on a B pixel value (B_11), the demosaic processing unit 120 copies a pixel value (B_11−OB). In this way, B pixel values in R, Gr, and Gb pixels adjacent to a B pixel are interpolated.
The demosaic processing unit 120 generates a color image (RGB image) including an R image, a G image, and a B image through the above-described processing. A specific method of demosaic processing is not limited to the above-described method. Filtering processing may be performed on a generated RGB image. An RGB image generated by the demosaic processing unit 120 is transferred to the correction unit 130.
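The copy-based interpolation described above can be sketched as follows. This is a minimal illustration assuming an RGGB Bayer layout represented as a NumPy array; the OB value and the function name are introduced only for this sketch and are not part of the specification.

```python
import numpy as np

def demosaic_nearest(bayer, ob=64):
    """Nearest-neighbor demosaic of an RGGB Bayer image with
    optical-black (OB) subtraction.

    bayer: 2D integer array with even dimensions, laid out as
        R  Gr
        Gb B
    Returns an (H, W, 3) RGB image in which each sampled value
    is copied into its adjacent pixels, as described in the text.
    """
    h, w = bayer.shape
    # OB subtraction on every pixel value (clipped at 0).
    corrected = np.clip(bayer.astype(np.int32) - ob, 0, None)
    rgb = np.zeros((h, w, 3), dtype=np.int32)
    r = corrected[0::2, 0::2]   # R_00 positions
    gr = corrected[0::2, 1::2]  # Gr_01 positions
    gb = corrected[1::2, 0::2]  # Gb_10 positions
    b = corrected[1::2, 1::2]   # B_11 positions
    # R and B values fill their entire 2x2 neighborhood.
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    # Gr is copied into the adjacent R pixel; Gb into the adjacent B pixel.
    g = np.zeros((h, w), dtype=np.int32)
    g[0::2, :] = np.repeat(gr, 2, axis=1)
    g[1::2, :] = np.repeat(gb, 2, axis=1)
    rgb[..., 1] = g
    return rgb
```

Applying the function to a single 2×2 Bayer cell shows each OB-subtracted sample being copied into its neighbors.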
Details of processing performed by the correction unit 130 will be described.
An area between the line fR and the line fB in an area of longer wavelengths than the wavelength λC in the spectral characteristics shown by the line fR is defined as an area φR. An area of longer wavelengths than the wavelength λC in the spectral characteristics shown by the line fB is defined as an area φRG. An area between the line fB and the line fR in an area of shorter wavelengths than the wavelength λC in the spectral characteristics shown by the line fB is defined as an area φB. An area of shorter wavelengths than the wavelength λC in the spectral characteristics shown by the line fR is defined as an area φGB.
In a method in which a phase difference is acquired on the basis of an R image and a B image, for example, the difference between a phase of R (red) information and a phase of B (blue) information is acquired. R information is acquired through photoelectric conversion in R pixels of the imaging device 110 in which R filters are disposed. The R information includes information of the area φR, the area φRG, and the area φGB in
On the other hand, B information is acquired through photoelectric conversion in B pixels of the imaging device 110 in which B filters are disposed. The B information includes information of the area φB, the area φRG, and the area φGB in
Correction is performed so that the information of the area φGB, which includes blue information, is reduced in the red information and the information of the area φRG, which includes red information, is reduced in the blue information. The correction unit 130 performs this correction processing on the R image and the B image. In other words, the correction unit 130 reduces the information of the area φGB in the red information and reduces the information of the area φRG in the blue information.
R′=R−α×G (1)
B′=B−β×G (2)
In Expression (1), R is red information before the correction processing is performed and R′ is red information after the correction processing is performed. In Expression (2), B is blue information before the correction processing is performed and B′ is blue information after the correction processing is performed. In this example, α and β are larger than 0 and smaller than 1. α and β are set in accordance with the spectral characteristics of the imaging device 110. In a case where the imaging apparatus 10 includes a light source for illumination, α and β are set in accordance with the spectral characteristics of the imaging device 110 and spectral characteristics of the light source. For example, α and β are stored in a memory not shown.
A value that is based on components overlapping between the spectral characteristics of the R filter and the spectral characteristics of the B filter is corrected through the operations shown in Expression (1) and Expression (2). The correction unit 130 generates an image (monochrome correction image) corrected as described above. The correction unit 130 outputs the monochrome correction image by outputting one of the generated R′ image and the generated B′ image. For example, the correction unit 130 outputs the R′ image. In the first embodiment, one of the R′ image and the B′ image is output to the display unit 170. The correction unit 130 may generate both the R′ image and the B′ image and output only one of them. Alternatively, the correction unit 130 may generate only a predetermined one of the R′ image and the B′ image.
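The operations of Expression (1) and Expression (2) can be sketched as follows, assuming the R, G, and B images are NumPy arrays. The coefficient values and the clipping of negative results to 0 are assumptions made for illustration; as stated above, α and β are in practice set in accordance with the spectral characteristics of the imaging device 110 (and of the light source, where one is provided).

```python
import numpy as np

# Example coefficients, 0 < alpha < 1 and 0 < beta < 1.
# These particular values are assumptions for illustration only.
ALPHA = 0.4
BETA = 0.3

def correct_overlap(r_img, g_img, b_img, alpha=ALPHA, beta=BETA):
    """Apply Expression (1) and Expression (2):
        R' = R - alpha * G
        B' = B - beta  * G
    Negative results are clipped to 0 (a practical assumption;
    the text does not specify how negative values are handled).
    """
    r_corr = np.clip(r_img.astype(np.float64) - alpha * g_img, 0, None)
    b_corr = np.clip(b_img.astype(np.float64) - beta * g_img, 0, None)
    return r_corr, b_corr
```

The returned R′ and B′ images are the candidate monochrome correction images from which the output image is chosen.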
The superimposition unit 160 outputs the monochrome correction image output from the correction unit 130 to the display unit 170. The display unit 170 displays the monochrome correction image output from the superimposition unit 160.
The user instruction unit 140 is a user interface such as a button, a switch, a key, or a mouse. The user instruction unit 140 and the display unit 170 may be constituted as a touch panel. A user touches a position of interest on the monochrome correction image displayed on the display unit 170 with a finger, clicks it with a mouse, or the like. In this way, the user performs pointing on the monochrome correction image through the user instruction unit 140. The user instruction unit 140 outputs point information of the position instructed by the user to the mark generation unit 150. For example, the point information is coordinate information such as (x, y)=(200, 230). For example, a user performs pointing in order to mark a subject seen in the monochrome correction image. In a case where the imaging apparatus 10 is constituted as an endoscope apparatus, a user performs pointing in order to mark damage or the like seen in the monochrome correction image.
The mark generation unit 150 generates graphic data of a mark. The mark has an arbitrary shape and an arbitrary color. A user may designate a shape and a color of the mark. The mark generation unit 150 outputs the generated mark and the point information output from the user instruction unit 140 to the superimposition unit 160.
The superimposition unit 160 superimposes the mark on the monochrome correction image output from the correction unit 130. At this time, the superimposition unit 160 superimposes the mark on a position represented by the point information in the monochrome correction image. In this way, the mark is superimposed on a position at which a user has performed pointing. The monochrome correction image on which the mark has been superimposed is output to the display unit 170. The display unit 170 displays the monochrome correction image on which the mark has been superimposed. A user can confirm the position designated by the user in the monochrome correction image.
The point information may be directly output from the user instruction unit 140 to the superimposition unit 160. The mark generation unit 150 may generate an image that has the same size as the monochrome correction image and in which the mark is placed at the position represented by the point information. The image generated by the mark generation unit 150 corresponds to a transparent image on which the mark has been superimposed. The superimposition unit 160 may generate an image by overlapping the monochrome correction image output from the correction unit 130 with the image output from the mark generation unit 150.
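The superimposition on the basis of the point information can be sketched as follows. The square shape, the mark color, and the function name are arbitrary choices for this illustration, consistent with the statement above that the mark has an arbitrary shape and color.

```python
import numpy as np

def superimpose_mark(mono_img, point, mark_color=255, radius=2):
    """Draw a small square mark centered at `point` = (x, y) onto a
    copy of the monochrome correction image. A minimal stand-in for
    the superimposition unit 160; the mark shape and color here are
    arbitrary examples.
    """
    out = mono_img.copy()  # leave the input image unmodified
    x, y = point
    h, w = out.shape
    # Clamp the mark region to the image boundaries.
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    out[y0:y1, x0:x1] = mark_color
    return out
```

The returned image, with the mark at the user-designated position, is what would be output to the display unit 170.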
Image-quality enhancement processing such as γ correction, scaling processing, edge enhancement, and low-pass filtering processing may be performed on the monochrome correction image (R′ image) output from the correction unit 130. In scaling processing, bicubic interpolation, nearest-neighbor interpolation, and the like are used. In low-pass filtering processing, folding distortion (aliasing) is reduced. The correction unit 130 may perform these pieces of processing on the monochrome correction image. In other words, the correction unit 130 may generate a processed image by processing the monochrome correction image. Alternatively, the imaging apparatus 10 may include an image processing unit that performs these pieces of processing on the monochrome correction image. The superimposition unit 160 may output the processed image to the display unit 170. In addition, the superimposition unit 160 may superimpose the mark on a processed image generated by processing the monochrome correction image on the basis of the point information and output the processed image on which the mark has been superimposed to the display unit 170. The display unit 170 may display the processed image and the monochrome correction image on which the mark has been superimposed.
In a case where scaling processing is performed on the monochrome correction image, the scaling information is reported to the mark generation unit 150 in order to match the position designated by a user with the position on which the mark is superimposed. For example, in a monochrome correction image (without scaling) having a size of 500×500, when a user designates the position (x, y)=(200, 230), the mark needs to be generated at the position (x, y)=(200, 230). On the other hand, when scaling is performed, the position designated by the user needs to be converted in accordance with the scaling. For example, when a monochrome correction image having a size of 500×500 is enlarged to twice that size (1000×1000), the coordinates (x, y)=(200, 230) correspond to the position (x, y)=(400, 460) in the processed image. For this reason, the mark needs to be generated at those coordinates.
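The coordinate conversion described above can be sketched as follows; the function name is introduced only for this illustration, and rounding to integer pixel coordinates is an assumption.

```python
def convert_point_for_scaling(point, scale):
    """Convert a user-designated point on the unscaled monochrome
    correction image into the coordinate system of the scaled
    processed image, e.g. (200, 230) at scale 2 becomes (400, 460).
    """
    x, y = point
    # Multiply each coordinate by the scale factor and round to the
    # nearest integer pixel position.
    return (int(round(x * scale)), int(round(y * scale)))
```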
The demosaic processing unit 120, the correction unit 130, the mark generation unit 150, and the superimposition unit 160 may be constituted by an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a microprocessor, and the like. For example, the demosaic processing unit 120, the correction unit 130, the mark generation unit 150, and the superimposition unit 160 may be constituted by an ASIC and an embedded processor. The demosaic processing unit 120, the correction unit 130, the mark generation unit 150, and the superimposition unit 160 may be constituted by hardware, software, firmware, or combinations thereof other than the above.
The display unit 170 is, for example, a transmissive liquid crystal display (LCD) requiring a backlight, a self-light-emitting electroluminescence (EL) element (organic EL), or the like. For example, the display unit 170 is constituted as a transmissive LCD and includes a driving unit necessary for driving the LCD. The driving unit generates a driving signal and drives the LCD by using the driving signal.
The imaging apparatus 10 may be an endoscope apparatus. In an industrial endoscope, the pupil division optical system 100 and the imaging device 110 are disposed at the distal end of an insertion unit that is to be inserted into the inside of an object for observation and measurement.
The imaging apparatus 10 according to the first embodiment includes the correction unit 130 and thus can suppress double images due to color shift of an image. In addition, since a monochrome correction image is displayed, visibility of an image can be improved. Even when a user observes an image in a method in which a phase difference is acquired on the basis of an R image and a B image, the user can observe an image in which double images due to color shift are suppressed and visibility is improved.
A user can observe a monochrome correction image or a processed image displayed on the display unit 170 and perform pointing on the image. Since the image in which double images due to color shift are suppressed is displayed, a user can easily perform pointing. In other words, a user can perform pointing with higher accuracy.
Since the display unit 170 displays a monochrome correction image, the amount of information output to the display unit 170 is reduced. For this reason, power consumption of the display unit 170 can be reduced.
The imaging apparatus 10a does not include the display unit 170. The display unit 170 is constituted independently of the imaging apparatus 10a. A monochrome correction image output from the correction unit 130 may be output to the display unit 170 via a communicator. For example, the communicator performs wired or wireless communication with the display unit 170.
In terms of points other than the above, the configuration shown in
The imaging apparatus 10a according to the second embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing is easier, as with the imaging apparatus 10 according to the first embodiment. Since the display unit 170 is independent of the imaging apparatus 10a, the imaging apparatus 10a can be miniaturized. In addition, since a monochrome correction image rather than a color image is transferred, the frame rate of transfer to the display unit 170 can be increased and the bit rate can be reduced.
The imaging apparatus 10b includes a selection unit 180 in addition to the configuration of the imaging apparatus 10 shown in
In terms of points other than the above, the configuration shown in
The imaging apparatus 10b according to the third embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment.
The imaging apparatus 10c includes a selection instruction unit 190 in addition to the configuration of the imaging apparatus 10b shown in
The selection instruction unit 190 instructs the selection unit 180 to select an image having a higher signal-to-noise ratio (SNR) out of the first monochrome correction image and the second monochrome correction image. For example, the selection instruction unit 190 instructs the selection unit 180 to select one of the first monochrome correction image and the second monochrome correction image in accordance with a result of analyzing the first monochrome correction image and the second monochrome correction image. In an example described below, the selection instruction unit 190 instructs the selection unit 180 to select one of the first monochrome correction image and the second monochrome correction image in accordance with a histogram of the first monochrome correction image and the second monochrome correction image. The selection instruction unit 190 is constituted by an ASIC, an FPGA, a microprocessor, and the like.
In terms of points other than the above, the configuration shown in
Details of processing in step S100 will be described. The selection instruction unit 190 generates a histogram of pixel values of pixels in the first monochrome correction image and the second monochrome correction image.
In this example, the selection instruction unit 190 generates a histogram of pixel values of a plurality of R pixels and a histogram of pixel values of a plurality of B pixels. The selection instruction unit 190 instructs the selection unit 180 to select the monochrome correction image corresponding to whichever of the R pixels and the B pixels more frequently have larger pixel values. The selection instruction unit 190 may use the captured image, i.e., the Bayer image, instead of the first monochrome correction image and the second monochrome correction image. For example, the selection instruction unit 190 generates a histogram of pixel values of a plurality of R pixels in the Bayer image and a histogram of pixel values of a plurality of B pixels in the Bayer image. The selection instruction unit 190 performs processing similar to the above on the basis of each of the histograms. In addition, the display unit 170 may be constituted independently of the imaging apparatus 10c.
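The histogram-based selection described above can be sketched as follows. The 10-bit value range follows the example imaging device described earlier; using the histogram mean as the criterion for "more frequently larger pixel values" is an assumption, since the text leaves the exact criterion open.

```python
import numpy as np

def select_by_histogram(first, second, bins=256, value_range=(0, 1024)):
    """Choose between the first and second monochrome correction
    images by comparing their pixel-value histograms: the image whose
    histogram has more mass at larger pixel values is taken as the
    higher-SNR image. The histogram-mean criterion is an assumed
    realization, not the only possible one.
    """
    def hist_mean(img):
        counts, edges = np.histogram(img, bins=bins, range=value_range)
        centers = (edges[:-1] + edges[1:]) / 2.0
        total = counts.sum()
        # Mean pixel value estimated from the histogram.
        return (counts * centers).sum() / total if total else 0.0

    return "first" if hist_mean(first) >= hist_mean(second) else "second"
```

The same comparison could be applied to R-pixel and B-pixel histograms taken directly from the Bayer image, as noted above.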
The imaging apparatus 10c according to the fourth embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment.
The selection instruction unit 190 instructs the selection unit 180 to select an image having a higher SNR out of a first monochrome correction image and a second monochrome correction image. Since a monochrome correction image having a higher SNR is displayed, a user can perform pointing more easily.
The selection instruction unit 190 instructs the selection unit 180 to select at least one of a first monochrome correction image and a second monochrome correction image in accordance with an instruction from a user. The user instruction unit 140 accepts an instruction from a user. A user inputs an instruction for selecting at least one of a first monochrome correction image and a second monochrome correction image through the user instruction unit 140. The user instruction unit 140 outputs information of an image instructed by a user out of a first monochrome correction image and a second monochrome correction image to the selection instruction unit 190. The selection instruction unit 190 instructs the selection unit 180 to select the image represented by the information output from the user instruction unit 140.
In terms of points other than the above, the configuration shown in
The display unit 170 may be constituted independently of the imaging apparatus 10d.
The imaging apparatus 10d according to the fifth embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment.
The selection instruction unit 190 instructs the selection unit 180 to select an image instructed by a user out of a first monochrome correction image and a second monochrome correction image. For this reason, a user can perform pointing on the image that the user prefers.
The imaging apparatus 10e includes a measurement unit 200 in addition to the configuration of the imaging apparatus 10d shown in
The measurement unit 200 calculates a distance of a subject on the basis of a phase difference. For example, when one arbitrary point on an image is designated by a user, the measurement unit 200 performs measurement of depth. When two arbitrary points on an image are designated by a user, the measurement unit 200 can measure the distance between the two points. The measurement unit 200 outputs a measurement result as character information of a measurement value to the superimposition unit 160. The measurement unit 200 is constituted by an ASIC, an FPGA, a microprocessor, and the like.
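The two measurement modes described above can be sketched as follows. This is a hedged illustration, not the disclosed algorithm: the pinhole back-projection, the triangulation relation Z = f · B / d, and all parameter values (focal length, pupil baseline) are assumptions introduced for the example.

```python
import math

def depth_from_phase_difference(disparity_px, focal_len_px, baseline_mm):
    """Depth of one designated point from the phase difference
    (disparity) between the two pupil images, by triangulation."""
    return focal_len_px * baseline_mm / disparity_px

def point_to_3d(u, v, depth, focal_len_px):
    """Back-project an image point (u, v) at a known depth using a
    pinhole model centered on the optical axis."""
    return (u * depth / focal_len_px, v * depth / focal_len_px, depth)

def distance_between(p1, p2):
    """Euclidean distance between two measured 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

f = 500.0  # focal length in pixels (assumed value)
# Mode 1: one designated point -> measure depth.
z = depth_from_phase_difference(disparity_px=10.0, focal_len_px=f,
                                baseline_mm=2.0)
# Mode 2: two designated points -> measure the distance between them.
a = point_to_3d(0.0, 0.0, 100.0, f)
b = point_to_3d(50.0, 0.0, 100.0, f)
d = distance_between(a, b)
```

The resulting value would then be formatted as character information and passed to the superimposition unit 160 for display.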
The superimposition unit 160 superimposes the character information of the measurement value on a selected monochrome correction image and outputs the selected monochrome correction image on which the character information of the measurement value has been superimposed to the display unit 170. The display unit 170 displays the selected monochrome correction image on which the character information of the measurement value has been superimposed. For this reason, a user can confirm a measurement result.
In terms of points other than the above, the configuration shown in
The display unit 170 may be constituted independently of the imaging apparatus 10e. The selection instruction unit 190 may instruct the selection unit 180 to select an image having a higher SNR out of a first monochrome correction image and a second monochrome correction image as with the fourth embodiment.
The imaging apparatus 10e according to the sixth embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment. A user can designate a measurement point with higher accuracy for an image whose visibility has been improved.
In the imaging apparatus 10f, the measurement unit 200 in the imaging apparatus 10e shown in
In terms of points other than the above, the configuration shown in
A Bayer image output from the imaging device 110 is input to the second demosaic processing unit 220. The second demosaic processing unit 220 generates pixel values of adjacent pixels by copying the pixel value of each pixel. In this way, an RGB image having pixel values of each color in all the pixels is generated. The RGB image includes an R image, a G image, and a B image. The second demosaic processing unit 220 in the seventh embodiment does not perform OB subtraction, but may perform OB subtraction. In a case where the second demosaic processing unit 220 performs OB subtraction, an OB subtraction value may be different from the OB subtraction value used by the demosaic processing unit 120. The second demosaic processing unit 220 outputs the generated RGB image to the second correction unit 230.
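The copy-based demosaicing described above can be sketched as follows. This is a minimal illustration under stated assumptions (an RGGB Bayer layout, even image dimensions, no interpolation, no OB subtraction); the function and variable names are not from the disclosure.

```python
def demosaic_by_copy(bayer, pattern="RGGB"):
    """Fill the R, G, and B planes by copying, for each pixel, the
    value of the same-color pixel within its 2x2 Bayer cell."""
    h, w = len(bayer), len(bayer[0])

    def color_at(y, x):
        return pattern[(y % 2) * 2 + (x % 2)]

    planes = {c: [[0] * w for _ in range(h)] for c in "RGB"}
    for y in range(h):
        for x in range(w):
            y0, x0 = y - y % 2, x - x % 2  # top-left of the 2x2 cell
            for c in "RGB":
                for dy in range(2):
                    for dx in range(2):
                        if color_at(y0 + dy, x0 + dx) == c:
                            planes[c][y][x] = bayer[y0 + dy][x0 + dx]
    return planes

bayer = [[10, 20],
         [30, 40]]  # one R G / G B cell
planes = demosaic_by_copy(bayer)
```

The three returned planes correspond to the R image, G image, and B image that form the RGB image passed to the second correction unit 230.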
The second correction unit 230 is disposed independently of the correction unit 130. The second correction unit 230 generates a third monochrome correction image and a fourth monochrome correction image. The third monochrome correction image is an image generated by correcting a value that is based on components overlapping between a first transmittance characteristic and a second transmittance characteristic for a captured image having components that are based on the first transmittance characteristic. The fourth monochrome correction image is an image generated by correcting a value that is based on components overlapping between the first transmittance characteristic and the second transmittance characteristic for the captured image having components that are based on the second transmittance characteristic. The second correction unit 230 outputs the generated third monochrome correction image and the generated fourth monochrome correction image to the measurement unit 200. The measurement unit 200 calculates a phase difference between the third monochrome correction image and the fourth monochrome correction image.
Specifically, the second correction unit 230 performs correction processing on the R image and the B image. The correction processing performed by the second correction unit 230 is similar to the correction processing performed by the correction unit 130. The second correction unit 230 reduces information of the area φGB in
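The overlap correction applied to the R image and the B image can be sketched as follows. This is a hedged sketch: the use of the G image as an estimate of the overlapping component, the coefficients k_r and k_b, and the clipping at zero are assumptions; in practice such coefficients would follow from the measured overlap between the first transmittance characteristic and the second transmittance characteristic.

```python
def correct_overlap(channel, g_image, k):
    """Subtract a k-weighted G component from a channel image and
    clip negative results to zero."""
    return [[max(c - k * g, 0) for c, g in zip(row_c, row_g)]
            for row_c, row_g in zip(channel, g_image)]

r_image = [[100, 120], [110, 130]]
b_image = [[80, 90], [85, 95]]
g_image = [[40, 40], [40, 40]]
k_r, k_b = 0.5, 0.5  # assumed overlap coefficients

third_image = correct_overlap(r_image, g_image, k_r)   # R-based
fourth_image = correct_overlap(b_image, g_image, k_b)  # B-based
```

The corrected third and fourth monochrome correction images are then output to the measurement unit 200.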
The measurement unit 200 is constituted similarly to the measurement unit 200 in the imaging apparatus 10e shown in
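The phase difference calculated by the measurement unit 200 can be sketched as one-dimensional block matching along the parallax direction. This is an illustrative sketch only; the window size, search range, sum-of-absolute-differences (SAD) cost, and out-of-range penalty are assumptions, not taken from the disclosure.

```python
def phase_difference(row_a, row_b, x, window=2, search=5):
    """Return the horizontal shift of row_b that best matches row_a
    around position x, by minimizing the SAD over a small window."""
    best_shift, best_sad = 0, float("inf")
    for s in range(-search, search + 1):
        sad = 0
        for dx in range(-window, window + 1):
            ia, ib = x + dx, x + dx + s
            if 0 <= ia < len(row_a) and 0 <= ib < len(row_b):
                sad += abs(row_a[ia] - row_b[ib])
            else:
                sad += 255  # penalize out-of-range samples
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

# row_b equals row_a shifted right by 2 pixels -> detected shift is 2.
row_a = [0, 0, 10, 50, 10, 0, 0, 0, 0, 0]
row_b = [0, 0, 0, 0, 10, 50, 10, 0, 0, 0]
shift = phase_difference(row_a, row_b, x=3)
```

The detected shift corresponds to the disparity from which depth is triangulated, as in the sixth embodiment.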
The display unit 170 may be constituted independently of the imaging apparatus 10f. The selection instruction unit 190 may instruct the selection unit 180 to select an image having a higher SNR out of a first monochrome correction image and a second monochrome correction image as with the fourth embodiment.
The imaging apparatus 10f according to the seventh embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment. A user can designate a measurement point with higher accuracy for an image whose visibility has been improved.
The second demosaic processing unit 220 sets an OB subtraction value (zero in the above-described example) in accordance with measurement processing performed by the measurement unit 200. For this reason, OB subtraction suitable for measurement can be performed and measurement accuracy is improved. In addition, the demosaic processing unit 120 sets an OB subtraction value in accordance with a black level. For this reason, a suitable black level can be set and image quality is improved.
The imaging apparatus 10g includes a processed image generation unit 240 in addition to the configuration of the imaging apparatus 10e shown in
The processed image generation unit 240 constitutes an image processing unit. The processed image generation unit 240 is constituted by an ASIC, an FPGA, a microprocessor, and the like. The processed image generation unit 240 generates a processed image by performing at least one of enlargement processing, edge extraction processing, edge enhancement processing, and noise reduction processing on at least part of the monochrome correction image output from the selection unit 180. The processed image generation unit 240 may generate a processed image by performing enlargement processing and at least one of edge extraction processing, edge enhancement processing, and noise reduction processing on at least part of the monochrome correction image output from the selection unit 180.
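Two of the processing operations named above, enlargement processing and noise reduction processing, can be sketched as follows for a grayscale image held as nested lists. This is a minimal illustration, not the disclosed implementation; integer-factor pixel repetition and a 3x3 box filter are assumed choices.

```python
def enlarge(image, factor=2):
    """Enlargement processing: repeat each pixel factor x factor times."""
    return [[px for px in row for _ in range(factor)]
            for row in image for _ in range(factor)]

def box_blur(image):
    """Noise reduction processing: 3x3 mean filter; border pixels
    average only the available neighbors."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

img = [[0, 100], [100, 0]]
big = enlarge(img)       # 4x4 enlarged image
smooth = box_blur(img)   # every neighborhood covers all four pixels
```

Edge extraction and edge enhancement could be sketched analogously with gradient and sharpening kernels.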
The superimposition unit 160 superimposes a processed image on the selected monochrome correction image if necessary and outputs the selected monochrome correction image on which the processed image is superimposed to the display unit 170. The processed image may be directly output from the processed image generation unit 240 to the display unit 170.
In terms of points other than the above, the configuration shown in
For example, the seven image processing methods shown in
In
The display unit 170 may be constituted independently of the imaging apparatus 10g. The selection instruction unit 190 may instruct the selection unit 180 to select an image having a higher SNR out of a first monochrome correction image and a second monochrome correction image as with the fourth embodiment.
The imaging apparatus 10g according to the eighth embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment. Since a processed image is displayed, a user can designate a measurement point with higher accuracy.
The selection unit 180 outputs an image selected as a selected monochrome correction image out of a first monochrome correction image and a second monochrome correction image to the processed image generation unit 240. In addition, the selection unit 180 outputs an image not selected as the selected monochrome correction image out of the first monochrome correction image and the second monochrome correction image to the superimposition unit 160. When the selection unit 180 selects the first monochrome correction image as the selected monochrome correction image, the second monochrome correction image is output from the selection unit 180 to the superimposition unit 160. When the selection unit 180 selects the second monochrome correction image as the selected monochrome correction image, the first monochrome correction image is output from the selection unit 180 to the superimposition unit 160.
The superimposition unit 160 superimposes a processed image on the selected monochrome correction image. In addition, the superimposition unit 160 generates an image in which the selected monochrome correction image on which the processed image is superimposed and the monochrome correction image output from the selection unit 180 are arranged, and outputs the generated image to the display unit 170. The display unit 170 arranges and displays the selected monochrome correction image on which the processed image is superimposed and the monochrome correction image.
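The arrangement performed by the superimposition unit 160 can be sketched as placing the two images side by side row-wise. This is an illustrative sketch; it assumes both images have equal height, and the variable names are not from the disclosure.

```python
def arrange_side_by_side(left, right):
    """Concatenate two equal-height images horizontally."""
    return [row_l + row_r for row_l, row_r in zip(left, right)]

selected = [[1, 2], [3, 4]]      # selected monochrome correction image
unselected = [[5, 6], [7, 8]]    # image not selected
display = arrange_side_by_side(selected, unselected)
```

The arranged image is then output to the display unit 170.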
In terms of points other than the above, the configuration shown in
The display unit 170 may be constituted independently of the imaging apparatus 10h. The selection instruction unit 190 may instruct the selection unit 180 to select an image having a higher SNR out of a first monochrome correction image and a second monochrome correction image as with the fourth embodiment. The processed image generation unit 240 may perform image processing on a selected monochrome correction image and an image not selected as the selected monochrome correction image by the selection unit 180.
The imaging apparatus 10h according to the ninth embodiment can generate an image in which double images due to color shift are suppressed, visibility is improved, and pointing thereon is easier as with the imaging apparatus 10 according to the first embodiment. Since two monochrome correction images are displayed, a user can confirm a result of designating a measurement point.
While preferred embodiments of the invention have been described and shown above, it should be understood that these are examples of the invention and are not to be considered as limiting. Additions, omissions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description, and is only limited by the scope of the appended claims.
The present application is a continuation application based on International Patent Application No. PCT/JP2017/015706 filed on Apr. 19, 2017, the content of which is incorporated herein by reference.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2017/015706 | Apr 2017 | US
Child | 16599223 | | US