The present disclosure relates to an examination apparatus examining both eyes of a subject.
Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2018-509203 discusses an examination apparatus that examines a fixation state of a subject by projecting projection light to retinas of both eyes of the subject, and guiding reflected light from the retinas to different imaging devices by an optical system. In the examination apparatus discussed in Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2018-509203, it is necessary to arrange two imaging lenses for guiding the reflected light from the retinas to the two imaging devices. Therefore, it is difficult to simplify the optical system.
To simplify the optical system, it is conceivable to apply an imaging device in which a plurality of photoelectric conversion units is provided in one pixel, as discussed in Japanese Patent Application Laid-Open No. 2009-122524, to the examination apparatus examining both eyes of the subject.
However, in a case where the imaging device discussed in Japanese Patent Application Laid-Open No. 2009-122524 is applied to the examination apparatus, an error may occur in a photoelectrically-converted signal due to crosstalk of the signal caused by a neighboring photoelectric conversion unit.
The present disclosure is generally directed to, in an examination apparatus detecting light reflected from both eyes, reducing influence of errors generated by crosstalk of signals obtained by an imaging device.
According to an aspect of the present disclosure, an examination apparatus detecting first reflected light from a first eye of a subject and second reflected light from a second eye of the subject includes an imaging device and a processing unit. The imaging device includes pixels each including a first photoelectric conversion unit and a second photoelectric conversion unit.
The processing unit includes a unit configured to acquire a first signal based on the first reflected light entering the first photoelectric conversion units and a second signal based on the second reflected light entering the second photoelectric conversion units. The processing unit further includes a unit configured to acquire crosstalk amounts of the first and second signals based on information about positions of the first and second eyes, and a unit configured to generate correction signals based on the first and second signals and the crosstalk amounts.
Further features of the present disclosure will become apparent from the following description of embodiments with reference to the attached drawings.
Some preferred embodiments of the present disclosure will be described below with reference to the drawings. The drawings may be drawn at scales different from the actual scales, for convenience. In the drawings, the same members are denoted by the same reference numerals, and repetitive description is omitted. The following embodiments do not limit the disclosure according to the claims. Although a plurality of features is described in the embodiments, not all of the plurality of features are necessarily essential for the disclosure, and the plurality of features may be optionally combined.
An examination apparatus OS1 according to a first embodiment will be described with reference to
The illumination unit 101 includes a first light source and an illumination optical system, and irradiates the eyes of the subject (examinee) with light from the first light source through the illumination optical system. The light source of the illumination unit 101 according to the present embodiment includes, for example, a display member such as a liquid crystal display or an organic electroluminescent (EL) display, or one or a plurality of point light sources such as laser diodes.
The illumination unit 101 according to the present embodiment desirably satisfies the following conditional inequality (1) in order to realize an examination with high accuracy and to reduce loads on the eyes of the subject:
where λ [nm] is a wavelength of the light emitted from the illumination unit 101.
When the wavelength λ becomes greater than the upper limit of the conditional inequality (1), the absorptance of light by water inside a living body increases, and the light quantity of reflected light from the retinas may be reduced. When the wavelength λ becomes lower than the lower limit of the conditional inequality (1), the absorptance of light by hemoglobin in the living body increases, and the light quantity of reflected light from the retinas may be reduced.
Further, the illumination unit 101 preferably satisfies the following conditional inequality (1a), and more preferably satisfies the conditional inequality (1b):
The optical system 102 guides the reflected light from the retinas of the eyes of the subject to the imaging device 103. An optical device relating to polarization is preferably used for the optical system 102. When the optical device relating to polarization is used for the optical system 102, information about birefringence of the retinas can be acquired. The optical system 102 can simultaneously guide the reflected light (first reflected light RY1 and second reflected light RY2) from right and left eyes of the subject to the imaging device 103. An optical device may be shared between the optical system 102 and the illumination optical system, as necessary. Such a configuration makes it possible to downsize the examination apparatus.
The imaging device 103 includes a plurality of pixels IP, and receives the first reflected light RY1 and the second reflected light RY2 guided through the optical system 102. Each of the pixels IP includes a microlens ML, a first photoelectric conversion unit PD1, and a second photoelectric conversion unit PD2.
The first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 respectively convert the guided first reflected light RY1 and the guided second reflected light RY2 into electric signals (signals). The signals photoelectrically converted by the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 carry at least one of image information, luminance information, and information on polarization states. An on-chip microlens is preferably used for the microlens ML. The imaging device 103 according to the present embodiment is a photoelectric conversion device such as a charge-coupled device (CCD) sensor or a complementary metal-oxide semiconductor (CMOS) sensor.
The measurement unit 104 acquires information about a distance to the subject, as necessary. The information about the distance to the subject is information about, for example, a distance from the examination apparatus OS1 to the subject or a distance from a light receiving surface of the imaging device 103 to the subject. Alternatively, as the distance to the subject, a distance from the examination apparatus OS1 to a face or both eyes of the subject may be measured. The measurement unit 104 can acquire the information about the distance to the subject by using a known method. Examples of the method include a method based on a plurality of images including parallax, and a method of acquiring reflection (backscattering) of a laser beam.
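The parallax-based distance measurement mentioned above can be sketched with the standard pinhole-stereo relation Z = f·B/d. The focal length, baseline, and disparity values below are hypothetical and serve only to illustrate the calculation, not parameters of the measurement unit 104:

```python
def distance_from_parallax(focal_length_px: float, baseline_m: float,
                           disparity_px: float) -> float:
    """Estimate subject distance from stereo parallax.

    Standard pinhole-stereo relation: Z = f * B / d, where f is the
    focal length in pixels, B the baseline between the two viewpoints
    in meters, and d the disparity (parallax) in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical values for illustration only.
z = distance_from_parallax(focal_length_px=1400.0, baseline_m=0.06,
                           disparity_px=120.0)
print(round(z, 3))  # 0.7 (meters)
```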
The examination apparatus OS1 preferably includes a video image display unit that can present the subject to a user of the examination apparatus OS1. A video image display device used for the video image display unit is, for example, a liquid crystal display or a projector. The user may perform examination and the like while checking an image under processing via the video image display device. The video image display device can display video image information MT described below. Such a configuration enables the user of the examination apparatus OS1 to properly align a relative position of the examination apparatus OS1 and the subject. To temporally continuously display the subject, a moving image may be captured by an imaging unit.
The processing unit 105 includes a first acquisition unit 105a, a second acquisition unit 105b, a third acquisition unit 105c, a processing unit 105d, and an estimation unit 105e. The processing unit 105 generates correction signals based on the signals obtained by photoelectric conversion and the crosstalk amounts of the signals, and measures the fixation state of the subject based on the correction signals. The processing unit 105 performs a step of acquiring signals based on the first reflected light RY1 entering the first photoelectric conversion units PD1 and the second reflected light RY2 entering the second photoelectric conversion units PD2, and a step of acquiring the crosstalk amounts of the signals based on information about positions of the first and second eyes. The processing unit 105 further performs a step of generating the correction signals based on the signals and the crosstalk amounts. The functions of the processing unit 105 can be implemented by one or more processors such as a central processing unit (CPU) inside the examination apparatus OS1. Details of the first acquisition unit 105a, the second acquisition unit 105b, the third acquisition unit 105c, the processing unit 105d, and the estimation unit 105e will be described below.
Relationship of the optical system 102 and the imaging device 103 with the eyes of the subject will be described with reference to
The examination apparatus OS1 includes the optical system 102 and the imaging device 103, and examines the fixation state of the subject. In
In
A solid line in
An intermediate image IF is an image of retinas formed by the first eye EY1 and the second eye EY2 of the subject. At this time, the first retina RE1 and the second retina RE2 are conjugate with the light receiving surface of the imaging device 103. The first reflected light RY1 is condensed by a first crystalline lens PP1 corresponding to the first eye EY1 to form the intermediate image IF, and passes through a first pupil AP1 in a first aperture HO1 of the diaphragm AP to form an image on the light receiving surface of the imaging device 103. Likewise, the second reflected light RY2 is condensed by a second crystalline lens PP2 corresponding to the second eye EY2 to form the intermediate image IF, and passes through a second pupil AP2 in a second aperture HO2 of the diaphragm AP to form an image on the light receiving surface of the imaging device 103.
In the present embodiment, a light flux of the first reflected light RY1 and a light flux of the second reflected light RY2 do not overlap each other when passing through the pupils, and the first pupil AP1 and the second pupil AP2 are present in regions different from each other. The first crystalline lens PP1 (or pupil of first eye EY1) and the second crystalline lens PP2 (or pupil of second eye EY2) are respectively conjugate with the first pupil AP1 and the second pupil AP2. The light flux of the first reflected light RY1 and the light flux of the second reflected light RY2 may partially overlap each other, and the region of the first pupil AP1 and the region of the second pupil AP2 may partially overlap each other. At this time, the diaphragm AP may include one aperture. Further, the optical system 102 can simultaneously guide the first reflected light RY1 and the second reflected light RY2 to the corresponding photoelectric conversion units. In other words, the examination apparatus OS1 according to the present embodiment can simultaneously acquire information on both eyes of the subject.
In
The optical system 102 condenses the first reflected light RY1 and the second reflected light RY2 to the same (common) microlens ML. At this time, the first reflected light RY1 and the second reflected light RY2 enter the microlens ML at incident angles different from each other because the first reflected light RY1 and the second reflected light RY2 pass through the pupils different from each other. More specifically, relative to a normal (alternate long and short dash line) of the imaging surface of the imaging device 103, one of the first reflected light RY1 and the second reflected light RY2 enters the microlens ML from above, and the other reflected light enters the microlens ML from below.
A cross-section including an optical axis of the microlens ML may be regarded as the normal. In other words, in the present embodiment, signs of the incident angles of the first reflected light RY1 and the second reflected light RY2 to the microlens are different from each other.
The microlens ML condenses (causes incidence of) the first reflected light RY1 from the first pupil AP1 to the first photoelectric conversion unit PD1. Likewise, the microlens ML condenses (causes incidence of) the second reflected light RY2 from the second pupil AP2 to the second photoelectric conversion unit PD2. In other words, the optical system 102 causes the first reflected light RY1 and the second reflected light RY2 to form images at the same conjugate position (microlens ML) for the first retina RE1 and the second retina RE2. Further, the microlens ML can guide the first reflected light RY1 to the first photoelectric conversion unit PD1, and guide the second reflected light RY2 to the second photoelectric conversion unit PD2.
The optical system 102 according to the present embodiment condenses the first reflected light RY1 and the second reflected light RY2 having passed through the different pupils to one imaging device 103. Such a configuration makes it possible to guide the first reflected light RY1 and the second reflected light RY2 to the imaging device 103 by one imaging lens. This makes it possible to simplify the optical system.
When an axis passing through the center of a lens included in the optical system 102 is defined as the optical axis, the examination apparatus OS1 can be installed such that the optical axis and a line segment connecting the centers of the first eye EY1 and the second eye EY2 are positioned on the same plane. This makes it possible to enhance accuracy in measurement of the fixation state. At this time, the video image display device preferably presents the relative position of the examination apparatus OS1 and the subject to the user based on image information. In the examination, a line segment connecting the centers of the first and second eyes (or their pupils or crystalline lenses), a line segment connecting the centers of the first and second pupils, and a line segment connecting the centers of the first and second photoelectric conversion units of one pixel are preferably positioned on the same plane.
The optical axis of the microlens ML positioned on a peripheral portion of the imaging device 103 may be made eccentric from a middle position between the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2. The optical axis of the microlens ML may be made eccentric to a center side in parallel, from the middle position between the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2. An eccentric amount may be varied depending on a region of the imaging device 103.
A light ray that enters the center of the microlens ML of an arbitrary pixel IP and is guided to a position between the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 of that pixel IP is regarded as the reference light ray of the pixel IP. The reference light ray may differ depending on the position of the pixel IP in the imaging device 103. At this time, the incident angle of the reference light ray entering the microlens ML of a pixel IP positioned on the peripheral portion of the imaging device 103 is preferably greater (more inclined) than the incident angle of the reference light ray entering the microlens ML of a pixel IP positioned at the center of the imaging device 103. The reference light ray entering the microlens ML positioned at the center of the imaging device 103 is parallel to the normal of the imaging surface of the imaging device 103.
In
The light rays RY11, RY12, and RY13 are condensed to the microlens ML through the optical device LE2. Incident angles of the light rays RY11, RY12, and RY13 to the microlens ML are denoted by θ11, θ12, and θ13.
In
Characteristics of the imaging device 103 according to the present embodiment will now be described.
In the present embodiment, the signs of the incident angles of the first reflected light RY1 and the second reflected light RY2 to the microlens ML are different from each other. However, in
The crosstalk in the present embodiment includes at least one of optical crosstalk generated by stray light caused by reflection and scattering in the microlens, a wiring layer, and the like, and electric crosstalk generated by movement of charges to an adjacent photoelectric conversion unit.
The signal intensities of the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 are respectively denoted by PD1S and PD2S, and the crosstalk amounts generated in the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 are respectively denoted by CT1 and CT2. In addition, when the corrected signal intensities (correction signal intensities) corresponding to the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 are respectively denoted by PD1S′ and PD2S′, the relationship can be represented by expressions (2a) and (2b):
As described above, in the present embodiment, by using the above-described expressions (2a) and (2b), correction signals in which the influence of the crosstalk generated between the photoelectric conversion units is suppressed can be acquired based on the signal intensities of the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2 and the crosstalk amounts.
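Since expressions (2a) and (2b) are not reproduced in this excerpt, the following is only a sketch under an assumed linear mixing model, in which CT1 and CT2 are treated as mutual leak fractions between the two photoelectric conversion units; the publication's actual expressions may differ:

```python
def correct_crosstalk(pd1s, pd2s, ct1, ct2):
    """Remove mutual linear crosstalk between two photodiode signals.

    Assumed model (a sketch, not the publication's exact expressions
    (2a)/(2b)): a fraction ct1 of the true PD1 signal leaks into PD2,
    and a fraction ct2 of the true PD2 signal leaks into PD1:
        pd1s = s1 + ct2 * s2
        pd2s = s2 + ct1 * s1
    Inverting this 2x2 mixing recovers the corrected intensities.
    """
    det = 1.0 - ct1 * ct2
    pd1s_corr = (pd1s - ct2 * pd2s) / det
    pd2s_corr = (pd2s - ct1 * pd1s) / det
    return pd1s_corr, pd2s_corr

# Round trip: mix known true signals, then unmix them.
s1, s2, ct1, ct2 = 100.0, 80.0, 0.05, 0.08
m1 = s1 + ct2 * s2  # measured PD1 intensity including leakage
m2 = s2 + ct1 * s1  # measured PD2 intensity including leakage
c1, c2 = correct_crosstalk(m1, m2, ct1, ct2)
print(round(c1, 6), round(c2, 6))  # 100.0 80.0
```

The round trip recovers the original signal intensities, illustrating that a known mutual-leakage model can be inverted exactly.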
In the relationship of the signal intensity to the incident angle to the imaging device 103 as illustrated in
Therefore, when θ [°] denotes the angle (incident angle), relative to the optical axis, of the light rays of the first reflected light RY1 and the second reflected light RY2 condensed onto the imaging device 103 via the optical system 102, the incident angle of the light ray entering the imaging device 103 according to the present embodiment preferably satisfies the following conditional inequality (3):
In a case where the angle θ becomes greater than an upper limit of the conditional inequality (3), the signal intensity obtained by each of the photoelectric conversion units is reduced. Therefore, a signal-to-noise ratio (SN ratio) of the signal acquired by each of the photoelectric conversion units is reduced, which may deteriorate examination accuracy.
Further, the incident angle of the light ray entering the imaging device 103 preferably satisfies the following conditional inequality (3a), and more preferably satisfies the conditional inequality (3b):
Note that a relative inclination to the reference light ray varied for each of the microlens ML of the imaging device 103 may be regarded as the incident angle.
The incident angle varies based on the image-side F-number of the optical system. The image-side F-number can be adjusted by the diaphragm AP of the optical system 102, or by arranging a diaphragm (first diaphragm) that shields the most off-axis light flux (the light flux reaching the most off-axis image height) separately from the diaphragm AP. It is sufficient for the first diaphragm to limit an off-axis light flux (i.e., to shield part of the off-axis light flux).
A method of estimating the positions of the pupils will be described with reference to
A middle point CNT in
As illustrated in
As illustrated in
Further, as illustrated in
In
Table 1 illustrates an example of a conversion table for converting the information about the positions of the first and second eyes to the crosstalk amounts according to the present embodiment. Table 1 is an example of a conversion table for deriving the crosstalk amounts from the distances from the middle point CNT to the eyes according to the present embodiment.
Table 1 illustrates relationship of the position of the corresponding pupil, the incident angle to the imaging device, and the crosstalk amount relative to the distance from the middle point CNT to one eye. The number of divisions of the conversion table is not limited to the number in Table 1. For example, the values, and the intervals and the number of divisions of the distances from the middle point CNT to the first and second eyes may be changed based on the characteristics of the optical system 102, the characteristics of the imaging device 103, and the like. As necessary, the conversion table may be changed based on the wavelength of the light emitted from the illumination unit 101, the incident angle to the pixel positioned on the peripheral portion of the imaging device 103, the crosstalk amount between adjacent pixels, and characteristics of the first photoelectric conversion unit PD1 and the second photoelectric conversion unit PD2.
The crosstalk amounts may be acquired solely from the distances from the middle point CNT to the first and second eyes without determining the distances from the optical axis to the pupils and the incident angle. At this time, the conversion table determined based on the characteristics of the optical system 102, the characteristics of the imaging device 103, and the like can be used. Such a configuration makes it possible to reduce a processing amount of the calculation by the processing unit 105.
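A table lookup of this kind, with linear interpolation between entries, can be sketched as follows. The distances and crosstalk values are hypothetical placeholders, not the values of the publication's Table 1:

```python
import bisect

# Hypothetical conversion table (Table 1 is not reproduced here):
# distance from the middle point CNT to one eye [mm] -> crosstalk
# amount [arbitrary signal units].
DISTANCES_MM = [28.0, 30.0, 32.0, 34.0, 36.0]
CROSSTALK = [0.020, 0.028, 0.037, 0.049, 0.064]

def crosstalk_from_distance(d_mm: float) -> float:
    """Linearly interpolate the crosstalk amount for a given
    eye-to-CNT distance, clamping outside the table range."""
    if d_mm <= DISTANCES_MM[0]:
        return CROSSTALK[0]
    if d_mm >= DISTANCES_MM[-1]:
        return CROSSTALK[-1]
    i = bisect.bisect_right(DISTANCES_MM, d_mm)
    x0, x1 = DISTANCES_MM[i - 1], DISTANCES_MM[i]
    y0, y1 = CROSSTALK[i - 1], CROSSTALK[i]
    return y0 + (y1 - y0) * (d_mm - x0) / (x1 - x0)

print(round(crosstalk_from_distance(31.0), 6))  # 0.0325
```

Interpolating in this way gives continuous crosstalk values from a coarse table, which matches the text's note that the number of table divisions can be kept small.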
The examination apparatus OS1 according to the present embodiment may include a storage unit storing the conversion table.
In step S102, the measurement unit 104 measures the position of the subject. The measurement of the position of the subject by the measurement unit 104 can be performed using a known image processing method or ranging method. At this time, the signals acquired by the first photoelectric conversion units PD1 and the second photoelectric conversion units PD2 may be used.
In step S103, the second acquisition unit 105b acquires the information about the positions of the first and second eyes. In the present embodiment, as the information about the positions of the first and second eyes, the distances from the middle point CNT to the center of the pupil of the first eye EY1 and to the center of the pupil of the second eye EY2 are acquired.
The information about the positions of the first and second eyes is not limited thereto. For example, the distances from the imaging device 103 to the first and second eyes may be used as the information about the positions of the first and second eyes.
The information about the positions of the first and second eyes may be acquired using the above-described information about the distance to the subject. When the information about the positions of the first and second eyes is acquired using the information about the distance to the subject, it is possible to enhance accuracy of the information about the positions of the first and second eyes. The information about the positions of the first and second eyes is acquired by, for example, image processing based on the image information included in the signals converted by the photoelectric conversion units. Note that the acquisition method is not limited thereto.
The first photoelectric conversion units PD1 and the second photoelectric conversion units PD2 of the pixels IP are preferably disposed in parallel to the line segment connecting the first eye EY1 and the second eye EY2. Such a configuration makes it possible to enhance detection accuracy of the information about the positions of the first and second eyes by the second acquisition unit 105b.
In a case where the information about the distance to the subject is not used, the processing in step S102 may not be performed. The processing in step S102 and step S103 may be processed as one step.
In step S104, the estimation unit 105e estimates the positions of the pupils on the diaphragm AP of the optical system 102 or the incident angles of the first reflected light RY1 and the second reflected light RY2 to the imaging device 103 based on the information about the positions of the first and second eyes as necessary. The estimation unit 105e may estimate both of the positions of the pupils on the diaphragm AP of the optical system 102 and the incident angles of the first reflected light RY1 and the second reflected light RY2 to the imaging device 103.
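Under simplifying paraxial assumptions (the pupil of the eye conjugate with the diaphragm AP, as stated earlier), the estimation in step S104 might be sketched as follows; the magnification and distance parameters are hypothetical, not values of the apparatus:

```python
import math

def estimate_pupil_and_angle(eye_offset_mm: float, pupil_mag: float,
                             ap_to_sensor_mm: float) -> tuple:
    """Sketch of step S104 under thin-lens, paraxial assumptions
    (all parameter names are hypothetical): since the eye's pupil and
    the diaphragm AP are conjugate, the pupil position on AP is the
    eye offset scaled by the pupil magnification, and the chief ray's
    incident angle on the imaging device follows from that offset and
    the AP-to-sensor distance."""
    pupil_offset_mm = eye_offset_mm * pupil_mag
    angle_deg = math.degrees(math.atan2(pupil_offset_mm, ap_to_sensor_mm))
    return pupil_offset_mm, angle_deg

off, ang = estimate_pupil_and_angle(eye_offset_mm=32.0, pupil_mag=0.1,
                                    ap_to_sensor_mm=40.0)
print(round(off, 3), round(ang, 2))  # 3.2 4.57
```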
In step S105, the third acquisition unit 105c acquires the crosstalk amounts generated in the first photoelectric conversion units PD1 and the second photoelectric conversion units PD2 based on the information about the positions of the first and second eyes. In the present embodiment, the crosstalk amounts are acquired using the above-described conversion table. Note that the acquisition method is not limited thereto. The crosstalk amounts may be acquired using, for example, a function. Such a configuration makes it possible to determine the crosstalk amounts as continuous values without increasing a capacity of the conversion table stored in the storage unit. Therefore, it is possible to enhance accuracy of the calculation for determining the crosstalk amounts.
In step S106, the processing unit 105d acquires the correction signals based on the signals acquired by the first acquisition unit 105a and the crosstalk amounts acquired by the third acquisition unit 105c. At this time, the correction signals can be calculated using the above-described expressions (2a) and (2b). Such a configuration makes it possible to acquire signals (correction signals) that are reduced in influence of the crosstalk from the signals obtained by the photoelectric conversion. The processing unit 105 (or processing unit 105d) acquires at least one of the image information, the luminance information, and the information on the polarization states based on the correction signals. Since the correction signals reduced in influence of the crosstalk are acquired from the signals corresponding to the reflected light from the first eye EY1 and the second eye EY2, it is possible to enhance the examination accuracy of the fixation state of the subject.
In step S107, the processing unit 105 performs processing for examining (acquiring) the fixation state of the subject based on the correction signals. The examination of the fixation state based on the information acquired using the optical system and the imaging device can be performed by using, for example, a method discussed in Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2018-509203.
The processing in the flowchart according to the present embodiment is not limited thereto; for example, the calculations for the eyes EY1 and EY2 may be performed sequentially in an arbitrary order.
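The flow of steps S103, S105, and S106 described above might be orchestrated as in the following sketch. The callables and the simple subtractive correction are assumed stand-ins for the second acquisition unit 105b, the third acquisition unit 105c, and the processing unit 105d, not the publication's actual processing:

```python
from typing import Callable, Tuple

def process_frame(
    pd1s: float, pd2s: float,
    eye_distances_mm: Callable[[], Tuple[float, float]],
    crosstalk_of: Callable[[float], float],
    correct: Callable[[float, float, float, float], Tuple[float, float]],
) -> Tuple[float, float]:
    """One hypothetical pass through steps S103, S105, and S106;
    the optional estimation step S104 and the fixation examination
    in step S107 are omitted from this sketch."""
    d1, d2 = eye_distances_mm()                    # S103: eye positions
    ct1, ct2 = crosstalk_of(d1), crosstalk_of(d2)  # S105: crosstalk amounts
    return correct(pd1s, pd2s, ct1, ct2)           # S106: correction signals

# Illustration with trivial stand-ins: a constant crosstalk amount
# and a subtractive correction (assumed forms for demonstration).
result = process_frame(
    104.0, 85.0,
    eye_distances_mm=lambda: (30.0, 30.0),
    crosstalk_of=lambda d: 4.0,
    correct=lambda a, b, c1, c2: (a - c1, b - c2),
)
print(result)  # (100.0, 81.0)
```

Injecting the acquisition and correction steps as callables mirrors the unit structure of the processing unit 105 and keeps each step separately testable.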
In place of the diaphragm AP, an optical device having an effect similar to that of the diaphragm AP may be disposed. More specifically, a light shielding film may be provided on an optical plane of the optical device to form the first aperture HO1 and the second aperture HO2. Using a light shielding member at the boundary between the first aperture HO1 and the second aperture HO2 makes it possible to effectively suppress unnecessary light.
Further, the examination apparatus OS2 according to the present embodiment includes a second light source. The second light source is a light source (fixation lamp) emitting light that directs the visual lines of the subject to the examination apparatus OS2. As the second light source, a display member such as a liquid crystal display or an organic EL display, or one or a plurality of point light sources such as laser diodes may be disposed. The wavelength of the light emitted from the second light source is preferably within a visible range visually recognizable by the subject during the examination. Such a configuration makes it possible to cause the subject to pay attention to (turn their eyes toward) the examination apparatus OS2, and to perform the examination more rapidly.
The optical system 202 may include a beam splitter as necessary. The beam splitter according to the present embodiment reflects the light emitted from the second light source to the object side. On the other hand, the beam splitter allows the reflected light reflected by the retinas to pass therethrough. When the beam splitter is used, the optical system guiding the light emitted from the second light source and the optical system (imaging optical system) guiding the first reflected light RY1 and the second reflected light RY2 to the imaging device 103 can be partially made common. This makes it possible to downsize the optical system 202 and the examination apparatus OS2.
In the present embodiment, as the polarization element changing the polarization states, a ¼ wavelength plate FW and a polarization diffraction element PG are used. The ¼ wavelength plate FW separates the first reflected light RY1 into a light flux RY1R in a first phase state and a light flux RY1L in a second phase state. Likewise, the ¼ wavelength plate FW separates the second reflected light RY2 into a light flux RY2R in the first phase state and a light flux RY2L in the second phase state.
The processing unit 105 according to the present embodiment can measure birefringent properties of the retinas from change of the polarization states of the illumination light to the retinas and the reflected light from the retinas, and can separate the acquired signals into information on a plurality of polarization states for each of the right and left eyes.
In the present embodiment, the light flux in the first phase state and the light flux in the second phase state are light fluxes having a phase difference of 90 degrees from each other. The polarization element is not limited thereto, and a ⅛ wavelength plate or a ½ wavelength plate may be used. The light entering the polarization diffraction element PG is diffracted by the polarization diffraction element PG depending on the polarization state of the light. The reflected light from the retinas is detected using the polarization element, which makes it possible to measure the birefringent properties at the reflected positions.
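The separation performed by the ¼ wavelength plate can be illustrated with textbook Jones calculus. The matrix below models an ideal quarter-wave plate, not the specific elements FW and PG of the apparatus; it shows that the two circular polarization states emerge as mutually orthogonal states, which a polarization grating can then diffract into different orders:

```python
import numpy as np

# Jones vectors for right- and left-circular polarization.
RCP = np.array([1.0, -1.0j]) / np.sqrt(2.0)
LCP = np.array([1.0,  1.0j]) / np.sqrt(2.0)

def quarter_wave_plate(axis_deg: float) -> np.ndarray:
    """Jones matrix of an ideal quarter-wave plate with its fast axis
    at axis_deg to the x axis (a textbook model, not the actual FW)."""
    t = np.deg2rad(axis_deg)
    c, s = np.cos(t), np.sin(t)
    r = np.array([[c, -s], [s, c]])
    retarder = np.array([[1.0, 0.0], [0.0, 1.0j]])  # quarter-wave retardance
    return r @ retarder @ r.T

qwp = quarter_wave_plate(45.0)
out_r = qwp @ RCP  # first phase state after the plate
out_l = qwp @ LCP  # second phase state after the plate
# The plate is unitary, so the two output states stay orthogonal and
# can be routed to different diffraction orders by a polarization grating.
print(round(abs(np.vdot(out_r, out_l)), 9))  # 0.0
```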
In the present embodiment, the first reflected light RY1 and the second reflected light RY2 having passed through the different pupils can be condensed to one imaging device 103, and the first reflected light RY1 and the second reflected light RY2 can be photoelectrically converted by the imaging device 103 alone, as in the first embodiment. Therefore, the optical system can be simplified. Further, since the polarization element is disposed in the optical system 202, it is possible to change the specific polarization states of the first reflected light RY1 and the second reflected light RY2. Accordingly, the birefringent properties of the retinas can be measured, and the fixation state of the eyes can be examined with higher accuracy.
In
Although the preferred embodiments of the present disclosure are described above, the present disclosure is not limited to these embodiments, and may be variously modified and changed within the scope of the gist.
Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of priority from Japanese Patent Application No. 2023-036747, filed Mar. 9, 2023, which is hereby incorporated by reference herein in its entirety.