The present disclosure relates to an observation device, an observation method, and an observation system.
Hitherto, there has been proposed, as a small and low-cost microscope, a lensless microscope (also called a "lensfree microscope") that does not use an optical lens. Such a lensless microscope includes an image sensor and a coherent light source. In the lensless microscope, the coherent light source emits light, and a plurality of inline holograms, formed by interference between light diffracted by an observation target object such as a biomaterial and light traveling directly from the coherent light source, are photographed while changing a condition such as a distance or a wavelength. After that, an amplitude image and a phase image of the observation target object are reconstructed by light propagation calculation, and those images are provided to a user.
In such a lensless microscope, hitherto, a combination of a light emitting diode (LED) and a space aperture (e.g., pinhole or single-core optical fiber) has been used as the coherent light source. For example, NPL 1 described below discloses a lensless microscope using a coherent light source that is a combination of a light emitting diode and a pinhole.
However, in the combination of an LED and a space aperture as disclosed in NPL 1 described above, a large proportion of the light emitted by the LED cannot pass through the space aperture, leading to low energy utilization efficiency. As a result, the cost of a power source part or the like increases, and the inherent advantage of the lensless microscope, namely its small size and low cost, cannot be obtained sufficiently.
Further, when an inline hologram is obtained by changing a separation distance between an image sensor and an observation target object, the lensless microscope as disclosed in NPL 1 described above performs control of changing a position of a stage on which the observation target object is placed, for example. However, when the accuracy of determining the position of the stage is low, a deviation in the position of the stage causes an error, resulting in a decrease in the accuracy of the obtained image.
Further, when a plurality of lights having different wavelengths are used, a difference in the angle of a ray becomes larger as the distance from the light emission point becomes larger, raising a concern that the recorded inline hologram is distorted and the reconstructed image contains an error. In order to prevent distortion due to the difference in angle of a ray, it is conceivable to adopt a solution such as introducing the plurality of lights through the same optical fiber, or combining the plurality of lights by using a dichroic mirror. However, when such a solution is used, the entire microscope becomes larger and the cost increases, which contradicts the advantage that the lensless microscope is small and low in cost.
In view of the above-mentioned circumstances, the present disclosure proposes an observation device, an observation method, and an observation system, which are capable of obtaining a more accurate image by improving the utilization efficiency of light energy while at the same time suppressing, with a simpler method, distortion that may occur in an inline hologram when a plurality of lights having different wavelengths are used.
According to the present disclosure, there is provided an observation device including a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength); and an image sensor installed so as to be opposed to the light source part with respect to an observation target object.
Further, according to the present disclosure, there is provided an observation method including: applying light to an observation target object for each light emission wavelength by a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength); and photographing an image of the observation target object for each light emission wavelength by an image sensor installed so as to be opposed to the light source part with respect to the observation target object.
Further, according to the present disclosure, there is provided an observation system including: a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength); an image sensor installed so as to be opposed to the light source part with respect to an observation target object; and a calculation processing part for executing calculation processing of obtaining an image of the observation target object by using a photographed image for each light emission wavelength, which is generated by the image sensor.
According to the present disclosure, a light source part including a plurality of light emitting diodes installed so as to satisfy a predetermined condition applies light to an observation target object, and an inline hologram that is caused by the applied light is photographed by an image sensor installed so as to be opposed to the light source part with respect to the observation target object.
As described above, according to the present disclosure, it is possible to obtain a more accurate image by improving the utilization efficiency of light energy while at the same time suppressing, with a simpler method, distortion that may occur in an inline hologram when a plurality of lights having different wavelengths are used.
The above-mentioned effect is not necessarily limitative, and in addition to or instead of the above-mentioned effect, any effect described in this specification or other effects that can be grasped from this specification may be exhibited.
In the following, description is given in detail of a preferred embodiment of the present disclosure with reference to the attached drawings. In this specification and the drawings, components having substantially the same functional configuration are assigned with the same reference numeral, and redundant description thereof is omitted.
Description is given in the following order.
In the following, description is given in detail of an observation device according to an embodiment of the present disclosure with reference to
First, description is given in detail of an overall configuration of an observation device according to this embodiment and a hologram acquisition part included in the observation device according to this embodiment with reference to
Overall Configuration of Observation Device
An observation device 1 according to this embodiment is a device to be used for observing a predetermined observation target object, and is a device for reconstructing an image of the observation target object by using a hologram (more specifically, inline hologram) image that occurs due to interference between light that has passed through the observation target object and light diffracted by the observation target object.
Regarding the observation target object focused on by the observation device 1 according to this embodiment, any object can be set as the observation target object as long as the object transmits the light used for observation to some extent and enables interference between light that has passed through the observation target object and light diffracted by the observation target object. Such an observation target object may be, for example, a phase object that can be regarded as substantially transparent to light having the predetermined wavelength used for observation, and such a phase object may include, for example, various kinds of biomaterials such as a cell of a living thing, biological tissue, a sperm cell, an egg cell, a fertilized egg, or a microbe.
In the following, description is given based on an exemplary case in which a biomaterial such as a cell, which is an example of the observation target object, exists in a predetermined sample holder.
As illustrated in
The hologram acquisition part 10 according to this embodiment acquires a hologram image of an observation target object C existing in a sample holder H placed at a predetermined position of an observation stage St under control by the calculation processing part 20 described later. The hologram image of the observation target object C acquired by the hologram acquisition part 10 is output to the calculation processing part 20 described later. A detailed configuration of the hologram acquisition part 10 having such a function is described later again.
The calculation processing part 20 integrally controls the processing of acquiring a hologram image by the hologram acquisition part 10. Further, the calculation processing part 20 executes a series of processing of reconstructing an image of the focused observation target object C by using the hologram image acquired by the hologram acquisition part 10. The image acquired by such a series of processing is presented to the user of the observation device 1 as an image that has photographed the focused observation target object C. A detailed configuration of the calculation processing part 20 having such a function is described later again.
In the above, the overall configuration of the observation device 1 according to this embodiment has been briefly described.
The observation device 1 according to this embodiment can also be realized as an observation system constructed by a hologram acquisition unit including the hologram acquisition part 10 having a configuration as described later in detail and a calculation processing unit including the calculation processing part 20 having a configuration as described later in detail.
Hologram Acquisition Part
Next, description is given in detail of the hologram acquisition part 10 in the observation device 1 according to this embodiment with reference to
As illustrated in
Illumination light from the light source part 11 is applied to the observation target object C supported in the sample holder H placed on the observation stage St. As schematically illustrated in
Further, the observation stage St has a region having a light transmission property of transmitting the illumination light of the light source part 11, and the sample holder H is placed on such a region. The region having a light transmission property provided in the observation stage St may be formed of glass or the like, for example, or may be formed by an opening that passes through the upper surface and bottom surface of the observation stage St along the z-axis direction.
When the illumination light is applied to the observation target object C, such illumination light is divided into transmitted light H1 passing through the observation target object C and diffracted light H2 diffracted by the observation target object C. Such transmitted light H1 and diffracted light H2 interfere with each other, so that a hologram (inline hologram) image of the observation target object C is generated on a sensor surface S2 of the image sensor 13 installed so as to be opposed to the light source part 11 with respect to the observation target object C. In this description, in the observation device 1 according to this embodiment, Z represents the length of a separation distance between the support surface S1 and the sensor surface S2, and L represents the length of a separation distance between the light source part 11 (more specifically, emission port of illumination light) and the image sensor 13 (sensor surface S2). In this embodiment, the transmitted light H1 functions as reference light for generating a hologram of the observation target object C. The hologram image (hereinafter also referred to as "hologram") of the observation target object C generated in this manner is output to the calculation processing part 20.
As illustrated in
In this manner, the hologram acquisition part 10 according to this embodiment does not use a space aperture unlike a conventional lensless microscope, and thus can use energy of illumination light applied by the light source part 11 more efficiently.
In the observation device 1 according to this embodiment, the light source part 11 applies a plurality of illumination lights having different wavelengths. In order to apply illumination lights having different wavelengths, such a light source part 11 includes a plurality of light emitting diodes (LEDs) that have different light emission wavelengths and enable application of partially coherent light. Thus, the above-mentioned bandpass filter 15 functions as a multi-bandpass filter designed to have one or a plurality of transmission wavelength bands so as to handle the light emission wavelength of each LED.
The light emission wavelengths of the LEDs constructing the light source part 11 are not particularly limited as long as the LEDs have different light emission wavelengths, and an LED having any light emission peak wavelength in any wavelength band can be used. The light emission wavelength (light emission peak wavelength) of each LED may belong to an ultraviolet light band, a visible light band, or a near-infrared band, for example. Further, each LED constructing the light source part 11 may be any publicly known LED as long as the LED satisfies the conditions on the two types of lengths described later in detail.
In the light source part 11 according to this embodiment, the number of LEDs is not particularly limited as long as the number is equal to or larger than two. The size of the light source part 11 becomes larger as the number of LEDs becomes larger, and thus the light source part 11 preferably includes three LEDs having different light emission wavelengths in consideration of reduction in size of the observation device 1. In the following, description is given based on an exemplary case in which the light source part 11 includes three LEDs having different light emission wavelengths.
In the light source part 11 according to this embodiment, the length of the light emission point of each LED constructing the light source part 11 is smaller than 100λ (λ: light emission wavelength). Further, the LEDs constructing the light source part 11 are arranged such that the separation distance between adjacent LEDs is equal to or smaller than 100λ (λ: light emission wavelength). At this time, as the light emission wavelength λ serving as the reference for the length of the light emission point and the separation distance between LEDs, the shortest peak wavelength among the peak wavelengths of the light emitted by the LEDs included in the light source part 11 is used.
The LEDs are arranged adjacent to one another such that the length of each light emission point is smaller than 100λ and the separation distance between adjacent LEDs is equal to or smaller than 100λ. This enables the observation device 1 according to this embodiment to cancel distortion between holograms due to deviation in the light emission point of each LED through the simple shift correction described later in detail, and thus to obtain a more accurate image. A group of LEDs satisfying the above-mentioned two conditions is hereinafter also referred to as "micro LEDs". When the length of each light emission point is equal to or larger than 100λ, or the separation distance between adjacent LEDs is larger than 100λ, the deviation in light emission point between LEDs becomes significant, and the distortion between holograms cannot be cancelled even when the shift correction described later in detail is performed. The length of each light emission point is preferably smaller than 80λ, and more preferably smaller than 40λ. Further, the separation distance between adjacent LEDs is preferably equal to or smaller than 80λ, and more preferably equal to or smaller than 60λ. Smaller values of the length of each light emission point and the separation distance between adjacent LEDs are more desirable, and a lower limit value is not particularly limited.
Further, the length of the above-mentioned separation distance is more preferably equal to or smaller than five times the length of the above-mentioned light emission point. When the length of the light emission point and the length of the separation distance satisfy the above-mentioned relationship, distortion between holograms can be cancelled more reliably and an image of even higher quality can be obtained. The length of the above-mentioned separation distance is still more preferably equal to or smaller than one and a half times the length of the above-mentioned light emission point.
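As a concrete illustration, the following minimal Python sketch (all numerical values are hypothetical examples, not values taken from this disclosure) checks a candidate LED layout against the dimensional conditions described above, using the shortest peak wavelength as the reference λ as stated earlier.

```python
# Sketch: checking the micro LED dimensional conditions described above.
# All numbers are hypothetical examples, not values from this disclosure.

def check_micro_led_conditions(emission_point_len_um, pitch_um, peak_wavelengths_um):
    """Return True if the layout satisfies the conditions of this embodiment."""
    lam = min(peak_wavelengths_um)  # reference: shortest peak wavelength
    ok_point = emission_point_len_um < 100 * lam        # emission point < 100 lambda
    ok_pitch = pitch_um <= 100 * lam                    # separation <= 100 lambda
    ok_ratio = pitch_um <= 5 * emission_point_len_um    # preferred: pitch <= 5x point
    return ok_point and ok_pitch and ok_ratio

# Example: peaks at 0.45, 0.52, 0.63 um, 20 um emission point, 30 um pitch.
print(check_micro_led_conditions(20.0, 30.0, [0.45, 0.52, 0.63]))  # True
```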
For example, as illustrated in
Further, for example, as illustrated in
In the light source part 11 illustrated in
The light source part 11 having the above-mentioned configuration sequentially turns on each LED 101 to generate a hologram at each light emission wavelength under control by the calculation processing part 20.
Referring back to
The image sensor 13 according to this embodiment records a hologram (inline hologram) of the observation target object C, which has occurred on the sensor surface S2 illustrated in
The hologram acquisition part 10 according to this embodiment records only the light intensity distribution (square value of amplitude) of a hologram on the sensor surface S2, and does not record the distribution of phases. However, the calculation processing part 20 executes a series of image reconstruction processing as described later in detail to reproduce the distribution of phases of the hologram.
Further, the bandpass filter 15 according to this embodiment as illustrated in
As described above, the hologram acquisition part 10 according to this embodiment can acquire a more accurate hologram image of the observation target object with an extremely small number of parts by including an image sensor and an LED for which the length and pitch of the light emission point satisfy a specific condition, and further including a bandpass filter as necessary.
In the above, the configuration of the hologram acquisition part 10 in the observation device 1 according to this embodiment has been described in detail with reference to
Next, description is given in detail of the calculation processing part included in the observation device 1 according to this embodiment with reference to
The calculation processing part 20 according to this embodiment integrally controls the activation state of the hologram acquisition part 10 included in the observation device 1 according to this embodiment. Further, the calculation processing part 20 uses a hologram image of the observation target object C acquired by the hologram acquisition part 10 to execute a series of processing of reconstructing an image of the observation target object C based on such a hologram image.
Overall Configuration of Calculation Processing Part
As schematically illustrated in
The hologram acquisition control part 201 is realized by, for example, a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), an input device, and a communication device. The hologram acquisition control part 201 integrally controls the activation state of the hologram acquisition part 10 based on observation condition information on various kinds of observation conditions of the hologram acquisition part 10 input through a user operation. Specifically, the hologram acquisition control part 201 controls the plurality of LEDs 101 provided in the light source part 11 of the hologram acquisition part 10, and controls the lighting state of each LED 101. Further, the hologram acquisition control part 201 controls the activation state of the image sensor 13 to generate a hologram (inline hologram) image of the observation target object C for each light emission wavelength on the sensor surface S2 of the image sensor 13 while at the same time synchronizing the activation state with the lighting state of each LED 101.
Further, the hologram acquisition control part 201 can also control the position of the observation stage St provided in the hologram acquisition part 10 along the z-axis direction. The hologram acquisition control part 201 may output the observation condition information and various kinds of information on the activation state of the hologram acquisition part 10 to the data acquisition part 203 and the image calculation part 205, and cause the data acquisition part 203 and the image calculation part 205 to use those pieces of information for various kinds of processing.
The data acquisition part 203 is realized by, for example, a CPU, a ROM, a RAM, and a communication device. The data acquisition part 203 acquires, from the hologram acquisition part 10, image data on the hologram image of the observation target object C for each light emission wavelength, which has been acquired by the hologram acquisition part 10 under control by the hologram acquisition control part 201. When the data acquisition part 203 has acquired image data from the hologram acquisition part 10, the data acquisition part 203 outputs the acquired image data on the hologram image to the image calculation part 205 described later. Further, the data acquisition part 203 may record the acquired image data on the hologram image into the storage part 211 described later as history information in association with time information on, for example, a date and time at which such image data has been acquired.
The image calculation part 205 is realized by, for example, a CPU, a ROM, and a RAM. The image calculation part 205 uses the image data on the hologram image of the observation target object C for each light emission wavelength, which is output from the data acquisition part 203, to execute a series of image calculation processing of reconstructing an image of the observation target object C. A detailed configuration of such an image calculation part 205 and details of the image calculation processing executed by the image calculation part 205 are described later again.
The output control part 207 is realized by, for example, a CPU, a ROM, a RAM, an output device, and a communication device. The output control part 207 controls output of image data on the image of the observation target object C calculated by the image calculation part 205. For example, the output control part 207 may cause the output device such as a printer to output the image data on the observation target object C calculated by the image calculation part 205 for provision to the user as a paper medium, or may cause various kinds of recording media to output the image data. Further, the output control part 207 may cause various kinds of information processing devices such as an externally provided computer, server, and process computer to output the image data on the observation target object C calculated by the image calculation part 205 so as to share the image data. Further, the output control part 207 may cause a display device such as various kinds of displays included in the observation device 1 or a display device such as various kinds of displays provided outside of the observation device 1 to output the image data on the observation target object C calculated by the image calculation part 205 in cooperation with the display control part 209 described later.
The display control part 209 is realized by, for example, a CPU, a ROM, a RAM, an output device, and a communication device. The display control part 209 performs display control when the image of the observation target object C calculated by the image calculation part 205 or various kinds of information associated with the image are displayed on an output device such as a display included in the calculation processing part 20 or an output device provided outside of the calculation processing part 20, for example. In this manner, the user of the observation device 1 can grasp various kinds of information on the focused observation target object on the spot.
The storage part 211 is realized by, for example, a RAM or a storage device included in the calculation processing part 20. The storage part 211 stores, for example, various kinds of databases or software programs to be used when the hologram acquisition control part 201 or the image calculation part 205 executes various kinds of processing. Further, the storage part 211 appropriately records, for example, various kinds of settings information on, for example, the processing of controlling the hologram acquisition part 10 executed by the hologram acquisition control part 201 or various kinds of image processing executed by the image calculation part 205, or progresses of the processing or various kinds of parameters that are required to be stored when the calculation processing part 20 according to this embodiment executes some processing. The hologram acquisition control part 201, the data acquisition part 203, the image calculation part 205, the output control part 207, the display control part 209, or the like can freely execute processing of reading/writing data from/to the storage part 211.
In the above, the overall configuration of the calculation processing part 20 included in the observation device 1 according to this embodiment has been described with reference to
Configuration of Image Calculation Part
The image calculation part 205 uses image data on the hologram image of the observation target object C for each light emission wavelength to execute a series of image calculation processing of reconstructing an image of the observation target object C. As schematically illustrated in
The propagation distance calculation part 221 is realized by, for example, a CPU, a ROM, and a RAM. The propagation distance calculation part 221 uses a digital focus technology (digital focusing) utilizing Rayleigh-Sommerfeld diffraction integral to calculate a specific value of a separation distance Z (separation distance between support surface S1 and sensor surface S2) illustrated in
In this case, the hologram acquisition control part 201 acquires, in advance, a focus image a(x, y, z) at each light emission wavelength while at the same time controlling the hologram acquisition part 10 to change the z-coordinate position of the observation stage St. In this case, a(x, y, 0) corresponds to a hologram image gλn generated on the sensor surface S2.
The propagation distance calculation part 221 first uses a plurality of focus images having different z-coordinate positions to calculate a luminance difference value f(z + Δz/2) between adjacent focus images, represented by the following expression (101). As can be understood from the following expression (101), a total sum of the luminance differences at the respective points forming the image data is calculated over the entire image. Such a total sum can be used to obtain an output curve representing how the luminance value changes along the z-axis direction (optical-path direction).
Next, the propagation distance calculation part 221 calculates a differential value f′(z) of the difference value calculated based on the expression (101) with respect to the variable z. The z-position that gives the peak of the obtained differential value f′(z) is the focus position of the focused hologram image g. Such a focus position is set as a specific value of the separation distance Z illustrated in
The propagation distance calculation part 221 outputs information on the propagation distance Z obtained in this manner to the preprocessing part 223 and the reconstruction processing part 225 at a subsequent stage.
In the above, the case of the propagation distance calculation part 221 calculating the separation distance Z by using the digital focus technology utilizing Rayleigh-Sommerfeld diffraction integral has been described. However, the propagation distance calculation part 221 may calculate the propagation distance Z based on the mechanical accuracy (accuracy of positioning observation stage St) of the hologram acquisition part 10.
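Expression (101) itself is not reproduced above, but based on the description it can be read as a total sum of luminance differences between neighboring focus images. Under that assumption, a minimal Python sketch of the digital focusing step is as follows, where `stack` holds focus images a(x, y, z) at regularly spaced z positions (function and parameter names are illustrative).

```python
import numpy as np

def estimate_propagation_distance(stack, z_positions):
    """Digital focusing sketch: locate the z that maximizes the derivative of
    the summed luminance difference between neighboring focus images.

    stack: array of shape (Nz, H, W), focus images a(x, y, z)
    z_positions: array of shape (Nz,), the corresponding z coordinates
    """
    # Total sum over all pixels of the luminance difference between adjacent
    # focus images (our reading of expression (101)).
    f = np.abs(np.diff(stack, axis=0)).sum(axis=(1, 2))   # shape (Nz - 1,)
    z_mid = 0.5 * (z_positions[:-1] + z_positions[1:])    # midpoint z values
    df = np.gradient(f, z_mid)                            # differentiate in z
    return z_mid[np.argmax(df)]                           # peak -> distance Z
```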
The preprocessing part 223 is realized by, for example, a CPU, a ROM, and a RAM. The preprocessing part 223 executes, for the photographed image (namely, hologram image gλn) for each light emission wavelength, preprocessing including at least shift correction of the image that depends on the positional relationship among the plurality of light emitting diodes. As illustrated in
The gradation correction part 231 is realized by, for example, a CPU, a ROM, and a RAM. The gradation correction part 231 performs gradation correction (e.g., dark level correction and inverse gamma correction) of the image sensor 13, and executes processing of returning an image signal based on the hologram images gλ1, gλ2, gλ3 output from the data acquisition part 203 to a linear state. Specific details of the processing of gradation correction to be executed are not particularly limited, and various kinds of publicly known details of processing can be appropriately used. The gradation correction part 231 outputs the hologram images gλ1, gλ2, gλ3 after gradation correction to the upsampling part 233 at a subsequent stage.
The upsampling part 233 is realized by, for example, a CPU, a ROM, and a RAM. The upsampling part 233 upsamples the image signals of the hologram images gλ1, gλ2, gλ3 after gradation correction. The hologram acquisition part 10 according to this embodiment is constructed as a so-called lensless microscope, and thus the resolution of the optical system may exceed the Nyquist frequency of the image sensor 13. Thus, in order to exhibit the maximum performance, the hologram images gλ1, gλ2, gλ3 after gradation correction are subjected to upsampling processing. The upsampling processing to be executed is not particularly limited, and various kinds of publicly known upsampling processing can be used appropriately.
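As a rough sketch of these two preprocessing steps, the following Python code applies a dark level subtraction and inverse gamma correction (the exact correction depends on the image sensor, so the constants here are assumptions) and then upsamples by an integer factor with nearest-neighbor replication; any publicly known interpolation method could be substituted.

```python
import numpy as np

def gradation_correct(img, dark_level=64.0, gamma=2.2):
    # Assumed corrections: subtract the sensor dark level and apply inverse
    # gamma to return the signal to a linear state (constants illustrative).
    lin = np.clip(img.astype(np.float64) - dark_level, 0.0, None)
    return (lin / lin.max()) ** gamma

def upsample(img, factor=2):
    # Nearest-neighbor replication; any publicly known method may be used.
    return np.kron(img, np.ones((factor, factor)))

hologram = np.random.rand(512, 512) * 255.0   # stand-in for a recorded hologram
pre = upsample(gradation_correct(hologram))   # 1024 x 1024, linear gradation
```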
The image shift part 235 is realized by, for example, a CPU, a ROM, and a RAM. The image shift part 235 executes, for the hologram image (more specifically, hologram image subjected to the above-mentioned gradation correction processing and upsampling processing) for each light emission wavelength, which has been acquired by the hologram acquisition part 10, shift correction of the image that depends on the positional relationship among the plurality of light emitting diodes.
More specifically, the image shift part 235 executes shift correction so as to cancel a deviation in position of the hologram image due to the position at which each LED 101 is provided. Such shift correction is performed by shifting spatial coordinates (x, y, z) defining the pixel position of the hologram image in a predetermined direction.
Specifically, the image shift part 235 selects one LED 101 serving as a reference from among the plurality of LEDs 101, and shifts the spatial coordinates (x, y, z) of the hologram images photographed by using the remaining LEDs 101 other than the reference LED in the direction of the hologram image photographed by using the reference LED. The movement amount (shift amount) at the time of performing such shifting is determined from the amount of positional deviation between the focused LEDs 101 and a magnification determined by the distance (L−Z) between the light source part 11 and the support surface S1 and the distance Z between the support surface S1 and the sensor surface S2. The distance Z is the propagation distance calculated by the propagation distance calculation part 221.
For example, as schematically illustrated in
In the expression (111) given above, δ represents a correction amount, L represents a distance between the light source part and the image sensor, Z represents a distance between the observation target object and the image sensor, and p represents a distance between the light emitting diodes.
In the above description, the LED 101B positioned at the center is set as a reference in
Further, for example, as illustrated in
For example, when the LED 101A and the LED 101B are focused on, the amount of deviation between the LED 101A and the LED 101B in the x-axis direction is (p/2), and the amount of deviation in the y-axis direction is {(√3/2)×p}. Thus, the image shift part 235 corrects the spatial coordinates (x, y, z) defining the pixel position of the hologram image obtained by using the LED 101B to (x + (p/2)×{Z/(L−Z)}, y − {(√3/2)×p}×{Z/(L−Z)}, z). Similarly, when the LED 101A and the LED 101C are focused on, the image shift part 235 corrects the spatial coordinates (x, y, z) defining the pixel position of the hologram image obtained by using the LED 101C to (x − (p/2)×{Z/(L−Z)}, y − {(√3/2)×p}×{Z/(L−Z)}, z).
Also in the example illustrated in
In the shift correction as described above, the shift amount is calculated in the length unit system of the parameters p, Z, and L. Thus, the image shift part 235 preferably converts the correction amount ultimately into an amount in the pixel unit system based on the pixel pitch of the image sensor 13.
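Putting the above together, and assuming that expression (111) is the geometric relation δ = p × Z/(L − Z) implied by the symbol definitions and the coordinate corrections described above, a Python sketch of the shift correction, including the final conversion to the pixel unit system, is as follows.

```python
import numpy as np

def shift_correct(hologram, dx_led, dy_led, L, Z, pixel_pitch):
    """Cancel the hologram displacement caused by an LED's positional deviation.

    dx_led, dy_led: deviation of the focused LED from the reference LED, in the
        same length unit as L, Z, and pixel_pitch (for the triangular layout in
        the text these are, e.g., p/2 and (sqrt(3)/2) * p).
    """
    mag = Z / (L - Z)                        # magnification of the deviation
    shift_x = dx_led * mag / pixel_pitch     # convert to the pixel unit system
    shift_y = dy_led * mag / pixel_pitch
    # Integer-pixel shift via roll; subpixel interpolation may also be used.
    return np.roll(hologram,
                   (int(round(shift_y)), int(round(shift_x))),
                   axis=(0, 1))
```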
Shift correction as described above can be realized only when the light source part 11 according to this embodiment uses a micro LED in a state in which two conditions on the length as described above are satisfied. In a case where the two conditions on the length as described above are not satisfied in the light source part 11, the positional deviation between hologram images cannot be cancelled even when the spatial coordinates defining the pixel position of the hologram image are shifted based on the idea as described above.
After the image shift part 235 has executed shift correction of the hologram image subjected to gradation correction and upsampling processing as described above, the image shift part 235 outputs the hologram image after shift correction to the image end processing part 237 at a subsequent stage.
The above description has been given with a focus on the case in which the position of a reference LED 101 is selected and the spatial coordinates defining the pixel positions forming the hologram image are shifted toward that position. However, the image shift part 235 may select, instead of the position of a reference LED 101, a reference position such as the center of gravity of the positions at which the plurality of LEDs 101 are arranged, and shift the spatial coordinates defining the pixel positions forming the hologram image toward that position, for example.
The image end processing part 237 is realized by, for example, a CPU, a ROM, and a RAM. The image end processing part 237 executes processing for the image ends of the hologram images gλ1, gλ2, gλ3 after shifting of the image. If a boundary condition specifying that the pixel value is 0 outside the input image were applied to the image end, the condition would be equivalent to a knife edge existing at the image end; as a result, diffracted light would occur and cause a new artifact. In view of this, the image end processing part 237 prepares an image having twice as many pixels as the original image in each of the vertical and horizontal directions, arranges the original image at the center, and executes processing of embedding the luminance values at the edge portion of the original image into the outside region. In this manner, it is possible to prevent a diffraction fringe that occurs due to the processing of the image end from influencing the range of the original image. After the image end processing part 237 has executed the processing as described above, the image end processing part 237 outputs the hologram images gλ1, gλ2, gλ3 after execution of the processing to the initial complex amplitude generation part 239 at a subsequent stage.
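A minimal Python sketch of this image end processing, which doubles the pixel count in each of the vertical and horizontal directions and fills the outside region with the edge luminance of the centered original, is as follows.

```python
import numpy as np

def pad_image_end(img):
    """Prepare an image twice the original size in each direction, with the
    original at the center and the outside filled by replicating the edge
    luminance, so that the knife-edge diffraction fringe stays outside the
    original image area."""
    h, w = img.shape
    return np.pad(img, ((h // 2, h - h // 2), (w // 2, w - w // 2)), mode="edge")
```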
The initial complex amplitude generation part 239 is realized by, for example, a CPU, a ROM, and a RAM. The initial complex amplitude generation part 239 sets, for the hologram images gλ1, gλ2, gλ3, the square root of the pixel value (luminance value) as the real part of the complex amplitude of the hologram and 0 as the imaginary part thereof to obtain an initial value of the complex amplitude. In this manner, the initial complex amplitudes of the hologram images gλ1, gλ2, gλ3, which have only the amplitude component, are generated. The above-mentioned pixel value (luminance value) is the pixel value (luminance value) subjected to the various kinds of preprocessing described above. In this manner, a preprocessed image to be subjected to the series of reconstruction processing by the reconstruction processing part 225 is generated.
After the initial complex amplitude generation part 239 has generated the preprocessed image as described above, the initial complex amplitude generation part 239 outputs the generated preprocessed image to the reconstruction processing part 225.
In the above, the configuration of the preprocessing part 223 according to this embodiment has been described with reference to
As illustrated in
Specifically, the reconstruction processing part 225 reproduces the lost phase component by propagating the hologram image through optical wave propagation calculation by the reconstruction calculation part 225A and repeatedly replacing those amplitude components by the amplitude replacement part 225B. At this time, the reconstruction processing part 225 repeatedly executes a cycle of replacing the amplitude components of the complex amplitude distribution of the hologram image obtained from the result of propagation calculation with the actually measured amplitude component such that only the phase component remains.
Meanwhile, Maxwell's equations are reduced to a wave equation in a lossless, isotropic, and uniform medium. Further, for monochromatic light in which time evolution is not considered, each component of the electric field and the magnetic field satisfies a Helmholtz equation represented by the following expression (201). In the following expression (201), g(x, y, z) represents a complex amplitude component of an electromagnetic vector component, and k represents a wave number represented by the following expression (203). "Propagation of the hologram image" according to this embodiment refers to a series of processing of using a boundary condition g(x, y, Z) (namely, the complex amplitude component of the hologram image on the sensor surface S2) given for a specific plane (propagation source plane) to obtain a solution of the Helmholtz equation for another plane (the support surface S1 in this embodiment). Such propagation processing is called the "angular spectrum method" (plane wave expansion method).
When a plane parallel to the propagation source is considered to be the support surface S1, and the solution of the Helmholtz equation on such a support surface S1 is set as g(x, y, 0), the exact solution is given by the following expression (205), which is also called “Rayleigh-Sommerfeld diffraction integral”. In the following expression (205), r′ is given by the following expression (207).
It takes time to calculate the integral form as shown in the above expression (205), and thus in this embodiment, an expression given by the following expression (209), which is obtained by Fourier-transforming both sides of the above expression (205), is adopted. In the following expression (209), G represents the Fourier transform of a complex amplitude component g, and F−1 represents inverse Fourier transform. Further, u, v, w represent spatial frequency components in the x-direction, the y-direction, and the z-direction, respectively. In this case, u and v are associated with corresponding components of a wave number vector k=kx·x+ky·y+kz·z (x, y, z are unit vectors) and u=kx/2π and v=ky/2π, whereas w is given by the following expression (211).
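Under the conventions sketched above (u = kx/2π, v = ky/2π, and w presumably given by expression (211) as w = (1/λ² − u² − v²)^(1/2); the sign of the exponent may need flipping for a different Fourier convention), one propagation step of the angular spectrum method can be written in Python roughly as follows.

```python
import numpy as np

def angular_spectrum_propagate(g, wavelength, dz, pixel_pitch):
    """Propagate the complex field g over a distance dz (a negative dz
    back-propagates from the sensor surface S2 toward the support surface S1)."""
    ny, nx = g.shape
    u = np.fft.fftfreq(nx, d=pixel_pitch)        # spatial frequency in x
    v = np.fft.fftfreq(ny, d=pixel_pitch)        # spatial frequency in y
    uu, vv = np.meshgrid(u, v)
    w_sq = 1.0 / wavelength**2 - uu**2 - vv**2   # our reading of expression (211)
    w = np.sqrt(np.maximum(w_sq, 0.0))
    transfer = np.exp(1j * 2.0 * np.pi * w * dz)
    transfer[w_sq < 0.0] = 0.0                   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(g) * transfer)
```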
As described later, the reconstruction processing part 225 according to this embodiment uses the complex amplitude distribution of the hologram propagated from the sensor surface S2 to the support surface S1 at a predetermined wavelength to recalculate the complex amplitude distribution of the hologram to be propagated from the support surface S1 to the sensor surface S2 at a wavelength different from the above-mentioned wavelength. Thus, in this embodiment, the following expression (213), which replaces the above expression (209), is adopted.
[Math. 6]
gλ2(x, y, z) = F−1{Gλ1(u, v, z) exp[−i2π(w2(u, v) − w1(u, v))z]}   expression (213)
The above expression (213) means using the complex amplitude distribution of the hologram gλ1 propagated from the sensor surface S2 to the support surface S1 at the wavelength λ1 to calculate the complex amplitude distribution of the hologram gλ2 to be propagated from the support surface S1 to the sensor surface S2 at the wavelength λ2.
In this embodiment, the reconstruction calculation part 225A repeatedly calculates optical wave propagation between the sensor surface S2 and the support surface S1 based on the propagation calculation expressions of the above expressions (209) and (213). For example, when the amplitude replacement part 225B does not execute amplitude replacement on the support surface S1 as described later, the reconstruction calculation part 225A executes propagation calculation based on the expression (213). On the contrary, when the amplitude replacement part 225B executes amplitude replacement, the hologram gλ1 is first propagated from the sensor surface S2 to the support surface S1 at the wavelength λ1 based on the above expression (209), the amplitude replacement part 225B replaces the amplitude components of the resulting complex amplitude distribution with a predetermined amplitude representative value, and the complex amplitude distribution of the hologram gλ2 to be propagated from the support surface S1 to the sensor surface S2 at the wavelength λ2 is then calculated.
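Expression (213) can likewise be applied as a single frequency-domain step: the hologram recorded at the wavelength λ1 and back-propagated to the support surface S1 is sent forward to the sensor surface S2 at the wavelength λ2 by multiplying by the transfer-function difference. A sketch under the same assumed conventions as above:

```python
import numpy as np

def propagate_lambda_swap(g_lambda1, lam1, lam2, z, pixel_pitch):
    """Expression (213) sketch: the round trip S2 -> S1 at lam1 followed by
    S1 -> S2 at lam2, merged into one multiplication by exp[-i2pi(w2 - w1)z]."""
    ny, nx = g_lambda1.shape
    uu, vv = np.meshgrid(np.fft.fftfreq(nx, d=pixel_pitch),
                         np.fft.fftfreq(ny, d=pixel_pitch))

    def w(lam):
        return np.sqrt(np.maximum(1.0 / lam**2 - uu**2 - vv**2, 0.0))

    G = np.fft.fft2(g_lambda1)
    return np.fft.ifft2(G * np.exp(-1j * 2.0 * np.pi * (w(lam2) - w(lam1)) * z))
```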
In the following, specific description is given of a series of propagation calculation processing to be executed by the reconstruction processing part 225 with reference to
First, among preprocessed images output from the preprocessing part 223, an input image Iin1 is read (Step S101), and the reconstruction calculation part 225A executes first optical wave propagation calculation of propagating the complex amplitude distribution (light intensity distribution) of the hologram image gλ1 from the sensor surface S2 to the support surface S1 (Step S103). The complex amplitude distribution of the hologram image gλ1 output from the preprocessing part 223 is represented by the following expression (221), and the complex amplitude distribution of the hologram image gλ1 propagated to the support surface S1 is represented by the following expression (223).
The complex amplitude distribution of the hologram gλ1 represented by the following expression (223) is the complex amplitude distribution of the hologram image gλ1 obtained as a result of the above-mentioned first optical wave propagation calculation. The complex amplitude distribution of the hologram image in this embodiment is the complex amplitude distribution of light forming the hologram, and has the same meaning in the following description.
Further, in the following expression (221), A(x, y, z) represents the amplitude component, and exp(iφ(x, y, z)) represents the phase component (set initial value). Similarly, in the following expression (223), A′(x, y, 0) represents the amplitude component, and exp(iφ′(x, y, 0)) represents the phase component.
gλ1(x, y, z) = A(x, y, z) exp(iφ(x, y, z))   expression (221)
gλ1(x, y, 0) = A′(x, y, 0) exp(iφ′(x, y, 0))   expression (223)
Next, the amplitude replacement part 225B extracts the amplitude components A′ of the complex amplitude distribution of the hologram image gλ1 propagated to the support surface S1 at the wavelength λ1, and calculates an average value Aave of the amplitude components A′. Then, the amplitude replacement part 225B replaces the amplitude components A′ of the complex amplitude distribution of the hologram image gλ1 with the average value Aave on the support surface S1 as one procedure of the second optical wave propagation calculation described later (Step S105).
As a result, the amplitude component of the complex amplitude distribution of the hologram image gλ1 is smoothed, and a calculation load in the subsequent repetition processing is reduced. The hologram image gλ1 for which the amplitude components A′ are replaced with the average value Aave is represented by the following expression (225). Further, the average value Aave after replacement is represented by the following expression (227). A parameter N in the following expression (227) is the total number of pixels.
gλ1(x, y, 0) = Aave · exp(iφ′(x, y, 0))   expression (225)
Aave = (1/N) Σ Σ A′(x, y, 0)   expression (227)
The average value Aave according to this embodiment is typically the average value of the amplitude components A′ in the complex amplitude distribution (expression (223)) obtained as a result of the above-mentioned first optical wave propagation calculation. Such an average value can be set as the ratio (a cumulative average) of the total sum of the amplitude components corresponding to the respective pixels of the hologram image gλ1(x, y, 0) to the number N of pixels of the hologram image gλ1(x, y, 0).
Further, in the above-mentioned example, the amplitude components A′ are replaced with the average value Aave, but another predetermined amplitude representative value of the amplitude components A′ of the complex amplitude distribution (expression (223)) of the hologram image gλ1 may also be used. For example, the amplitude replacement part 225B may replace the amplitude components A′ with a median of the amplitude components A′ instead of the average value Aave, or may replace the amplitude components A′ with a low-pass filter transmission component of the amplitude components A′.
Next, the reconstruction calculation part 225A executes the second optical wave propagation calculation of propagating the complex amplitude distribution of the hologram image gλ1 for which the amplitude components A′ are replaced with the average value Aave from the support surface S1 to the sensor surface S2 at the wavelength λ2 (Step S107). In other words, the complex amplitude distribution of the hologram gλ2 to be propagated from the complex amplitude distribution of the hologram image gλ1 represented by the above expression (225) to the sensor surface S2 at the wavelength λ2 is obtained by propagation calculation. Such a complex amplitude distribution of the hologram image gλ2 is represented by the following expression (229).
gλ2(x, y, z) = A″(x, y, z) exp(iφ″(x, y, z))   expression (229)
Next, the amplitude replacement part 225B replaces the amplitude components A″ of the complex amplitude distribution of the hologram image gλ2 propagated at the wavelength λ2 with actually measured values Aλ2 of the amplitude components A″ on the sensor surface S2 as one procedure of the above-mentioned first optical wave propagation calculation (Step S109). Those actually measured values Aλ2 are amplitude components extracted from the hologram image gλ2 acquired as an input image Iin2.
The hologram image gλ2 for which the amplitude components A″ are replaced with the actually measured values Aλ2 on the sensor surface S2 is represented by the following expression (231). As a result, it is possible to obtain the hologram image gλ2 having a phase component. In the following expression (231), Aλ2(x, y, z) represents the amplitude component, and exp(iφ″ (x, y, z)) represents the reproduced phase component.
gλ2(x, y, z) = Aλ2(x, y, z) exp(iφ″(x, y, z))   expression (231)
In this manner, the reconstruction processing part 225 executes the first light propagation calculation of propagating the complex amplitude distribution having the light intensity distribution of the hologram image of the observation target object C acquired on the sensor surface S2 from the sensor surface S2 to the support surface S1, and executes the cycle of the second light propagation calculation of propagating the complex amplitude distribution obtained as a result of the first light propagation calculation from the support surface S1 to the sensor surface S2.
In this embodiment, as illustrated in
Next, the reconstruction calculation part 225A determines whether the above-mentioned propagation calculation has converged (Step S135). A specific technique of convergence determination is not particularly limited, and various kinds of publicly known techniques can be used. When the reconstruction calculation part 225A has determined that the series of calculation processing for reproducing the phase has not converged (Step S135: NO), the reconstruction calculation part 225A returns to Step S103 to restart the series of calculation processing for reproducing the phase. On the contrary, when the reconstruction calculation part 225A has determined that the series of calculation processing for reproducing the phase has converged (Step S135: YES), as illustrated in
In the above description, the time point at which the series of calculation processing for reproducing the phase is finished is determined by convergence determination. However, in this embodiment, instead of the above-mentioned convergence determination, whether or not the series of calculations as described above has been executed a defined number of times may be used to determine the time point at which the series of calculation processing is finished. In this case, the number of repetitions of the calculation is not particularly limited, but it is preferably set to about ten to one hundred, for example.
Further, when the reconstruction calculation part 225A obtains a reconstructed image, the reconstruction calculation part 225A can obtain the amplitude image of the focused observation target object C by calculating Re² + Im² through use of the real part (Re) and the imaginary part (Im) of the complex amplitude distribution obtained last, and can obtain the phase image of the focused observation target object C by calculating arctan(Im/Re).
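Gathering the pieces, the following condensed Python sketch runs the whole cycle described above: the initial complex amplitude is formed from the square root of each measured intensity, the field is back-propagated to the support surface S1, the amplitudes there are replaced with their average, the field is propagated forward at the next wavelength, and the sensor amplitudes are replaced with the measured values, for a fixed number of repetitions. The propagation helper is the same transfer-function calculation as in the earlier sketches, condensed so that the block stands alone; the function names and the final back-propagation step are illustrative assumptions rather than the exact processing of this embodiment.

```python
import numpy as np

def propagate(g, lam, dz, pitch):
    # Same transfer-function propagation as the earlier sketches, condensed.
    ny, nx = g.shape
    uu, vv = np.meshgrid(np.fft.fftfreq(nx, pitch), np.fft.fftfreq(ny, pitch))
    w = np.sqrt(np.maximum(1.0 / lam**2 - uu**2 - vv**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(g) * np.exp(1j * 2.0 * np.pi * w * dz))

def reconstruct(holograms, wavelengths, Z, pitch, n_iter=50):
    """holograms: preprocessed intensity images, one per light emission
    wavelength, all registered beforehand by the shift correction above."""
    amps = [np.sqrt(h) for h in holograms]            # measured amplitudes
    g = amps[0].astype(complex)                       # initial complex amplitude
    n = len(wavelengths)
    for _ in range(n_iter):                           # fixed repetition count
        for k in range(n):
            g1 = propagate(g, wavelengths[k], -Z, pitch)   # S2 -> S1
            a_ave = np.abs(g1).mean()                      # representative value
            g1 = a_ave * np.exp(1j * np.angle(g1))         # replace on S1
            k2 = (k + 1) % n
            g = propagate(g1, wavelengths[k2], +Z, pitch)  # S1 -> S2, next lam
            g = amps[k2] * np.exp(1j * np.angle(g))        # restore measured amp
    g_obj = propagate(g, wavelengths[0], -Z, pitch)   # final field on S1
    amplitude_image = g_obj.real**2 + g_obj.imag**2   # Re^2 + Im^2
    phase_image = np.arctan2(g_obj.imag, g_obj.real)  # arctan(Im / Re)
    return amplitude_image, phase_image
```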
In
The series of processing as described above is executed to enable the reconstruction processing part 225 to calculate the amplitude image and phase image of the focused observation target object C. An example of the phase image of the observation target object obtained in this manner is illustrated in
In the above, the configuration of the image calculation part 205 according to this embodiment has been described in detail.
In the above, an example of the function of the calculation processing part 20 according to this embodiment has been described. Each component described above may be constructed by using a general-purpose part or circuit, or may be constructed by hardware dedicated to the function of each component. Further, a CPU or the like may execute all the functions of each component. Thus, the configuration to be used can be changed appropriately depending on the technological level at the time of carrying out this embodiment.
A computer program for realizing each function of the calculation processing part according to this embodiment as described above can be created, and implemented on a personal computer, for example. Further, a computer-readable recording medium having stored thereon such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disc, a magneto-optical disk, or a flash memory. Further, the above-mentioned program may be distributed via a network or the like without using a recording medium.
Hardware Configuration of Calculation Processing Part
Next, description is given in detail of the hardware configuration of the calculation processing part 20 according to an embodiment of the present disclosure with reference to
The calculation processing part 20 mainly includes a CPU 901, a ROM 903, and a RAM 905. The calculation processing part 20 further includes a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
The CPU 901 functions as a calculation processing device and a control device, and controls an entire or part of operation of the calculation processing part 20 in accordance with various kinds of programs recorded in the ROM 903, the RAM 905, the storage device 919, or a removable recording medium 927. The ROM 903 stores a program, a calculation parameter, or other information to be used by the CPU 901. The RAM 905 temporarily stores, for example, the program to be used by the CPU 901 or a parameter that changes as appropriate through execution of the program. Those components are connected to one another via the host bus 907 constructed by an internal bus such as a CPU bus.
The host bus 907 is connected to the external bus 911 such as a peripheral component interconnect/interface (PCI) bus via the bridge 909.
The input device 915 is operation means to be operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, or a lever. Further, the input device 915 may be, for example, remote control means (so-called remote controller) that uses an infrared ray or other radio waves, or may be an external connection device 929 such as a mobile phone or PDA that supports operation of the calculation processing part 20. Further, the input device 915 is constructed by, for example, an input control circuit for generating an input signal based on information input by the user using the above-mentioned operation means, and outputting the generated input signal to the CPU 901. The user can input various kinds of data to the calculation processing part 20 or instruct the calculation processing part 20 to execute a processing operation by operating the input device 915.
The output device 917 is constructed by a device that can notify the user of acquired information visually or aurally. Such a device includes a display device such as a CRT display device, a liquid crystal display device, a plasma display device, an EL display device, or a lamp, a sound output device such as a speaker or headphones, a printer device, a mobile phone, or a facsimile. The output device 917 outputs, for example, a result obtained by various kinds of processing executed by the calculation processing part 20. Specifically, the display device displays the result obtained by various kinds of processing executed by the calculation processing part 20 as text or an image. Meanwhile, the sound output device converts an audio signal including, for example, reproduced sound data or acoustic data into an analog signal, and outputs the analog signal.
The storage device 919 is a device for storing data constructed as an example of a storage part of the calculation processing part 20. The storage device 919 is constructed by, for example, a magnetic storage device such as a hard disk drive, a semiconductor storage device, an optical storage device, or a magneto-optical storage device. This storage device 919 stores, for example, a program to be executed by the CPU 901, various kinds of data, and various kinds of data acquired from the outside.
The drive 921 is a reader or writer for a recording medium, and is incorporated in the calculation processing part 20 or externally attached to the calculation processing part 20. The drive 921 reads information recorded in the set removable recording medium 927 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 905. Further, the drive 921 can also write information into the set removable recording medium 927 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory. The removable recording medium 927 is, for example, DVD media, HD-DVD media, or Blu-ray (registered trademark) media. Further, the removable recording medium 927 may be, for example, CompactFlash (CF) (registered trademark), a flash memory, or a secure digital (SD) memory card. Further, the removable recording medium 927 may be, for example, an integrated circuit (IC) card or an electronic device having mounted thereon a non-contact IC chip.
The connection port 923 is a port for directly connecting a device to the calculation processing part 20. An example of the connection port 923 is a universal serial bus (USB) port, an IEEE 1394 port, or a small computer system interface (SCSI) port. Another example of the connection port 923 is an RS-232C port, an optical audio terminal, or a high-definition multimedia interface (HDMI) (registered trademark). The external connection device 929 is connected to the connection port 923 so as to cause the calculation processing part 20 to directly acquire various kinds of data from the external connection device 929, or provide the external connection device 929 with various kinds of data.
The communication device 925 is, for example, a communication interface constructed by a communication device or the like for connecting to a communication network 931. The communication device 925 is, for example, a communication card or the like for a wired or wireless local area network (LAN), Bluetooth (registered trademark), or Wireless USB (WUSB). Further, the communication device 925 may be, for example, a router for optical communication, a router for asymmetric digital subscriber line (ADSL), or a modem for various kinds of communication. This communication device 925 can transmit and receive signals and the like to and from, for example, the Internet or other communication devices in accordance with a predetermined protocol such as TCP/IP. Further, the communication network 931 to be connected to the communication device 925 is constructed by a network connected in a wired or wireless manner, and may be, for example, the Internet, a domestic LAN, infrared communication, radio communication, or satellite communication.
In the above, an example of the hardware configuration that can realize the function of the calculation processing part 20 according to an embodiment of the present disclosure has been described. Each component described above may be constructed by using a general-purpose member or by hardware dedicated to the function of each component. Thus, the configuration to be used can be changed appropriately depending on the technological level at the time of carrying out this embodiment.
Next, description is given briefly of a flow of a method of observing an observation target object by using the observation device 1 as described above with reference to
As illustrated in the figure, first, a hologram image of the observation target object is photographed for each light emission wavelength, and the calculation processing part 20 acquires the photographed hologram images (Step S11).
Next, the propagation distance calculation part 221 included in the calculation processing part 20 of the observation device 1 uses the acquired hologram image to calculate the propagation distance z (Step S13), and outputs the obtained result to the preprocessing part 223 and the reconstruction processing part 225. After that, the preprocessing part 223 uses the obtained hologram image and the propagation distance calculated by the propagation distance calculation part 221 to execute the series of preprocessing as described above (Step S15). Shift correction of an image based on the position of the LED is performed in such preprocessing, so that the observation method according to this embodiment can suppress with a simpler method distortion that may occur in an inline hologram when a plurality of lights having different wavelengths are used.
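To make such shift correction concrete, the following is a minimal Python sketch, assuming the projection geometry in which a lateral LED offset p appears at the image sensor as a hologram displacement of magnitude pZ/(L - Z) (L: distance between the light source part and the image sensor, Z: distance between the observation target object and the image sensor). The function name, argument names, and the use of scipy are illustrative assumptions, not part of this disclosure.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_correct(hologram, led_offset_m, L, Z, pixel_pitch):
    """Register the hologram of a laterally offset LED to that of the
    reference LED (hypothetical helper; a sketch, not the disclosure).

    hologram     : 2-D array, raw hologram for one wavelength
    led_offset_m : (dy, dx) offset of this LED from the reference LED [m]
    L            : light source part to image sensor distance [m]
    Z            : observation target object to image sensor distance [m]
    pixel_pitch  : sensor pixel size [m]
    """
    # A lateral source offset p displaces the hologram by p * Z / (L - Z)
    # in the opposite direction, so the corrective shift is applied along
    # the direction of the LED offset itself.
    scale = Z / (L - Z)
    dy = led_offset_m[0] * scale / pixel_pitch
    dx = led_offset_m[1] * scale / pixel_pitch
    return nd_shift(hologram, shift=(dy, dx), order=1, mode="nearest")
```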
After that, the reconstruction processing part 225 uses the hologram image after preprocessing (preprocessed image) to execute the series of reconstruction processing as described above (Step S17). As a result, the reconstruction processing part 225 can obtain a reconstructed image (amplified image and phase image) of the focused observation target object. After calculating the reconstructed image of the focused observation target object, the reconstruction processing part 225 outputs image data of the reconstructed image to the output control part 207.
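The details of the series of reconstruction processing are given earlier in this description; as a rough illustration of the kind of light propagation calculation involved, the following is a minimal single-wavelength angular spectrum propagator. It omits any multi-wavelength phase retrieval iterations an actual reconstruction may perform, and all names are illustrative assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, distance, pixel_pitch):
    """Propagate a complex field in free space by the angular spectrum
    method; a negative distance back-propagates toward the object."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Clip negative arguments to suppress evanescent components.
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Back-propagating the hologram amplitude by -z gives a first estimate
# of the object field, from which amplitude and phase images follow:
#   obj = angular_spectrum(np.sqrt(hologram), wavelength, -z, pixel_pitch)
#   amplitude, phase = np.abs(obj), np.angle(obj)
```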
The output control part 207 outputs the reconstructed image output from the reconstruction processing part 225 by a method specified by the user, for example, and presents the reconstructed image to the user (Step S19). As a result, the user can observe the focused observation target object.
In the above, the observation method according to this embodiment has been described briefly with reference to
In this manner, the observation device and observation method according to this embodiment provide a device that can satisfactorily observe a transparent phase object such as a cell with a hologram acquisition part constructed by an extremely small number of parts, such as LEDs, an image sensor, and a bandpass filter. Such a device is extremely easy to downsize, and thus it is possible to arrange an observation device also in a region in which a microscope has hitherto not been able to be installed, such as the inside of a bioreactor. As a result, it is possible to obtain a phase image of a biomaterial such as a cell in a simpler manner.
Further, the observation device according to this embodiment does not waste light at a space aperture or the like, and thus it is possible to realize an observation device including a highly efficient light source with low power consumption. Further, since adjacent micro LEDs having different wavelengths are used, it is not necessary to execute complicated preprocessing, which can simplify and speed up processing.
In the following, description is given briefly of the observation device and observation method according to an embodiment of the present disclosure with reference to specific images. In the example given below, an observation device having the configuration as illustrated in
The obtained results are both shown in
As can be clearly understood from comparison between
Meanwhile, as can be clearly understood from comparison between
In order to compare the results shown in
When frequencies producing the same amplitude were compared in
Meanwhile, when the observation device according to an embodiment of the present disclosure illustrated in
As can be clearly understood from those results, when the observation device according to an embodiment of the present disclosure is used, a finer frequency component is included than in the case of using a generally used LED as the coherent light source. Further, it is also understood that the observation device according to an embodiment of the present disclosure records a finer frequency component than that of the conventional method. Such a result indicates that the observation device according to an embodiment of the present disclosure successfully records an inline hologram (in other words, interference of a higher frequency) more accurately than the conventional method. This result is presumably because the light emission point of the light source part of the observation device according to an embodiment of the present disclosure was smaller than the pinhole of the conventional method.
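As one way to make such a frequency comparison concrete (a sketch of a plausible analysis, not the procedure actually used to obtain the results above), the radially averaged Fourier magnitude of each recorded hologram can be computed, and the frequencies producing the same amplitude can be read off and compared.

```python
import numpy as np

def radial_spectrum(hologram):
    """Radially averaged magnitude spectrum of a recorded hologram;
    a finer recorded frequency component shows up as a slower decay."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(hologram - hologram.mean())))
    ny, nx = hologram.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2.0, x - nx / 2.0).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=F.ravel())
    return sums / np.maximum(counts, 1)
```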
In the above, a preferred embodiment of the present disclosure has been described in detail with reference to the attached drawings. However, the technical scope of the present disclosure is not limited to such an example. It is clear that a person having ordinary skill in the art to which the present disclosure pertains could conceive of various changes or modifications within the scope of the technical idea described in the appended claims. It is understood that such changes or modifications also naturally fall within the technical scope of the present disclosure.
Further, the effects described in this specification are given merely for explanation or illustration, and are not limiting. That is, in addition to or instead of the above-mentioned effects, the technology according to the present disclosure may exhibit other effects that are apparent to a person skilled in the art from the description of this specification.
The following configuration also falls within the technical scope of the present disclosure.
(1)
An observation device including:
a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength); and
an image sensor installed so as to be opposed to the light source part with respect to an observation target object.
(2)
The observation device according to (1), in which the separation distance is equal to or smaller than five times the length of the light emission point.
(3)
The observation device according to (1) or (2), in which a bandpass filter setting a transmission wavelength band to a peak wavelength of each of the plurality of light emitting diodes is installed between the observation target object and the light source part.
(4)
The observation device according to any one of (1) to (3), further including a calculation processing part for executing calculation processing for obtaining an image of the observation target object by using a photographed image for each light emission wavelength, the photographed image being generated by the image sensor, in which
the calculation processing part includes:
a preprocessing part for executing, for the photographed image for each light emission wavelength, preprocessing including at least shift correction of the image that depends on a positional relationship among the plurality of light emitting diodes; and
a reconstruction processing part for reconstructing the image of the observation target object by using the preprocessed photographed image.
(5)
The observation device according to (4), in which the preprocessing part is configured to execute the shift correction so as to cancel a positional deviation between the photographed images due to positions at which the respective light emitting diodes are installed.
(6)
The observation device according to (4) or (5), in which the preprocessing part is configured to:
select one light emitting diode serving as a reference from among the plurality of light emitting diodes; and shift spatial coordinates of the photographed images which are photographed by using the remaining light emitting diodes other than the light emitting diode serving as the reference, in a direction of the photographed image which is photographed by using the light emitting diode serving as the reference.
(7)
The observation device according to any one of (4) to (6), in which
the light source part includes three light emitting diodes having different light emission wavelengths arranged in one row, and
the preprocessing part is configured to shift spatial coordinates of the photographed images which are photographed by using the light emitting diodes positioned at both ends, in a direction of the photographed image which is photographed by using the light emitting diode positioned at a center by a correction amount δ calculated by the following expression (1).
(8)
The observation device according to any one of (4) to (6), in which
the light source part includes three light emitting diodes having different light emission wavelengths arranged in a triangle, and
the preprocessing part is configured to shift spatial coordinates of the photographed images which are photographed by using any two of the light emitting diodes, in a direction of the photographed image which is photographed by using the one remaining light emitting diode.
(9)
The observation device according to any one of (1) to (8), in which the observation target object is a biomaterial.
(10)
An observation method including:
applying light to an observation target object for each light emission wavelength by a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength); and
photographing an image of the observation target object for each light emission wavelength by an image sensor installed so as to be opposed to the light source part with respect to the observation target object.
(11)
An observation system including:
a light source part in which a plurality of light emitting diodes having different light emission wavelengths with a length of each light emission point being smaller than 100λ (λ: light emission wavelength) are arranged such that a separation distance between the adjacent light emitting diodes is equal to or smaller than 100λ (λ: light emission wavelength);
an image sensor installed so as to be opposed to the light source part with respect to an observation target object; and
a calculation processing part for executing calculation processing of obtaining an image of the observation target object by using a photographed image for each light emission wavelength which is generated by the image sensor.
In the above expression (1), δ represents a correction amount, L represents a distance between the light source part and the image sensor, Z represents a distance between the observation target object and the image sensor, and p represents a distance between the light emitting diodes.
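Expression (1) itself is not reproduced above. From these symbol definitions and a similar-triangles argument (light emitted at distance L from the image sensor passing an object at distance Z), expression (1) presumably takes the following form; this is a reconstruction under that assumption, not a quotation of the original.

```latex
% Assumed form of expression (1): a lateral LED spacing p projects to
% a hologram shift of magnitude
\delta = \frac{Z}{L - Z}\, p
```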
Priority application: JP 2018-175058 (national), filed in Japan in September 2018.
Filing document: PCT/JP2019/035992 (WO), filed September 12, 2019.