One aspect of the embodiment relates to an image acquisition device that acquires an image of a subject.
Conventionally, a method of acquiring a high-resolution image of a subject has been known. For example, in W. Saafin, M. Vega, R. Molina, and A. K. Katsaggelos, "Image super-resolution from compressed sensing observations", 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 4268-4272, a method of acquiring a high-resolution image of a subject using an image reconstruction technology called compressed sensing is disclosed. According to this method, it is possible to reconstruct an image with a finer resolution than a pixel of an image sensor, based on output of the image sensor that images the subject.
The conventional method described above tends to provide insufficient accuracy of the reconstructed image in a case where a subject having a finer structure than the pixels of the image sensor is targeted. Furthermore, since the amount of information obtained from the pixels of the image sensor is small, the accuracy of the reconstructed image tends to decrease.
Therefore, an object of one aspect of the embodiment is to provide an image acquisition device capable of accurately obtaining a high-resolution image of a subject.
An image acquisition device according to one aspect of the embodiment includes a mask member that has a pattern that changes a transmittance of some beams from a subject on an intersecting plane intersecting with an incident direction of the beam, a detector including a plurality of pixels that is arranged in a direction intersecting with the incident direction of the beam and detects a beam that has passed through the mask member, and a processor that reconstructs an image using compressed sensing based on data regarding a relative moving amount between the mask member and the subject and detection signals of the plurality of pixels input from the detector, in which a size of the pattern of the mask member on the intersecting plane is smaller than a size of the plurality of pixels along the intersecting plane.
According to the one aspect, the size of the pattern of the mask member on the intersecting plane intersecting with the incident direction of the beam is set to be smaller than the size of the pixels of the detector along the intersecting plane. Then, the processor reconstructs an image through compressed sensing, based on the detection signals of the plurality of pixels arranged in the direction intersecting with the incident direction and the data regarding the relative moving amount between the mask member and the subject. By using detection signals obtained while the spatial transmittance is changed on a scale finer than the pixels, the amount of information obtained from the plurality of pixels can be increased. As a result, it is possible to accurately obtain a high-resolution image of the subject.
According to the present disclosure, it is possible to accurately obtain a high-resolution image of a subject.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, in the description, the same reference numeral is used for the same component or a component having the same function, and redundant description is omitted.
The detection optical system 5 includes a mask member 9, a detector 11, and a moving mechanism (conveying unit) 13. The mask member 9 is an optical member having a substantially rectangular shape that is arranged to be separated from the subject S in the z-axis direction and to face the subject S, and that extends in the x-axis direction along an xy plane (an intersecting plane intersecting with the incident direction of the electromagnetic wave). The mask member 9 has a structure that spatially modulates, on the xy plane, the electromagnetic wave that has been emitted from the illumination optical system 3 and has passed through the subject S, and transmits the electromagnetic wave. In the present embodiment, the mask member 9 has a plurality of mask pixels (pattern) 9a formed by two-dimensionally dividing the mask member 9 in the x-axis direction and the y-axis direction, each of which is a square region of the same size. The size of the plurality of mask pixels 9a is larger than the wavelength of the electromagnetic wave that passes through the subject S. A transmittance of light of each of the mask pixels 9a is set to one of two values (specifically, 0% or 100%), and the plurality of mask pixels 9a is formed so that the arrangement pattern of the two types of mask pixels 9a having different transmittances is irregular. Although the number of mask pixels 9a is not limited to a specific number, the number is, for example, 81 in the x-axis direction and 32 in the y-axis direction, for a total of 2,592.
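As a rough illustration only, the following Python sketch generates such an irregular two-valued transmittance pattern with 81 × 32 mask pixels; the random seed and the roughly even split between the 0% and 100% pixels are illustrative assumptions and are not specified by the embodiment.

```python
# Illustrative sketch (not part of the embodiment): an irregular two-valued mask pattern.
import numpy as np

rng = np.random.default_rng(seed=0)                # fixed seed only for reproducibility
mask_pixels = rng.integers(0, 2, size=(32, 81))    # rows: y direction (32), columns: x direction (81)
transmittance = mask_pixels.astype(float)          # 0 -> 0 % transmittance, 1 -> 100 % transmittance
print(transmittance.shape, transmittance.mean())
```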
The moving mechanism 13 includes a driving unit such as a stepping motor, a servo motor, or a geared motor therein, and has a function of linearly moving the mask member 9 in the x-axis direction. The moving mechanism 13 repeatedly moves the mask member 9 along an arrangement direction of the plurality of mask pixels 9a by a preset distance (for example, size of mask pixel 9a in x-axis direction). Furthermore, the moving mechanism 13 outputs data regarding a moving amount of the mask member 9 along the x-axis direction, to an image processing device 7.
The detector 11 is an element that is arranged to be separated from the mask member 9 in the z-axis direction and to face the subject S and the mask member 9, and that detects an intensity distribution of the electromagnetic wave that has been emitted from the illumination optical system 3 and has passed through the subject S and the mask member 9. The detector 11 includes a rectangular light receiving surface 11a along the xy plane on the subject S side, and a plurality of pixels 11b that is two-dimensionally divided and arranged in the x-axis direction and the y-axis direction (directions intersecting with the incident direction of the electromagnetic wave) on the light receiving surface 11a, each of which is a square region of the same size. Although the number of pixels 11b on the light receiving surface 11a is not limited to a specific number, the number is, for example, 4 in the x-axis direction and 4 in the y-axis direction, for a total of 16. Here, the size of the pixel 11b along the x-axis direction and the size of the pixel 11b along the y-axis direction are respectively larger than the size of the mask pixel 9a along the x-axis direction and the size of the mask pixel 9a along the y-axis direction; for example, each is eight times the corresponding size of the mask pixel 9a.
The illumination optical system 3 includes a light source device (light source) 15 and a lens 17. The light source device 15 irradiates the subject S with the electromagnetic wave in the z-axis direction. The light source device 15 emits visible light, terahertz waves, infrared light, ultraviolet light, X-rays, or gamma rays as the electromagnetic wave. As the light source device 15 according to the present embodiment, a device that emits visible light is used as an example. The lens 17 converts the electromagnetic wave emitted from the light source device 15 into a parallel wave B that propagates in parallel to the z-axis direction so as to cause the electromagnetic wave to enter the subject S.
The image processing device 7 is an arithmetic device that generates a high-resolution image of the subject S. The image processing device 7 is physically a computer or the like that includes a central processing unit (CPU) which is a processor, a random access memory (RAM) or a read only memory (ROM) which is a recording medium, a communication module, an input/output module, or the like. Note that the image processing device 7 may include a display, a keyboard, a mouse, a touch panel display, or the like as an input/output device or may include a data recording device such as a hard disk drive or a semiconductor memory. Furthermore, the image processing device 7 may include a plurality of computers. The image processing device 7 is configured to be able to receive data using wired communication or wireless communication from the detector 11 and the moving mechanism 13. Hereinafter, functions of the image processing device 7 will be described in detail.
The image processing device 7 receives input of detection signals O of the plurality of pixels 11b from the detector 11 at each timing when the mask member 9 is repeatedly moved K times (K is an arbitrary integer) by the moving mechanism 13, and acquires data regarding a relative moving amount between the mask member 9 and the subject S at the times of the first to K-th measurements. The data regarding the relative moving amount may be set in the image processing device 7 in advance or may be input from the moving mechanism 13. The image processing device 7 generates a reconstructed image that is a high-resolution image by reconstructing an image using compressed sensing, using the detection signals O1 to OK of the first to K-th measurements and the data regarding the relative moving amount of the first to K-th measurements.
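The measurement procedure above can be pictured with the short simulation below. It keeps the 4 × 4 detector and the 8 × 8 ratio between pixel and mask pixel of the embodiment, but the number of measurements, the random mask, and the subject intensity distribution are made-up assumptions; this is my own sketch of the measurement model, not code from the embodiment.

```python
# Illustrative forward model of the K measurements (sketch under stated assumptions).
import numpy as np

P = Q = 8          # subdivisions of one detector pixel along x and y (8x ratio of the embodiment)
M = N = 4          # detector pixels along x and y (4 x 4 detector of the embodiment)
K = 40             # number of measurements; an arbitrary choice

rng = np.random.default_rng(1)
mask = rng.integers(0, 2, size=(Q * N, P * M + K)).astype(float)  # extra columns allow x shifts
subject = rng.random((Q * N, P * M))                              # fine-grained intensity of light through the subject

signals = np.empty((K, N, M))
for k in range(K):
    window = mask[:, k:k + P * M]        # part of the mask in front of the detector at the k-th step
    modulated = window * subject         # intensity transmitted through each mask pixel and subject cell
    # sum the P x Q fine cells that belong to each detector pixel (n-th row, m-th column)
    signals[k] = modulated.reshape(N, Q, M, P).sum(axis=(1, 3))
```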
Here, a function of the image processing device 7 in a case where the number of pixels of the detector 11 is M in the x-axis direction and N in the y-axis direction (M and N are positive integers) and a reconstructed image including P×M pixels in the x-axis direction and Q×N pixels in the y-axis direction (P and Q are positive integers) is generated will be described. In this case, the reconstructed image is an image obtained by subdividing each pixel 11b of the detector 11 into P regions in the x-axis direction and Q regions in the y-axis direction. In the following description, a region obtained by dividing the plurality of pixels 11b is indicated by (m, n; p, q). This expression indicates the region in the p-th column and the q-th row among the square regions obtained by dividing the pixel in the m-th column (m-th in the x-axis direction) and the n-th row (n-th in the y-axis direction). Although the image processing device 7 has a function of generating a reconstructed image on the premise that the detector 11 detects the parallel wave, the image processing device 7 can similarly generate a reconstructed image by considering a magnification ratio estimated from a distance between components in a case where the illumination optical system 3 emits spherical waves.
In this case, a transmittance T1 (m, n; p, q) of the mask member 9 corresponding to a region (m, n; p, q) of the detector 11 at the time of a first measurement is expressed by the following formula (1) using a transmittance distribution T (x, y) of the mask member 9 on the xy plane;

T1 (m, n; p, q) = ∬Ωmask T (x, y) dxdy / ∬Ωmask dxdy   (1)
Here, Ωmask in the above formula means a region (plane) representing where in the mask member 9 the light entering the region (m, n; p, q) of the detector 11 passes through. The above formula (1) represents a value obtained by dividing a value of a surface integral of the distribution T (x, y) for all points (x, y) in Ωmask by an area of Ωmask. In other words, T1 (m, n; p, q) is an average value of a spatial transmittance of the region of the mask member that transmits the light entering the region (m, n; p, q) of the detector 11. Furthermore, a light intensity Iobs (m, n; p, q) of light that passes through a region Ωobs (m, n; p, q) on the subject S is expressed by the following formula (2) using a light intensity distribution Iobs (x, y) on the xy plane of light that passes through the subject S;
Iobs (m, n; p, q) = ∬Ωobs(m, n; p, q) Iobs (x, y) dxdy   (2)

Here, Ωobs (m, n; p, q) means a region on the subject S through which the light entering the region (m, n; p, q) of the detector 11 passes. In a case where the mask member 9 is moved by the moving mechanism 13, a transmittance Tk (m, n; p, q) at the time of a k-th measurement is expressed by the following formula (3) in consideration of the relative moving amount between the mask member 9 and the subject S at the k-th measurement;

Tk (m, n; p, q) = ∬Ωmask T (x − Δxk, y) dxdy / ∬Ωmask dxdy   (3)

Here, Δxk is the relative moving amount of the mask member 9 along the x-axis direction at the time of the k-th measurement (Δx1=0, so that formula (3) coincides with formula (1) at the first measurement).
Furthermore, the light intensity Iobs (m, n: p, q) of the light that passes through the region Ωobs (m, n: p, q) on the subject S is expressed by the following formula (4) using the light intensity distribution Iobs (x, y) on the xy plane of the light that passes through the subject S;
The detection signal Ok (m, n) of the pixel 11b in the m-th column and the n-th row of the detector 11 at the time of the k-th measurement is expressed by the following formula (5) as a sum of the light intensities of the light entering the divided regions in the pixel;

Ok (m, n) = Σp=1 to P Σq=1 to Q Tk (m, n; p, q)·Iobs (m, n; p, q)   (5)
When a one-dimensional vector in which the detection signals Ok (m, n) obtained by K times of measurements are arranged is assumed as a measurement vector y and a one-dimensional vector in which the light intensities Iobs (m, n; p, q) are arranged in order is assumed as a reconstructed image vector x, each element of the measurement vector y can be expressed as a linear sum of elements of the reconstructed image vector x from the above formula (5). Therefore, the measurement vector y can be expressed by the following formula (6) by using a transformation matrix D;

y = Dx   (6)
By using the above theory, the image processing device 7 obtains the transformation matrix D necessary for the calculation of the image reconstruction, by referring to the transmittance distribution T (x, y) of the mask, which is a known value, and the data regarding the relative moving amount between the mask member 9 and the subject S at each measurement. That is, an arrangement of the measurement vector y is set so that a j-th element yj of the measurement vector y becomes Ok (m, n) when j=(k−1) NM+(n−1) M+m. Furthermore, an arrangement of the reconstructed image vector x is set so that an i-th element xi of the reconstructed image vector x becomes Iobs (m, n; p, q) when i={(n−1) Q+q−1} MP+(m−1) P+p. The image processing device 7 calculates an element Di,j in an i-th column and a j-th row of the transformation matrix D, which is a two-dimensional matrix including NMPQ columns and KNM rows, using values ni, mi, pi, and qi that satisfy i={(ni−1) Q+qi−1} MP+(mi−1) P+pi and values kj, nj, and mj that satisfy j=(kj−1) NM+(nj−1) M+mj, according to the following formula (7);

Di,j = Tkj (mi, ni; pi, qi)·δni,nj·δmi,mj   (7)
By repeating the above calculation, the image processing device 7 can obtain the transformation matrix D. δn,m is a function that is called Kronecker delta (each of n and m is natural number), satisfies δn,m=1 in a case where n=m, and satisfies δn,m=0 in a case where n≠m.
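Continuing the simulation sketch above, the matrix D can be assembled exactly as described, with the index layouts j=(k−1)NM+(n−1)M+m and i={(n−1)Q+q−1}MP+(m−1)P+p (written 0-based in the code). The orientation y = Dx with measurements as rows is assumed, and the final assertion only checks the sketch against its own simulated data.

```python
# Sketch of assembling the transformation matrix D (K*N*M rows of measurements,
# N*M*P*Q columns of reconstructed-image elements). Reuses mask, subject, signals,
# and the size constants from the forward-model sketch above.
D = np.zeros((K * N * M, N * M * P * Q))
for k in range(K):
    window = mask[:, k:k + P * M]                        # shifted mask pattern at the k-th measurement
    for n in range(N):
        for m in range(M):
            j = k * N * M + n * M + m                    # measurement index
            for q in range(Q):
                for p in range(P):
                    i = (n * Q + q) * M * P + m * P + p  # image-element index
                    D[j, i] = window[n * Q + q, m * P + p]   # transmittance T_k(m, n; p, q)

x_true = subject.ravel()
y_meas = signals.ravel()
assert np.allclose(D @ x_true, y_meas)                   # consistency with y = D x
```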
Moreover, the image processing device 7 calculates the reconstructed image vector x, using the calculated transformation matrix D and the measurement vector y. In general, in a case where the reconstructed image vector x can be described as a linear sum as indicated by the following formula (8);

x = Σi ai·vi   (8)
using a basis vi and a coefficient ai, a one-dimensional vector a in which the coefficients ai are arranged is expressed by the following formula (9);

a = argmin a′ ||a′||1 subject to y = DVa′   (9)

Here, V is a matrix in which the basis vectors vi are arranged as columns, and the reconstructed image vector is obtained as x = Va.
Alternatively, the image processing device 7 may calculate the reconstructed image vector x by solving the generalized l1 optimization problem below.
For example, the image processing device 7 can calculate the reconstructed image vector x, using an alternating direction method of multipliers (ADMM) that is one type of compressed sensing algorithm described in Japanese Patent No. 7063378.
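As a stand-in for such a solver, the sketch below runs plain ISTA on the l1-regularized problem of minimizing (1/2)||y − Dx||² + λ||x||1 over x. It is not the ADMM of the cited patent, the regularization weight and iteration count are guesses, and the randomly generated test image of the earlier sketches is not actually sparse, so the snippet only illustrates the mechanics; in practice a sparsifying basis as in formula (8) would be applied.

```python
# Generic ISTA sketch for l1-regularized reconstruction; a stand-in, not the cited ADMM.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=1e-3, n_iter=500):
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # inverse Lipschitz constant of the data term
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (D.T @ (D @ x - y)), step * lam)
    return x

x_rec = ista(D, y_meas)                          # uses D and y_meas from the previous sketch
print(float(np.mean((x_rec - x_true) ** 2)))     # mean squared error on the simulated data
```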
The image processing device 7 converts the reconstructed image vector x reconstructed as described above into a two-dimensional reconstructed image. Then, the image processing device 7 outputs the two-dimensional reconstructed image to an output device such as a display or a touch panel display.
Workings and effects of the present embodiment will be described.
In the image acquisition device 1, the size of the mask pixel 9a of the mask member 9 on the xy plane intersecting with the incident direction of the parallel wave B is set to be smaller than the size of the pixel 11b of the detector 11 along the xy plane. Then, the image processing device 7 reconstructs a high-resolution image through compressed sensing, based on the detection signals O of the plurality of pixels 11b arranged in a direction intersecting with the incident direction and the data regarding the relative moving amount between the mask member 9 and the subject S. By using detection signals O obtained while the spatial transmittance is changed on a scale finer than the pixels 11b, the amount of information obtained from the plurality of pixels 11b can be increased. As a result, it is possible to accurately obtain the high-resolution image of the subject S.
Furthermore, the image processing device 7 reconstructs an image based on the plurality of detection signals O1 to OK of the plurality of pixels 11b obtained when the relative moving amount changes. In this case, the image can be reconstructed based on the plurality of detection signals O1 to OK obtained while changing the pattern of the change in the transmittance in the space, and it is possible to further improve the accuracy of the high-resolution image of the subject S.
Furthermore, the image acquisition device 1 further includes the moving mechanism 13 that moves the mask member 9 with respect to the subject S and outputs the data regarding the moving amount to the image processing device 7. In this case, it is possible to accurately obtain the high-resolution image of the subject S with a compact device configuration.
Furthermore, the image acquisition device 1 further includes the light source device 15 that emits the electromagnetic wave, and the size of the mask pixel 9a of the mask member 9 on the xy plane is set to be larger than the wavelength of the electromagnetic wave. With such a configuration, it is possible to accurately obtain the high-resolution image of the subject S using the detection signal O of the subject S obtained by irradiating the subject S with the electromagnetic wave.
Furthermore, the detector 11 includes the plurality of pixels 11b that is two-dimensionally arranged. In this case, it is possible to efficiently obtain a high-resolution image with high accuracy in a wide range.
A second embodiment of the present invention will be described.
The conveyance mechanism 13A includes a driving unit such as a stepping motor and a belt portion 19 having an arrangement surface on which a subject S is arranged, and has a function of linearly moving the subject S in an x-axis direction with respect to the mask member 9. The conveyance mechanism 13A repeatedly moves the subject S along an arrangement direction of a plurality of mask pixels 9a by a preset distance.
Furthermore, the conveyance mechanism 13A outputs data regarding a moving amount of the subject S along the x-axis direction, to an image processing device 7.
The image processing device 7 receives input of a detection signal O from the detector 11 at each timing when the subject S is repeatedly moved K times by the conveyance mechanism 13A, and acquires data regarding a relative moving amount between the mask member 9 and the subject S at the times of the first to K-th measurements. The data regarding the relative moving amount may be set in the image processing device 7 in advance or may be input from the conveyance mechanism 13A.
The image processing device 7 generates the reconstructed image as follows. Although the image processing device 7 has a function of generating a reconstructed image on the premise that the detector 11 detects the parallel wave, the image processing device 7 can similarly generate a reconstructed image by considering a magnification ratio estimated from a distance between components in a case where the illumination optical system 3 emits spherical waves.
In this case, a transmittance T (m, n; p, q) of the mask member 9 corresponding to a region (m, n; p, q) of the detector 11 is expressed by the following formula (10) using the transmittance distribution T (x, y) of the mask member 9 on the xy plane;

T (m, n; p, q) = ∬Ωmask T (x, y) dxdy / ∬Ωmask dxdy   (10)

Here, Ωmask means the region of the mask member 9 through which the light entering the region (m, n; p, q) of the detector 11 passes.
T (m, n; p, q) is an average value of a spatial transmittance of the region of the mask member that transmits the light entering the region (m, n: p, q) of the detector 11. Furthermore, a light intensity Iobs,1 (m, n: p, q) of light that passes through the region Ωobs (m, n: p, q) on the subject S at the first measurement is expressed by the following formula (11) using the light intensity distribution Iobs (x, y) on the xy plane of the light that passes through the subject S;
Iobs,1 (m, n; p, q) = ∬Ωobs(m, n; p, q) Iobs (x, y) dxdy   (11)

Here, Ωobs (m, n; p, q) means a region on the subject S through which the light entering the region (m, n; p, q) of the detector 11 passes at the first measurement. In a case where the subject S is moved by the conveyance mechanism 13A, a light intensity Iobs,k (m, n; p, q) at the time of a k-th measurement is expressed by the following formula (12) in consideration of the relative moving amount between the mask member 9 and the subject S at the k-th measurement;

Iobs,k (m, n; p, q) = ∬Ωobs(m, n; p, q) Iobs (x − Δxk, y) dxdy   (12)

Here, Δxk is the relative moving amount of the subject S along the x-axis direction at the time of the k-th measurement (Δx1=0).
A detection signal Ok (m, n) of the pixel 11b in the m-th column and the n-th row of the detector 11 at the time of the k-th measurement is expressed by the following formula (13) as a sum of the light intensities of the light entering the divided regions in the pixel;

Ok (m, n) = Σp=1 to P Σq=1 to Q T (m, n; p, q)·Iobs,k (m, n; p, q)   (13)
When a one-dimensional vector in which the detection signals Ok (m, n) obtained by K times of measurements are arranged in order is assumed as a measurement vector y and a one-dimensional vector in which the light intensities Iobs,k (m, n: p, q) are arranged in order for each row as one-dimensional data is assumed as a reconstructed image vector x, each element of the measurement vector y can be expressed as a linear sum of elements of the reconstructed image vector x from the above formula (13). Therefore, the measurement vector y can be expressed by the above formula (6) by using a transformation matrix D.
Using the above theory, the image processing device 7 obtains the transformation matrix D as in the first embodiment and calculates the reconstructed image vector x using the transformation matrix D and the measurement vector y. Then, the image processing device 7 converts the reconstructed image vector x into a two-dimensional reconstructed image and outputs the two-dimensional reconstructed image to an output device.
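The only change from the first-embodiment simulation sketch is which side is shifted: the mask stays fixed and the subject strip is stepped along the x-axis. The sketch below mirrors the earlier forward model under the same made-up sizes and random data.

```python
# Second-embodiment style forward model (illustrative): fixed mask, conveyed subject.
# Reuses rng and the size constants P, Q, M, N, K from the earlier sketches.
mask_fixed = rng.integers(0, 2, size=(Q * N, P * M)).astype(float)  # single pattern in front of the detector
subject_wide = rng.random((Q * N, P * M + K))                       # subject strip scanned through the device

signals_scan = np.empty((K, N, M))
for k in range(K):
    view = subject_wide[:, k:k + P * M]       # part of the subject in front of the detector at the k-th step
    signals_scan[k] = (mask_fixed * view).reshape(N, Q, M, P).sum(axis=(1, 3))
```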
According to the image acquisition device 1A according to the second embodiment, it is possible to accurately obtain a high-resolution image of the subject S. In particular, the conveyance mechanism 13A for moving the subject S with respect to the mask member 9 is included. In a case of such a configuration, a size of a detection optical system 5 including the detector 11 can be reduced, regardless of a size of the subject S.
The various embodiments of the present invention have been described above. However, the present invention is not limited to the above embodiments and may be modified or applied to other objects without changing the gist described in each claim.
According to such a first modification, it is possible to enlarge the parallel wave B from the subject S and detect the parallel wave B by the detector 11, and it is possible to enlarge and obtain the high-resolution image of the subject S. Furthermore, it is possible to obtain the high-resolution image of the subject S even in a case where a distance between the detector 11 and the subject S is increased, and a degree of freedom of inspection of the subject S is improved. Note that the lens 21 may be added to the image acquisition device 1 according to the first embodiment.
A conveyance mechanism 13A of an image acquisition device 1C has a function of linearly moving the subject S in the x-axis direction at a constant speed v. When measurement is started and conveyance of the subject S is started, the detector 113 repeatedly generates the detection signal O at each of a plurality of times t3 at which the subject S moves within a range detectable by the detector 113. On the other hand, the detector 112 repeatedly generates the detection signal O at each of a plurality of times t2=t3−d2,3/v, and the detector 111 repeatedly generates the detection signal O at each of a plurality of times t1=t2−d1,2/v.
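The staggered trigger times can be written out directly; the speed, detector spacings, and base times in the snippet below are made-up example values used only to illustrate the relation.

```python
# Trigger-time relation for the three staggered detector/mask pairs (illustrative numbers only).
v = 10.0                                   # conveyance speed of the subject, e.g. mm/s
d12, d23 = 5.0, 5.0                        # spacings between detectors 111-112 and 112-113, e.g. mm
t3 = [0.50 + 0.01 * k for k in range(5)]   # measurement times of detector 113, e.g. s
t2 = [t - d23 / v for t in t3]             # detector 112 measures earlier by d2,3 / v
t1 = [t - d12 / v for t in t2]             # detector 111 measures earlier again by d1,2 / v
print(t1, t2, t3)
```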
The image processing device 7 of the image acquisition device 1C generates a reconstructed image, as in the second embodiment, using the detection signals O obtained by the plurality of measurements and output from the three detectors 111, 112, and 113, and the data regarding the relative moving amounts between the mask members 91, 92, and 93 and the subject S at the time of each measurement.
According to the image acquisition device 1C with the above configuration, it is possible to efficiently acquire the plurality of detection signals O in a case where the pattern of the change of the transmittance in the space is changed, and a measurement time can be shortened.
As the detector 11, various types of detectors are used. For example, a bolometer array, a superconducting single photon detector (SSPD) array, an infrared radiation sensor array, a multi-pixel photon counter (MPPC), a photomultiplier tube (PMT) array, a phototube array, an avalanche photodiode (APD) array, a single photon avalanche diode (SPAD), a time of flight (ToF) sensor, an ultraviolet sensor array, a radiation sensor array, an electron/ion sensor array, a stripe detector, a photodiode (PD) array to which a scintillator is bonded, or the like is used.
Furthermore, the electromagnetic wave used to measure the subject S may be a terahertz wave, an infrared ray, an ultraviolet ray, an X-ray, or a gamma ray, in addition to visible light. Furthermore, the electromagnetic wave may be a particle beam of electrons, protons, neutrons, or the like. The mask member 9 used in that case is configured to be able to sufficiently modulate a transmission intensity of the electromagnetic wave or the particle beam at each wavelength. For example, in a case where the X-ray is used, the mask member 9 is formed by forming a hole in a metal plate of lead, iron, or the like having a thickness sufficient to shield the X-ray. In a case where the terahertz wave is used, the mask member 9 is formed by forming a hole in a thin metal film or a printed circuit board (PCB), for example.
In a case where the particle beam or the electromagnetic wave that can be detected is different from the particle beam or the electromagnetic wave to be observed, the detector 11 may include a portion that converts the particle beam or the electromagnetic wave. For example, in the first and second embodiments and the first and second modifications, a conversion unit having a finer structure than the pixel 11b may be provided in the detector 11, instead of the mask member 9, and the conversion unit may have the same function as the mask member 9. An X-ray scintillator may be provided as the conversion unit; structures obtained by making the scintillator finer may be randomly arranged in the detector 11, or a part of the scintillator may be spatially modified through laser processing. Furthermore, a configuration may be used in which the conversion unit having such a configuration is used in combination with the mask member 9.
A spatial modulation pattern of the mask pixels 9a of the mask member 9 may be irregular, such as a random pattern, or may have regularity, such as a Hadamard matrix.
The image processing device 7 may reconstruct an image by applying sparsity to the entire image of the subject S, or may increase a resolution of only a part of the image by reconstructing the image by applying the sparsity to only a region of interest across a plurality of pixels.
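One way to restrict the sparsity constraint to a region of interest is to apply the soft-thresholding step only to the elements of x inside that region. The sketch below modifies the earlier ISTA stand-in accordingly; the index subset chosen as the region of interest is arbitrary and purely illustrative.

```python
# Sketch: l1 penalty applied only to a region of interest (ROI); other elements are left unregularized.
# Reuses soft_threshold, D, and y_meas from the earlier sketches; the ROI choice is arbitrary.
roi = np.zeros(D.shape[1], dtype=bool)
roi[:64] = True                                      # arbitrary subset of reconstructed-image elements

def ista_roi(D, y, roi, lam=1e-3, n_iter=500):
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - step * (D.T @ (D @ x - y))           # gradient step on the data term
        x[roi] = soft_threshold(x[roi], step * lam)  # shrink only the ROI coefficients
    return x

x_roi = ista_roi(D, y_meas, roi)
```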
The mask member 9 may have a modulation pattern that is different for each pixel 11b. In this case, the movement between the mask member 9 and the subject S may be performed in subpixel units or in units of one or more pixels. Furthermore, the moving direction may be the x-axis direction, the y-axis direction, or both directions. Alternatively, the mask member 9 may have the same modulation pattern for each of the pixels 11b. In this case, at the time of measurement, the movement between the mask member 9 and the subject S may be performed in subpixel units. At this time, the moving direction may be the x-axis direction, the y-axis direction, or both directions.
The mask member 9 is not limited to a configuration in which the spatial transmittance changes between two values and may have a configuration in which the spatial transmittance changes among three or more values. In this case, the modulation pattern may be different for each pixel 11b or may be the same between the pixels 11b.
In the first embodiment described above, a configuration in which the plurality of mask members 9 is arranged to be overlapped may be adopted. The spatial modulation patterns of the plurality of mask members 9 may be the same or different. In this case, the moving mechanism 13 moves one or more mask members 9 among the plurality of mask members 9. Furthermore, a configuration combined with the conveyance mechanism 13A that moves the subject S may be used.
The detector 11 is not limited to have the configuration having the pixels 11b two-dimensionally arranged and may be a one-dimensional line sensor having a plurality of pixels that is one-dimensionally arranged. For example, the detector 11 may have a configuration in which the plurality of pixels is arranged along the y-axis direction, a configuration in which the plurality of pixels is arranged along the x-axis direction (moving direction of mask member 9 or subject S), or a configuration in which the plurality of pixels is arranged along a direction inclined with respect to the x-axis direction and the y-axis direction.
In the embodiments described above, it is preferable for the processor to reconstruct an image based on the plurality of the detection signals of the plurality of pixels obtained when the relative moving amount changes. In this case, the image can be reconstructed based on the plurality of detection signals obtained while changing the pattern of the change in the transmittance in the space, and it is possible to further improve the accuracy of the high-resolution image of the subject.
Furthermore, in the embodiments described above, it is preferable to further include the moving mechanism that moves the mask member with respect to the subject and outputs the data regarding the moving amount to the processor. In this case, it is possible to accurately obtain the high-resolution image of the subject with a compact device configuration.
Furthermore, in the above embodiments, it is preferable to further include the conveyance mechanism that moves the subject with respect to the mask member. In this case, a size of the optical system including the detector can be reduced, regardless of the size of the subject.
Furthermore, in the embodiments described above, it is preferable to further include a light source that emits an electromagnetic wave as the beam, and it is preferable that the size of the pattern of the mask member on the intersecting plane be larger than the wavelength of the electromagnetic wave. With such a configuration, it is possible to accurately obtain the high-resolution image of the subject using the detection signal of the subject obtained by irradiating the subject with the electromagnetic wave.
Furthermore, in the embodiments described above, it is preferable to further include the lens member that is arranged between the subject and the detector and forms an image on the detector using the beam from the subject. In this way, even if the distance between the subject and the detector is long, it is possible to enlarge the beam from the subject and to detect the beam by the detector, and it is possible to accurately obtain the high-resolution image of the subject.
Furthermore, in the embodiments described above, it is preferable to include the plurality of pairs of the mask members and the detectors. With such a configuration, the plurality of detection signals in a case where the pattern of the change of the transmittance in the space is changed can be efficiently acquired.
Furthermore, in the embodiments described above, it is preferable for the detector to include the plurality of pixels that is two-dimensionally arranged. In this case, it is possible to efficiently obtain a high-resolution image with high accuracy in a wide range.
Moreover, in the embodiments described above, it is preferable for the processor to apply the sparsity only to the region of interest across the plurality of pixels and reconstruct an image. In this case, it is possible to efficiently obtain a high-resolution image with high accuracy within a limited range.
The image acquisition device according to the embodiment is [1] “an image acquisition device including a mask member that has a pattern that changes a transmittance of some beams from a subject on an intersecting plane intersecting with an incident direction of the beam, a detector including a plurality of pixels that is arranged in a direction intersecting with the incident direction of the beam and detects a beam that has passed through the mask member, and a processor that reconstructs an image using compressed sensing based on data regarding a relative moving amount between the mask member and the subject and detection signals of the plurality of pixels input from the detector, in which a size of the pattern of the mask member on the intersecting plane is smaller than a size of the plurality of pixels along the intersecting plane”.
The image acquisition device according to the embodiment may be [2] “the image acquisition device according to [1], in which the processor reconstructs the image based on the plurality of the detection signals of the plurality of pixels obtained when the relative moving amount changes”.
The image acquisition device according to the embodiment may be [3] “the image acquisition device according to [1] or [2], further including a moving mechanism that moves the mask member with respect to the subject and outputs data regarding the moving amount to the processor”.
The image acquisition device according to the embodiment may be [4] “the image acquisition device according to [1] or [2], further including a conveyance mechanism that moves the subject with respect to the mask member”.
The image acquisition device according to the embodiment may be [5] “the image acquisition device according to any one of [1] to [4], further including a light source that emits an electromagnetic wave as the beam, in which the size of the pattern of the mask member on the intersecting plane is larger than a wavelength of the electromagnetic wave”.
The image acquisition device according to the embodiment may be [6] “the image acquisition device according to any one of [1] to [5], further including a lens member that is arranged between the subject and the detector and forms an image of the beam from the subject on the detector”.
The image acquisition device according to the embodiment may be [7] “the image acquisition device according to any one of [1] to [6], including a plurality of pairs of the mask members and the detectors”.
The image acquisition device according to the embodiment may be [8] “the image acquisition device according to any one of [1] to [7], in which the detector includes the plurality of pixels that is two-dimensionally arranged”.
The image acquisition device according to the embodiment may be [9] “the image acquisition device according to any one of [1] to [8], in which the processor applies sparsity only to a region of interest across a plurality of pixels and reconstructs an image”.
Number | Date | Country | Kind |
---|---|---|---|
2023-003715 | Jan 2023 | JP | national |