This application claims priority from Korean Patent Application No. 10-2019-0133269, filed on Oct. 24, 2019 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
Example embodiments of the present disclosure relate to a hyperspectral image sensor and a hyperspectral image pickup apparatus including the hyperspectral image sensor, and more particularly, to a miniaturized hyperspectral image sensor, which achieves a small size by arranging a dispersion optical device in an image sensor, and a hyperspectral image pickup apparatus including the miniaturized hyperspectral image sensor.
Hyperspectral imaging is a technique for simultaneously analyzing an image of an object and measuring a continuous light spectrum for each point in the image. With the hyperspectral imaging technique, the light spectrum of each portion of an object may be measured more quickly than with existing spot spectroscopy. Because each pixel in an image of an object contains spectral information, various applications of remotely capturing an image of an object and determining the properties and characteristics of the object may be implemented. For example, hyperspectral imaging may be used for ground surveying using drones, satellites, aircraft, and the like, and for analyzing agricultural site conditions, mineral distribution, surface vegetation, and pollution levels. In addition, use of hyperspectral imaging in various fields such as food safety, skin/face analysis, authentication recognition, and biological tissue analysis has been investigated.
In hyperspectral imaging, light passing through a narrow aperture, as in a point scan method (i.e., whisker-broom method) or a line scan method (i.e., push-broom method), is dispersed by a grating or the like to simultaneously obtain an image of an object and a spectrum. Recently, a snapshot method of combining a band pass filter array or a tunable filter with an image sensor and simultaneously capturing images for wavelength bands has also been introduced.
However, when the point scan method or the line scan method is used, it is difficult to miniaturize an image pickup apparatus because a mechanical configuration for scanning an aperture is required. When the snapshot method is used, the measurement time is long and the resolution of the image is lowered.
One or more example embodiments provide miniaturized hyperspectral image sensors.
One or more example embodiments also provide miniaturized hyperspectral image pickup apparatuses including miniaturized hyperspectral image sensors.
Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of example embodiments of the disclosure.
According to an aspect of an example embodiment, there is provided a hyperspectral image sensor including a solid-state imaging device including a plurality of pixels disposed two-dimensionally, and configured to sense light, and a dispersion optical device disposed to face the solid-state imaging device at an interval, and configured to cause chromatic dispersion of incident light such that the incident light is separated based on wavelengths of the incident light and is incident on different positions, respectively, on a light sensing surface of the solid-state imaging device.
The hyperspectral image sensor may further include a transparent spacer disposed on the light sensing surface of the solid-state imaging device, wherein the dispersion optical device is disposed on an upper surface of the transparent spacer opposite to the solid-state imaging device.
The dispersion optical device may include a periodic grating structure or an aperiodic grating structure that is configured to cause chromatic dispersion, or a one-dimensional structure, a two-dimensional structure, or a three-dimensional structure including materials having different refractive indices.
The size of the dispersion optical device may correspond to all of the plurality of pixels of the solid-state imaging device.
The dispersion optical device may be configured to cause chromatic dispersion and focus the incident light on the solid-state imaging device.
The hyperspectral image sensor may further include a spacer disposed on an upper surface of the dispersion optical device, and a planar lens disposed on an upper surface of the spacer, wherein the planar lens is configured to focus incident light on the solid-state imaging device.
According to another aspect of an example embodiment, there is provided a hyperspectral image pickup apparatus including a solid-state imaging device including a plurality of pixels disposed two-dimensionally and configured to sense light, a dispersion optical device disposed to face the solid-state imaging device at an interval, and configured to cause chromatic dispersion of incident light such that the incident light is separated based on a plurality of wavelengths of the incident light and is incident at different positions, respectively, on a light sensing surface of the solid-state imaging device, and an image processor configured to process image data provided from the solid-state imaging device to extract hyperspectral images for the plurality of wavelengths.
The hyperspectral image pickup apparatus may further include a transparent spacer disposed on the light sensing surface of the solid-state imaging device, wherein the dispersion optical device is disposed on an upper surface of the transparent spacer opposite to the solid-state imaging device.
The dispersion optical device may include a periodic grating structure or an aperiodic grating structure, or a one-dimensional structure, a two-dimensional structure, or a three-dimensional structure including materials having different refractive indices.
The size of the dispersion optical device may correspond to all of the plurality of pixels of the solid-state imaging device.
The hyperspectral image pickup apparatus may further include an objective lens configured to focus incident light on the light sensing surface of the solid-state imaging device.
The dispersion optical device may be configured to cause chromatic dispersion and focus incident light on the solid-state imaging device.
The hyperspectral image pickup apparatus may further include a spacer disposed on an upper surface of the dispersion optical device, and a planar lens disposed on an upper surface of the spacer, wherein the planar lens is configured to focus incident light on the solid-state imaging device.
The image processor may be further configured to extract a hyperspectral image based on the image data provided from the solid-state imaging device and a point spread function previously calculated for each of the plurality of wavelengths.
The image processor may be further configured to extract edge information without dispersion through edge reconstruction of a dispersed RGB image input from the solid-state imaging device, obtain spectral information in a gradient domain based on dispersion of the extracted edge information, and reconstruct a hyperspectral image based on spectral information of gradients.
The image processor may be further configured to obtain a spatially aligned hyperspectral image ialigned by solving a convex optimization problem given by:

ialigned = argmin_i ‖ΩΦi − j‖₂² + α1‖∇xyi‖₁ + β1‖∇xy∇λi‖₁,

where Ω is a response characteristic of the solid-state imaging device, Φ is a point spread function, j is dispersed RGB image data input from the solid-state imaging device, i is a vectorized hyperspectral image, ∇xy is a spatial gradient operator, ∇λ is a spectral gradient operator, and α1 and β1 are coefficients.
The image processor may be further configured to solve the convex optimization problem based on an alternating direction method of multipliers (ADMM) algorithm.
The image processor may be further configured to reconstruct the spectral information from data of the spatially aligned hyperspectral image by solving an optimization problem to extract a stack ĝxy of spatial gradients for each wavelength given by:

ĝxy = argmin_gxy ‖ΩΦgxy − ∇xyj‖₂² + α2‖∇λgxy‖₁ + β2‖∇xygxy‖₂²,

where gxy is a spatial gradient close to a spatial gradient ∇xyj of an image obtained by the solid-state imaging device, and α2 and β2 are coefficients.
The image processor may be further configured to reconstruct a hyperspectral image iopt from the stack of spatial gradients by solving an optimization problem given by:

iopt = argmin_i ‖ΩΦi − j‖₂² + α3‖Wxy ⊙ (∇xyi − ĝxy)‖₂² + β3‖Δλi‖₂²,

where Δλ is a Laplacian operator for a spectral image i along a spectral axis, Wxy is an element-wise weighting matrix that determines the confidence level of gradients estimated in the previous stage, and α3 and β3 are coefficients.
The image processor may further include a neural network structure configured to repeatedly perform an optimization process by using a gradient descent method given by:

I(l+1) = ((1 − ετ)E − εΦᵀΦ)I(l) + εΦᵀJ + ετS(I(l)),

where I(l) and V(l) are solutions for the l-th half-quadratic splitting (HQS) iteration, E is an identity matrix, ε is a gradient descent step size, τ is a penalty parameter, and a condition V(l) = S(I(l)) with a network function S(⋅) is applied.
The neural network structure of the image processor may be further configured to receive image data J from the solid-state imaging device, obtain an initial value I(0) of a hyperspectral image based on the image data J, iteratively perform the optimization process with respect to the equation based on a gradient descent method, and output a final hyperspectral image based on the iterative optimization process.
The image processor may be further configured to obtain a prior term, which is the third term of the equation, by using a neural network.
The neural network may include a U-net neural network.
The neural network may further include an encoder including a plurality of pairs of a convolution layer and a pooling layer, and a decoder including a plurality of pairs of an up-sampling layer and a convolution layer, wherein a number of pairs of the up-sampling layer and the convolution layer of the decoder is equal to a number of pairs of the convolution layer and the pooling layer of the encoder, and wherein a skip connection method is applied between the convolution layer of the encoder and the convolution layer of the decoder, which have a same data size.
The neural network may further include an output layer configured to perform soft thresholding, based on an activation function, on the output of the decoder.
According to another aspect of an example embodiment, there is provided a hyperspectral image pickup apparatus including a solid-state imaging device including a plurality of pixels disposed two-dimensionally and configured to sense light, a first spacer disposed on the light sensing surface of the solid-state imaging device, a dispersion optical device disposed to face the solid-state imaging device at an interval, and configured to cause chromatic dispersion of incident light such that the incident light is separated based on a plurality of wavelengths of the incident light and is incident at different positions, respectively, on a light sensing surface of the solid-state imaging device, the dispersion optical device being disposed on an upper surface of the first spacer opposite to the solid-state imaging device, a second spacer disposed on an upper surface of the dispersion optical device, a planar lens disposed on an upper surface of the second spacer, the planar lens being configured to focus incident light on the solid-state imaging device, and an image processor configured to process image data provided from the solid-state imaging device to extract hyperspectral images for the plurality of wavelengths.
The above and/or other aspects, features, and advantages of example embodiments will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Reference will now be made in detail to example embodiments, which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the example embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the example embodiments are merely described below, by referring to the figures, to explain aspects. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. For example, the expression, “at least one of a, b, and c,” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, or all of a, b, and c.
Hereinafter, a hyperspectral image sensor and a hyperspectral image pickup apparatus including the hyperspectral image sensor will be described in detail with reference to the accompanying drawings. In the following drawings, the size of each layer may be exaggerated for convenience and clarity of explanation. Furthermore, the example embodiments are merely described below, by referring to the figures, to explain aspects of the present description, and the example embodiments may have different forms. In the layer structure described below, when a constituent element is described as being disposed “above” or “on” another constituent element, the constituent element may not only directly contact the upper/lower/left/right side of the other constituent element, but may also be disposed above/below/left/right of the other constituent element in a non-contact manner.
The solid-state imaging device 111 senses light and is configured to convert the intensity of incident light into an electrical signal. The solid-state imaging device 111 may be a general image sensor including a plurality of pixels arranged in two dimensions to sense light. For example, the solid-state imaging device 111 may include a charge coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor.
The spacer 112 disposed on a light incident surface of the solid-state imaging device 111 provides a constant gap between the dispersion optical device 113 and the solid-state imaging device 111. The spacer 112 may include a transparent dielectric material such as silicon oxide (SiO2), silicon nitride (SiNx), or hafnium oxide (HfO2), or may include a transparent polymer material such as polymethyl methacrylate (PMMA) or polyimide (PI). In addition, the spacer 112 may include air if there is a support structure for maintaining a constant gap between the dispersion optical device 113 and the solid-state imaging device 111.
The dispersion optical device 113 disposed on an upper surface of the spacer 112 is configured to intentionally cause chromatic dispersion. For example, the dispersion optical device 113 may include a periodic one-dimensional grating structure or two-dimensional grating structure configured to have chromatic dispersion characteristics. The dispersion optical device 113 may be configured in various patterns.
The dispersion optical device 113 may be disposed to face the entire area of the solid-state imaging device 111 at a regular interval. For example, the size of the dispersion optical device 113 may be selected to entirely cover an effective area in which a plurality of pixels of the solid-state imaging device 111 are arranged. The dispersion optical device 113 may have the same dispersion characteristic in the entire area of the dispersion optical device 113. However, embodiments are not limited thereto. For example, the dispersion optical device 113 may have a plurality of areas having different dispersion characteristics. For example, the dispersion optical device 113 may have at least two areas having different dispersion angles for the same wavelength of light.
The solid-state imaging device 111 of the hyperspectral image sensor 110 may be disposed on the focal plane of the objective lens 101. In addition, the hyperspectral image sensor 110 may be disposed such that the dispersion optical device 113 faces the objective lens 101. The hyperspectral image sensor 110 may be disposed such that the dispersion optical device 113 is positioned between the solid-state imaging device 111 and the objective lens 101. Then, the objective lens 101 may focus incident light L on a light sensing surface of the solid-state imaging device 111. In this case, the incident light L that passes through the objective lens 101 and enters the hyperspectral image sensor 110 is separated for each wavelength by the dispersion optical device 113. Light λ1, λ2, and λ3 separated for each wavelength pass through the spacer 112 and are incident on different positions on the light sensing surface of the solid-state imaging device 111.
For example, when incident light is not dispersed, a reference image L0 is formed on the solid-state imaging device 111, whereas light of first to third wavelengths dispersed by the dispersion optical device 113 forms a first image L1, a second image L2, and a third image L3 at positions shifted from the reference image L0.
The difference in the number of pixels between the positions of the first image L1, the second image L2, and the third image L3 and the position of the reference image L0 on the solid-state imaging device 111 may be determined by a diffraction angle for each wavelength of light by the dispersion optical device 113, the thickness of the spacer 112, the pixel pitch of the solid-state imaging device 111, and the like. For example, as the diffraction angle by the dispersion optical device 113 increases, the thickness of the spacer 112 increases, or the pixel pitch of the solid-state imaging device 111 decreases, the difference in the number of pixels between the positions of the first image L1, the second image L2, and the third image L3 and the position of the reference image L0 may increase. The diffraction angle for each wavelength of light by the dispersion optical device 113, the thickness of the spacer 112, and the pixel pitch of the solid-state imaging device 111 are values that may be known in advance through measurement.
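As a non-limiting numerical illustration of this relationship, the Python sketch below estimates the per-wavelength image shift in pixels from a grating period, a spacer thickness, and a pixel pitch. The first-order grating relation, the neglect of refraction inside the spacer, and all parameter values are simplifying assumptions for illustration only.

```python
import numpy as np

def shift_in_pixels(wavelength_m, grating_period_m, spacer_thickness_m, pixel_pitch_m):
    """Estimate the lateral image shift, in pixels, for one wavelength.

    Assumes first-order diffraction (sin(theta) = wavelength / period) and a
    straight ray through the spacer; refraction in the spacer is ignored.
    """
    theta = np.arcsin(wavelength_m / grating_period_m)  # diffraction angle
    lateral_shift_m = spacer_thickness_m * np.tan(theta)
    return lateral_shift_m / pixel_pitch_m

# Hypothetical values: 1 um grating period, 50 um spacer, 1 um pixel pitch.
for lam in (450e-9, 550e-9, 650e-9):
    print(f"{lam * 1e9:.0f} nm -> {shift_in_pixels(lam, 1e-6, 50e-6, 1e-6):.1f} px")
```

Consistent with the description above, the computed shift grows with the diffraction angle and the spacer thickness, and shrinks with the pixel pitch.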
Therefore, when the first image L1, the second image L2, and the third image L3 are detected on the solid-state imaging device 111, an image for each spectrum, that is, a hyperspectral image, may be extracted in consideration of the diffraction angle for each wavelength of light by the dispersion optical device 113, the thickness of the spacer 112, and the pixel pitch of the solid-state imaging device 111. For example, the image processor 120 may process image data provided from the solid-state imaging device 111, thereby extracting the first image L1 formed by light having the first wavelength, the second image L2 formed by light having the second wavelength, and the third image L3 formed by light having the third wavelength. Although only images for three wavelengths are described here for convenience, hyperspectral images for a larger number of wavelengths may be extracted in the same manner.
Hereinafter, a process of calculating a hyperspectral image by using an image obtained from the solid-state imaging device 111 will be described in more detail.
Image data obtained by the solid-state imaging device 111 may be expressed by Equation 1 below.
J(p,c)=∫Ω(c,λ)I(Φλ(p),λ)dλ [Equation 1]
Here, J(p, c) denotes linear RGB image data obtained by the solid-state imaging device 111, c denotes an RGB channel, and Ω(c, λ) denotes a transfer function obtained by coding the response of the solid-state imaging device 111 to the channel c and the wavelength λ. In addition, Φλ(p) denotes a nonlinear dispersion spatially changed by the dispersion optical device 113 and is modeled as a shift operator at each pixel p for each wavelength λ. Rewriting this model in matrix-vector form gives Equation 2 below.
j=ΩΦi [Equation 2]
Here, j denotes a vectorized linear RGB image, i denotes a vectorized hyperspectral image, and Ω denotes an operator for converting spectral information into RGB. Φ denotes a matrix representing the direction and magnitude of dispersion per pixel.
In the example embodiment, j may be obtained through the solid-state imaging device 111, and Ω may be obtained from the response characteristics of the solid-state imaging device 111, that is, the optical characteristics of a color filter of the solid-state imaging device 111 and the response characteristics of a photosensitive layer of the solid-state imaging device 111. Φ may also be obtained from a point spread function for the optical path from one point on the object to the solid-state imaging device 111 through the objective lens 101, the dispersion optical device 113, and the spacer 112. Therefore, in consideration of Ω and Φ, a hyperspectral image for each wavelength may be calculated using an RGB image obtained from the solid-state imaging device 111.
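For illustration, the image formation model of Equations 1 and 2 may be sketched in Python as follows, under the simplifying assumptions that the dispersion Φ shifts each wavelength channel horizontally by a known integer number of pixels and that Ω reduces to a 3×Λ response matrix; all array shapes and values are hypothetical.

```python
import numpy as np

def forward_model(hyperspectral, shifts, response):
    """Simulate j = Omega * Phi * i for a dispersed RGB capture.

    hyperspectral: (H, W, L) spectral image i
    shifts:        (L,) per-wavelength pixel shifts (the role of Phi)
    response:      (3, L) RGB response of the imaging device (the role of Omega)
    returns:       (H, W, 3) dispersed RGB image j
    """
    H, W, L = hyperspectral.shape
    dispersed = np.zeros_like(hyperspectral)
    for k in range(L):
        # Phi: shift spectral channel k horizontally by shifts[k] pixels.
        dispersed[:, :, k] = np.roll(hyperspectral[:, :, k], shifts[k], axis=1)
    # Omega: project the shifted spectral channels onto the RGB channels.
    return dispersed @ response.T

# Illustrative example: 25 spectral channels shifted by 0..24 pixels.
i_true = np.random.rand(64, 64, 25)
j = forward_model(i_true, np.arange(25), np.random.rand(3, 25))
print(j.shape)  # (64, 64, 3)
```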
However, input data for obtaining the hyperspectral image is only the RGB image obtained through the solid-state imaging device 111. The RGB image obtained through the solid-state imaging device 111 includes only superimposed dispersion information and spectral signatures at edges of the image. Therefore, reconstructing the hyperspectral image is a problem for which a plurality of solutions may exist. In order to solve this problem, according to the example embodiment, first, clear edge information without dispersion may be obtained through edge reconstruction of an input dispersed RGB image; next, spectral information may be calculated in a gradient domain by using the dispersion of extracted edges; and finally, a hyperspectral image may be reconstructed using sparse spectral information of gradients.
For example, by solving a convex optimization problem as shown in Equation 3 below, a spatially aligned hyperspectral image ialigned may be calculated from an input dispersed RGB image j.

ialigned = argmin_i ‖ΩΦi − j‖₂² + α1‖∇xyi‖₁ + β1‖∇xy∇λi‖₁ [Equation 3]
Here, ∇xy denotes a spatial gradient operator and ∇λ denotes a spectral gradient operator. α1 and β1 denote coefficients. The first term of Equation 3 denotes a data residual of an image formation model shown in Equation 2, and the remaining term is a prior term. A first prior term is a traditional total variation (TV) term that ensures the sparsity of spatial gradients, and a second prior term is a cross-channel term. The cross-channel term is used to calculate the difference between unnormalized gradient values of adjacent spectral channels, assuming that spectral signals are locally smooth in adjacent channels. Therefore, spatial alignment between the spectral channels may be obtained using the cross-channel term. Equation 3 may be solved through, for example, L1 regularization or L2 regularization using an alternating direction method of multipliers (ADMM) algorithm.
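A minimal sketch of the objective of Equation 3 is given below, assuming finite-difference gradient operators and a generic forward operator (for example, the forward_model sketch above); a practical implementation would minimize this objective with an ADMM solver rather than merely evaluating it.

```python
import numpy as np

def objective_eq3(i, j, forward, alpha1, beta1):
    """Evaluate the Equation 3 objective for a candidate spectral image i.

    forward(i) applies the image formation model (Omega * Phi) of Equation 2.
    """
    data = np.sum((forward(i) - j) ** 2)        # data residual term
    gx = np.diff(i, axis=1)                     # horizontal spatial gradient
    gy = np.diff(i, axis=0)                     # vertical spatial gradient
    tv = np.abs(gx).sum() + np.abs(gy).sum()    # total variation prior
    # Cross-channel prior: gradient differences of adjacent spectral channels.
    cross = np.abs(np.diff(gx, axis=2)).sum() + np.abs(np.diff(gy, axis=2)).sum()
    return data + alpha1 * tv + beta1 * cross

# Toy check with an identity "camera" on a small 5-channel image.
i = np.random.rand(8, 8, 5)
print(objective_eq3(i, np.zeros_like(i), lambda x: x, alpha1=0.1, beta1=0.1))
```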
Using this method, a hyperspectral image without edge dispersion may be obtained. However, even in this case, the aligned spectral information in the spatially aligned hyperspectral image ialigned may not be completely accurate. In order to locate edges more accurately, a multi-scale edge detection algorithm may be applied after projecting the aligned hyperspectral image ialigned onto RGB channels via the transfer function Ω, instead of applying an edge detection algorithm directly to spectral channels in the aligned hyperspectral image ialigned.
The extracted edge information may be used to reconstruct spectral information. In an image without dispersion, spectral information is directly projected to RGB values, and thus, a spectrum may not be traced back from a given input. However, when there is dispersion, information about spectral intensity distribution along the edge may be obtained using a spatial gradient. Therefore, in order to reconstruct the spectral information, spatial gradients in dispersed areas near the edge may be considered. First, a spatial gradient gxy close to spatial gradients ∇xyj of an image obtained by the solid-state imaging device 111 may be found, and a stack ĝxy of spatial gradients for each wavelength may be calculated as in Equation 4 below.

ĝxy = argmin_gxy ‖ΩΦgxy − ∇xyj‖₂² + α2‖∇λgxy‖₁ + β2‖∇xygxy‖₂² [Equation 4]
Here, α2 and β2 denote coefficients. The first term of Equation 4 is a data term representing an image formation model of Equation 1 in a gradient domain, and the remaining two terms are prior terms relating to gradients. A first prior term is equivalent to the spectral sparsity of gradients used in the spatial alignment stage of Equation 3, and enforces sparse changes of the gradients along a spectral dimension. A second prior term imposes smooth changes of the gradients in a spatial domain to remove artifacts of the image.
If a spectral signature exists only along edges, the optimization problem of Equation 4 may be solved considering only the pixels of the edges. For example, the optimization problem of Equation 4 may be solved through L1 regularization or L2 regularization using an ADMM algorithm.
After a stack ĝxy of spatial gradients is obtained for each wavelength, the gradient information may be used as strong spectral cues for reconstructing a hyperspectral image iopt. For example, in order to calculate the hyperspectral image iopt from the stack ĝxy of the spatial gradients, an optimization problem such as Equation 5 below may be solved.

iopt = argmin_i ‖ΩΦi − j‖₂² + α3‖Wxy ⊙ (∇xyi − ĝxy)‖₂² + β3‖Δλi‖₂² [Equation 5]
Here, α3 and β3 denote coefficients. Δλ denotes a Laplacian operator for a spectral image i along a spectral axis, and Wxy denotes an element-wise weighting matrix that determines the confidence level of gradients estimated in the previous stage. In order to consider the directional dependency of spectral cues, the matrix Wxy, which is a confidence matrix, may be configured based on the previously extracted edge information and dispersion direction. For example, for non-edge pixels, high confidence is assigned to gradient values of 0. For edge pixels, different confidence levels are assigned to horizontal and vertical components, respectively. Then, gradient directions similar to the dispersion direction have a high confidence value. In particular, a confidence value Wk∈{x,y}(p,λ), which is an element of the matrix Wxy for the horizontal and vertical gradient components of a pixel p at the wavelength λ, is expressed by Equation 6 below.
In Equation 5, a first data term may minimize errors in the image formation model of Equation 2, and a second data term may minimize the differences between the gradient ∇xyi and the gradient ĝxy. In addition, the prior terms smooth a spectral curvature. The stability of spectral estimation may be improved by smoothing the spectral curvature along different wavelengths. To this end, Equation 5 may be solved using, for example, a conjugate gradient method.
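Because every term of Equation 5 is quadratic in i, the problem reduces to a linear system that a conjugate gradient solver can handle. The sketch below illustrates the pattern on a toy problem, with random stand-ins for the operators and with the weighting matrix Wxy omitted for brevity; the sizes and matrices are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Solve min_i ||A i - b||^2 + beta * ||D i||^2 by conjugate gradient on the
# normal equations (A^T A + beta * D^T D) i = A^T b. Here A stands in for the
# combined operator Omega * Phi and D for the spectral Laplacian of Equation 5.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((120, n))
D = np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)  # 1-D Laplacian stencil
b = rng.standard_normal(120)
beta = 0.1

normal_op = LinearOperator(
    (n, n), matvec=lambda x: A.T @ (A @ x) + beta * (D.T @ (D @ x))
)
i_opt, info = cg(normal_op, A.T @ b, maxiter=500)
print("converged" if info == 0 else f"cg info = {info}", i_opt.shape)
```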
The above-described process may be implemented by the image processor 120 through numerical computation. For example, the image processor 120 receives image data J having an RGB channel from the solid-state imaging device 111. The image processor 120 may be configured to extract clear edge information without dispersion through edge reconstruction of an input dispersed RGB image by numerically solving the optimization problem of Equation 3. In addition, the image processor 120 may be configured to calculate spectral information in the gradient domain by using the dispersion of the extracted edges by numerically solving the optimization problem of Equation 4. In addition, the image processor 120 may be configured to reconstruct a hyperspectral image by using sparse spectral information of gradients by numerically solving the optimization problem of Equation 5.
The above optimization process may be performed using a neural network. First, Equation 2 is more simply expressed as Equation 7 below.
J=ΦI [Equation 7]
In Equation 7, Φ denotes the product of Ω and Φ described in Equation 2, J denotes a vectorized linear RGB image, and I denotes a vectorized hyperspectral image. Therefore, Φ in Equation 7 may be regarded as a point spread function considering the response characteristics of the solid-state imaging device 111.
When an unknown prior term such as those described in Equation 3 is simply represented as R(⋅), a hyperspectral image Î∈ℝWHΛ×1 to be reconstructed may be expressed by Equation 8 below. Here, W, H, and Λ denote the width of a spectral image, the height of the spectral image, and the number of wavelength channels of the spectral image, respectively.

Î = argmin_I ‖J − ΦI‖₂² + R(I) [Equation 8]
In addition, by introducing an auxiliary variable V∈ℝWHΛ×1 and converting Equation 8 into a constrained optimization problem, Equation 9 below is obtained.

(Î, V̂) = argmin_{I,V} ‖J − ΦI‖₂² + R(V) subject to I = V [Equation 9]
By converting Equation 9 into an unconstrained optimization problem by using a half-quadratic splitting (HQS) method, Equation 10 below is obtained.

(Î, V̂) = argmin_{I,V} ‖J − ΦI‖₂² + τ‖I − V‖₂² + R(V) [Equation 10]
Here, τ denotes a penalty parameter. Equation 10 may be solved by dividing Equation 10 into Equation 11 and Equation 12 below.

I(l+1) = argmin_I ‖J − ΦI‖₂² + τ‖I − V(l)‖₂² [Equation 11]

V(l+1) = argmin_V τ‖I(l+1) − V‖₂² + R(V) [Equation 12]
Here, I(l) and V(l) denote the solutions of the l-th HQS iteration.
In order to reduce the amount of computation, Equation 11 may be solved using a gradient descent method. In this way, Equation 11 may be represented as Equation 13 below.

I(l+1) = ((1 − ετ)E − εΦᵀΦ)I(l) + εΦᵀJ + ετS(I(l)) [Equation 13]
Here, E denotes an identity matrix, ε denotes a gradient descent step size, and a condition V(l) = S(I(l)), in which the auxiliary variable of Equation 12 is given by a network function S(⋅), is applied. Because an initial value of the hyperspectral image may be set to I(0) = ΦᵀJ, the second term of Equation 13 corresponds to the product of the gradient descent step size and the initial value.
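A compact Python sketch of this iteration is given below, with a simple soft-thresholding operator standing in for the network function S(⋅); the measurement matrix, step size, and penalty parameter are illustrative assumptions.

```python
import numpy as np

def soft(x, theta):
    """Soft thresholding, standing in here for the prior network S(.)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def hqs_gradient_descent(Phi, J, eps=1e-3, tau=0.1, theta=1e-3, iters=100):
    """Iterate Equation 13: a gradient step on the data term of Equation 11
    with the auxiliary variable V(l) = S(I(l)) from the prior of Equation 12."""
    I = Phi.T @ J                      # initial value I(0) from the captured data
    for _ in range(iters):
        V = soft(I, theta)             # prior step: V(l) = S(I(l))
        grad = Phi.T @ (Phi @ I - J) + tau * (I - V)
        I = I - eps * grad             # gradient descent step of Equation 13
    return I

# Illustrative example with a random measurement matrix Phi.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((90, 300)) / np.sqrt(300)
I_true = soft(rng.standard_normal(300), 1.0)   # sparse-ish ground truth
print(hqs_gradient_descent(Phi, Phi @ I_true).shape)  # (300,)
```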
The solution of Equation 13 may be obtained through a neural network. For example, the neural network structure may include an input unit 10 that receives image data J from the solid-state imaging device 111, an initial value calculator 20 that obtains an initial value of a hyperspectral image based on the image data J, an operation unit 60, and an output unit 70.
The initial value calculator 20 provides the calculated initial value to the operation unit 60. In addition, the initial value calculator 20 is configured to calculate the second term of Equation 13 and to provide the operation unit 60 with a result of the calculation, that is, the product of a gradient descent step size and an initial value of a hyperspectral image. The operation unit 60 may include a first operation unit 30 configured to calculate the first term of Equation 13, a second operation unit 40 configured to calculate a prior term, which is the third term of Equation 13, and an adder 50 configured to add the output of the first operation unit 30, the output of the second operation unit 40, and the product of the gradient descent step size and the initial value of the hyperspectral image provided from the initial value calculator 20. The output of the adder 50 is repeatedly fed back to the first operation unit 30 and recalculated. Finally, after L iterations, the output of the adder 50 may be provided to the output unit 70.
The image processor 120 may be configured to include such a neural network structure. Therefore, the above-described operation process may be performed in the image processor 120. For example, when RGB image data obtained from the solid-state imaging device 111 is input to the image processor 120 including the neural network structure, an initial value I(0) of a hyperspectral image may be obtained based on the image data, the optimization process of Equation 13 may be iteratively performed based on a gradient descent method, and a final hyperspectral image may be output after the iterations.
The prior term expressed by Equation 12 may be represented in the form of a proximal operator and may be solved through a neural network. For example, a network function S(⋅) for the hyperspectral image may be defined as V(l+1) = S(I(l+1)), and the network function S(⋅) may be solved in the form of a neural network having soft thresholding.
For example, the second operation unit 40 may include an input unit 41 that receives data from the first operation unit 30, an encoder 42 that generates a feature map based on the input data, a decoder 43 that restores the feature of data based on the feature map, and an output unit 44 that outputs restored data. The encoder 42 may include a plurality of pairs of a convolution layer and a pooling layer. Although three pairs of a convolution layer and a pooling layer are described here as an example, embodiments are not limited thereto.
The decoder 43 restores a feature by performing up-convolution. For example, the decoder 43 may include a plurality of pairs of up-sampling layers and convolution layers. Although three pairs of up-sampling layers and convolution layers are described here as an example, embodiments are not limited thereto.
In addition, in order to mitigate the loss of information of a previous layer as the depth of a neural network increases, the neural network structure of the second operation unit 40 may use a skip connection method. For example, when the decoder 43 performs up-convolution, the decoder 43 may reflect data that has not undergone a pooling process in the pooling layer of the encoder 42, that is, data skipping the pooling process. This skip connection may be made between the convolution layer of the encoder 42 and the convolution layer of the decoder 43, which have the same data size.
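A minimal PyTorch-style sketch of such an encoder-decoder with a skip connection is shown below; the two-level depth (rather than the three pairs described above), channel counts, and layer choices are illustrative assumptions rather than the exact structure of the second operation unit 40.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """U-net-style prior network: (convolution, pooling) encoder pairs, an
    (up-sampling, convolution) decoder, and a skip connection between the
    encoder and decoder layers that share the same spatial size."""

    def __init__(self, channels=25):
        super().__init__()
        self.enc1 = nn.Conv2d(channels, 32, 3, padding=1)
        self.enc2 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec2 = nn.Conv2d(64 + 32, 32, 3, padding=1)  # 32 extra from skip
        self.out = nn.Conv2d(32, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        e1 = self.act(self.enc1(x))               # encoder level 1
        e2 = self.act(self.enc2(self.pool(e1)))   # encoder level 2 (pooled)
        d2 = self.up(e2)                          # decoder: up-sample
        d2 = torch.cat([d2, e1], dim=1)           # skip connection (same size)
        return self.out(self.act(self.dec2(d2)))

v = MiniUNet()(torch.zeros(1, 25, 64, 64))
print(v.shape)  # torch.Size([1, 25, 64, 64])
```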
The output of the decoder 43 is input to the output layer 44. The output layer 44 performs soft thresholding, based on an activation function, on the output of the decoder 43 to achieve local gradient smoothness. Then, two convolution layers are used to output a final result V(l) for the prior term. A convolution filter having a size of 3×3×Λ may be used as the convolution layer of the output layer 44. For example, Λ may be set to 25, but is not necessarily limited thereto.
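The soft-thresholding operation performed by the output layer may be sketched as follows; treating the threshold as a fixed scalar rather than a learned parameter is a simplifying assumption.

```python
import torch

def soft_threshold(x: torch.Tensor, theta: float) -> torch.Tensor:
    """Shrink values toward zero by theta and zero out the rest, which
    promotes the local gradient smoothness described above."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

x = torch.tensor([-0.5, -0.1, 0.0, 0.2, 0.9])
print(soft_threshold(x, 0.2))  # values within +/-0.2 are zeroed; others shrink
```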
As described above, in the hyperspectral image pickup apparatus 100 according to the example embodiment, chromatic dispersion is intentionally caused using the dispersion optical device 113, and the solid-state imaging device 111 obtains a dispersed image having an edge separated for each wavelength. Since the degree of chromatic dispersion by the dispersion optical device 113 is known in advance, a point spread function considering the dispersion optical device 113 may be calculated for each wavelength, and based on the calculated point spread function, an image for each wavelength of the entire image may be inversely calculated through the above-described optimization process. Accordingly, by using the dispersion optical device 113, it is possible to miniaturize the hyperspectral image sensor 110 and the hyperspectral image pickup apparatus 100 including the hyperspectral image sensor 110. In addition, by using the hyperspectral image sensor 110 and the hyperspectral image pickup apparatus 100, a hyperspectral image may be obtained with only one shot.
The hyperspectral image pickup apparatus 100 described above, as well as hyperspectral image pickup apparatuses 200 and 300 according to other example embodiments, are illustrated in the accompanying drawings. Compared to the hyperspectral image pickup apparatus 100, the hyperspectral image pickup apparatuses 200 and 300 differ in, for example, the arrangement of the dispersion optical device, the spacers, and the lenses, such as by further including the second spacer and the planar lens described above for focusing incident light on the solid-state imaging device 111.
While the above-described hyperspectral image sensor and the hyperspectral image pickup apparatus including the hyperspectral image sensor have been particularly shown and described with reference to example embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims. The example embodiments should be considered in a descriptive sense only and not for purposes of limitation.