The present disclosure relates to an information processing method and an imaging system.
Compressed sensing is a technique that reconstructs more data than the observed data by assuming that the distribution of the observation target data is sparse in a certain space, such as frequency space. Compressed sensing can be applied, for example, to an imaging device that reconstructs an image containing more information from a small amount of observation data. An imaging device to which compressed sensing is applied generates a reconstructed image, through an arithmetic operation, from an image in which the spectral information of a target is compressed. As a result, various effects can be obtained, such as higher resolution, wavelength expansion, shorter imaging time, or higher sensitivity.
U.S. Pat. No. 9,599,511 discloses an example in which compressed sensing technology is applied to a hyperspectral camera that acquires images in wavelength bands, each having a narrow bandwidth. According to the technology disclosed in U.S. Pat. No. 9,599,511, a hyperspectral camera that generates high-resolution, multi-wavelength images can be realized.
In a case where reconstructed images are generated from images in which spectral information is compressed, it is desired to estimate reconstruction accuracy using information processing methods with relatively low computational cost.
In one general aspect, the techniques disclosed here feature an information processing method performed using a computer, including: acquiring a compressed image, in which information regarding wavelength bands is compressed, and identifying, based on at least one of (1) pixel values of pixels included in the compressed image, or (2) a size of a structure of a target represented by the compressed image, at least one first pixel that is included in the compressed image and that is estimated to cause a reconstruction error in images corresponding to the respective wavelength bands, the images being reconstructed from the compressed image.
The comprehensive or specific aspects of the present disclosure may be realized by a system, apparatus, method, integrated circuit, computer program, or recording medium such as a computer-readable recording disk, or in any combination of a system, apparatus, method, integrated circuit, computer program, and recording medium. Examples of the computer readable recording medium include a nonvolatile recording medium such as a compact disc read-only memory (CD-ROM). The apparatus may be formed by one or more devices. In a case where the apparatus is formed by two or more devices, the two or more devices may be disposed in one apparatus or may be disposed in two or more separate apparatuses in a divided manner. In this specification and the claims, an “apparatus” may refer not only to one apparatus but also to a system formed by apparatuses.
According to the technology of the present disclosure, in a case where reconstructed images are generated from images in which spectral information is compressed, reconstruction accuracy can be estimated using information processing methods with relatively low computational cost.
It should be noted that general or specific embodiments may be implemented as a system, a method, an integrated circuit, a computer program, a storage medium, or any selective combination thereof.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
In the present disclosure, all or some of the circuits, units, apparatuses, members, or portions, or all or some of the functional blocks of a block diagram, may be executed by, for example, one or more electronic circuits including a semiconductor device, a semiconductor integrated circuit (IC), or a large-scale integration circuit (LSI). The LSI or the IC may be integrated onto one chip or may be formed by combining two or more chips. For example, functional blocks other than a storage device may be integrated onto one chip. The terms LSI and IC are used here; however, the term to be used may change depending on the degree of integration, and the term system LSI, very large-scale integration circuit (VLSI), or ultra-large-scale integration circuit (ULSI) may be used. A field-programmable gate array (FPGA) or a reconfigurable logic device that allows reconfiguration of interconnection inside the LSI or setup of a circuit section inside the LSI can also be used for the same purpose, the FPGA and the reconfigurable logic device being programmed after the LSI is manufactured.
Furthermore, the functions or operations of all or some of the circuits, units, devices, members, or portions can be executed through software processing. In this case, the software is recorded in one or more non-transitory recording media, such as a read-only memory (ROM), an optical disc, or a hard disk drive, and when the software is executed by a processing device (a processor), the function specified by the software is executed by the processing device and peripheral devices. The system or the apparatus may include the one or more non-transitory recording media in which the software is recorded, the processing device (the processor), and any hardware devices needed, such as an interface.
In the present disclosure, “light” refers not only to visible light (wavelengths from about 400 nm to about 700 nm) but also to electromagnetic waves including ultraviolet rays (wavelengths from about 10 nm to about 400 nm) and infrared rays (wavelengths from about 700 nm to about 1 mm).
In the following, examples of embodiments of the present disclosure will be described. Note that any one of the embodiments to be described below is intended to represent a general or specific example. The numerical values, shapes, constituent elements, arrangement positions and connection forms of the constituent elements, steps, and the order of steps are examples, and are not intended to limit the present disclosure. Among the constituent elements of the following embodiments, constituent elements that are not described in independent claims representing the most generic concept will be described as optional constituent elements. Each diagram is a schematic diagram and is not necessarily precisely illustrated. Furthermore, in each diagram, substantially the same or similar constituent elements are denoted by the same reference signs. Redundant description may be omitted or simplified.
Before describing the embodiments of the present disclosure, the findings underlying the present disclosure will be described.
In the field of imaging, the process of classifying targets present in images by type is used in fields such as factory automation (FA) and medicine. Examples of features used in the classification process include spectral information regarding the targets as well as the shapes of the targets. Since hyperspectral cameras can acquire multi-wavelength images that include a large amount of spectral information on a pixel basis, the use of hyperspectral cameras is expected to expand in the future. Although research and development of hyperspectral cameras has been conducted worldwide for many years, their use has been limited for the following reasons. For example, line-scan hyperspectral cameras provide high spatial and wavelength resolution but take longer to capture images due to line scanning. Snapshot hyperspectral cameras can capture images in a single shot but often lack sufficient sensitivity and spatial resolution.
In contrast, it has recently been reported that the sensitivity and spatial resolution of hyperspectral cameras can be improved by reconstructing images on the basis of sparsity. Sparsity is the property that the elements characterizing an observation target are present in a certain space, such as frequency space, in a sparse manner. Sparsity is widely observed in the natural world. The use of sparsity makes it possible to efficiently observe necessary information. Sparsity-based sensing technology is called compressed sensing. Compressed sensing technology can be used to construct highly efficient devices and systems.
As a specific application example of compressed sensing technology, a hyperspectral camera with improved wavelength resolution has been proposed, as disclosed in U.S. Pat. No. 9,599,511, for example. Such hyperspectral cameras are equipped, for example, with optical filters that have irregular optical transmission characteristics with respect to space, wavelength, or both. Such optical filters are also referred to as “encoding masks”. An encoding mask is disposed along an optical path of light incident on an image sensor and transmits the incident light from the target so as to have region-dependent optical transmission characteristics. This process performed by the encoding mask is referred to as “encoding”. The spectral information regarding the target is compressed in the image of the target acquired through the encoding mask. The image is referred to as a “compressed image”. Mask information indicating the optical transmittance characteristics of the encoding mask is stored in advance in the memory device as a reconstruction table.
The processing device of the imaging device performs a reconstruction process on the basis of the compressed image and the reconstruction table. The reconstruction process retrieves more information, such as higher-resolution image information or image information covering more wavelengths, than the compressed image contains. The reconstruction table may be, for example, data representing the spatial distribution of the optical response characteristics of the encoding mask. The reconstruction process based on such a reconstruction table can generate reconstructed images, which correspond to the respective wavelength bands included in the target wavelength range, from a single compressed image. In the following description, the set of reconstructed images corresponding to the respective wavelength bands is also referred to as a "hyperspectral image".
Since compressed sensing technology generates reconstructed images from compressed images, the reconstruction accuracy of the reconstructed images will be low in a case where information necessary for the computation is missing from the compressed images. Since low reconstruction accuracy affects analysis accuracy, a method has been reported, as a countermeasure, that estimates the reconstruction accuracy of the reconstructed images by comparing the reconstructed images with correct images generated using a method different from compressed sensing technology. However, this method generates reconstructed images from compressed images in order to estimate reconstruction accuracy, which leads to the issue of high computational cost.
The inventor considered the above issue and conceived of the information processing method according to the embodiments of the present disclosure, which can estimate, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from compressed images. In this information processing method, the reconstruction accuracy is estimated on the basis of the lack of information in a compressed image itself, which correlates to the reconstruction accuracy. Thus, the reconstruction accuracy can be estimated at a relatively low computational cost without needing to generate reconstructed images from compressed images.
In the following, first, an imaging system that generates reconstructed images from compressed images will be described. Next, a method for estimating reconstruction accuracy from compressed images will be described.
The filter array 110 in the present embodiment is an array of translucent filters arranged in rows and columns. The filters include different kinds of filters having different spectral transmittances from each other, that is, whose optical transmittances have different wavelength dependencies from each other. The filter array 110 modulates the intensity of incident light on a wavelength basis and outputs the resulting light. This process performed by the filter array 110 will be referred to as "encoding", and the filter array 110 will also be referred to as an "encoding element".
The optical system 140 includes at least one lens.
The filter array 110 may be disposed so as to be spaced apart from the image sensor 160.
The image sensor 160 is a monochrome light detector having light detection devices (also referred to as "pixels" in this specification) arranged two-dimensionally. The image sensor 160 may be, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or an infrared array sensor. Each light detection device includes, for example, a photodiode. The image sensor 160 is not necessarily a monochrome sensor. For example, a color sensor may be used. A color sensor may include, for example, red (R) filters transmitting red light, green (G) filters transmitting green light, and blue (B) filters transmitting blue light. A color sensor may further include IR filters that transmit infrared light. Moreover, a color sensor may include transparent filters that transmit all of red, green, and blue light. The use of a color sensor can increase the amount of information regarding wavelengths and improve the reconstruction accuracy of the hyperspectral image 20. A wavelength region as an acquisition target may be freely determined. The wavelength region is not limited to the visible wavelength region and may also be the ultraviolet wavelength region, the near-infrared wavelength region, the mid-infrared wavelength region, or the far-infrared wavelength region.
The image processing apparatus 200 is a computer including one or more processors and one or more storage media, such as a memory. The image processing apparatus 200 generates data of reconstructed images 20W1, 20W2, . . . , 20WN on the basis of the compressed image 10 acquired by the image sensor 160.
In this manner, the optical transmittance of each region varies with wavelength. Thus, the filter array 110 allows a large amount of a certain wavelength range component of incident light to pass therethrough but does not allow a large portion of another wavelength range component of incident light to pass therethrough. For example, the transmittance of light of k wavelength bands out of the N wavelength bands may be greater than 0.5, and the transmittance of light of the other N−k wavelength bands may be less than 0.5, where k is an integer that satisfies 2≤k<N. If the incident light is white light, which includes all the visible-light wavelength components equally, the filter array 110 modulates, on a region basis, the incident light into light having discrete intensity peaks at multiple wavelengths, and superposes and outputs the light of these wavelengths.
Some of all the cells, for example, half of the cells, may be replaced with transparent regions. Such transparent regions allow light of each of the wavelength bands W1, W2, . . . , WN included in the target wavelength range W to pass therethrough at a similarly high transmittance, for example, 80% or higher. In such a configuration, the transparent regions may be disposed, for example, in a checkerboard manner. That is, the regions having optical transmittance that varies with wavelength and the transparent regions may be arranged in an alternating manner in the two array directions of the regions in the filter array 110.
Data representing such a spatial distribution of the spectral transmittance of the filter array 110 is acquired beforehand on the basis of design data or by calibration based on actual measurement, and is stored in a storage medium of the image processing apparatus 200. This data is used in the arithmetic processing to be described later.
The filter array 110 may be formed using, for example, multi-layer films, organic materials, diffraction grating structures, metal-containing microstructures, or metasurfaces. In a case where multi-layer films are used, for example, dielectric multi-layer films or multi-layer films including a metal layer may be used. In this case, the cells are formed such that at least the thicknesses, materials, or stacking orders of the layers of the multi-layer film differ from cell to cell. As a result, spectral characteristics that differ from cell to cell can be realized. The use of a multi-layer film can realize a sharp rising edge and a sharp falling edge in spectral transmittance. A configuration using organic materials can be realized by causing different cells to contain different pigments or dyes or by causing different cells to have different stacks of layers of materials. A configuration using diffraction grating structures can be realized by causing different cells to have structures with different diffraction pitches or different depths. A configuration using a metal-containing microstructure can utilize spectroscopy based on the plasmon effect. A metasurface can be fabricated by microprocessing dielectric materials into sizes smaller than the wavelength of incident light, so that the refractive index for the incident light is spatially modulated. Alternatively, incident light may be encoded by directly processing the pixels included in the image sensor 160, without using the filter array 110.
From the above, it can be said that the imaging device 100 has light receiving regions having optical response characteristics different from one another. In a case where the imaging device 100 is equipped with the filter array 110 including filters and where the filters have irregularly different optical transmission characteristics from one another, the light receiving regions may be realized by the image sensor 160 with the filter array 110 that is disposed near or directly on the image sensor 160. In this case, the optical response characteristics of the light receiving regions are determined on the basis of the optical transmission characteristics of the respective filters included in the filter array 110.
Alternatively, in a case where the imaging device 100 is not equipped with the filter array 110, the light receiving regions may be realized, for example, by the image sensor 160 having pixels directly processed to have irregularly different optical response characteristics from one another. In this case, the optical response characteristics of the light receiving regions are determined on the basis of the optical response characteristics of the respective pixels included in the image sensor 160.
The above multi-layer films, organic materials, diffraction grating structures, metal-containing microstructures, or metasurfaces can encode incident light in a case where they are in a configuration in which spectral transmittance is modulated so as to vary with position in a two-dimensional plane. Thus, the above multi-layer films, organic materials, diffraction grating structures, metal-containing microstructures, or metasurfaces need not be in a configuration in which filters are disposed in an array.
Next, an example of signal processing performed by the image processing apparatus 200 will be described. The image processing apparatus 200 reconstructs a hyperspectral image 20, which is a multi-wavelength image, on the basis of the compressed image 10 output from the image sensor 160 and characteristics of a transmittance spatial distribution for each wavelength of the filter array 110. In this case, “multi-wavelength” refers to, for example, more wavelength ranges than 3-color wavelength ranges, which are RGB wavelength ranges, acquired by normal color cameras. The number of such wavelength ranges may be, for example, any number between 4 and about 100. The number of such wavelength ranges will be referred to as the “number of bands”. Depending on applications, the number of bands may exceed 100.
Data to be obtained is the data of the hyperspectral image 20, and the data will be denoted by f. If the number of bands is N, f is data obtained by integrating image data f1, f2, . . . , fN of the N bands. In this case, suppose that the horizontal direction of the image is the x-direction, and the vertical direction of the image is the y-direction. When the number of pixels in the x-direction of the image data to be obtained is m, and the number of pixels in the y-direction is n, each of the image data f1, f2, . . . , fN has n×m pixel values. Thus, the data f has n×m×N elements. In contrast, the data g of the compressed image 10 acquired through encoding and multiplexing by the filter array 110 has n×m elements.
The data g can be expressed by the following Eq. (1):

g = Hf  (1)
In Eq. (1), f represents the data of the hyperspectral image expressed as a one-dimensional vector. Each of f1, f2, . . . , and fN has n×m elements. Thus, the vector on the right side is strictly a one-dimensional vector having n×m×N rows and one column. The data g of the compressed image is calculated as a one-dimensional vector having n×m rows and one column. A matrix H represents a conversion in which individual components f1, f2, . . . , fN of the vector f are encoded and intensity-modulated using encoding information that varies on a wavelength band basis, and are then added to one another. Thus, H denotes a matrix having n×m rows and n×m×N columns. Eq. (1) can also be expressed as follows.
g = (pg11 . . . pg1m . . . pgn1 . . . pgnm)^T = H(f1 . . . fN)^T,
where pgij denotes the pixel value in the i-th row and j-th column of the compressed image 10.
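The action of the matrix H can also be expressed without building H explicitly: each band of f is intensity-modulated by the per-band transmittance of the encoding mask, and the modulated bands are summed into a single image. The following is a minimal sketch of this forward model; the array shapes and the use of NumPy are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

n, m, N = 4, 4, 5            # image height, width, and number of bands (toy sizes)
f = np.random.rand(N, n, m)  # hyperspectral data cube (unknown in practice)
T = np.random.rand(N, n, m)  # per-band optical transmittance of the encoding mask
g = (T * f).sum(axis=0)      # encode (intensity-modulate) each band, then sum: n x m image
```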
When the vector g and the matrix H are given, it seems that f can be calculated by solving an inverse problem of Eq. (1). However, the number of elements (n×m×N) of the data f to be obtained is greater than the number of elements (n×m) of the acquired data g; thus, this problem is an ill-posed problem and cannot be solved as is. Thus, the image processing apparatus 200 uses the redundancy of the images included in the data f and uses a compressed sensing method to obtain a solution. Specifically, the data f to be obtained is estimated by solving the following Eq. (2):

f′ = arg min_f {‖g − Hf‖² + τΦ(f)}  (2)
In this case, f′ denotes the estimated data of f. The first term in the braces of the equation above represents the amount of shift between the estimation result Hf and the acquired data g, that is, a so-called residual term. In this case, the sum of squares is treated as the residual term; however, an absolute value, a root-sum-square value, or the like may be treated as the residual term. The second term in the braces is a regularization term or a stabilization term. Eq. (2) means obtaining f that minimizes the sum of the first term and the second term. The function in the braces in Eq. (2) is called the evaluation function. The image processing apparatus 200 can cause the solution to converge through a recursive iterative operation and can calculate the f that minimizes the evaluation function as the final solution f′.
The first term in the braces of Eq. (2) refers to a calculation for obtaining the sum of squares of the differences between the acquired data g and Hf, which is obtained by converting f in the estimation process using the matrix H. The second term Φ(f) is a constraint for the regularization of f and is a function that reflects sparse information regarding the estimated data. This function has the effect of making the estimated data smooth and stable. The regularization term can be expressed using, for example, the discrete cosine transform (DCT), wavelet transform, Fourier transform, or total variation (TV) of f. For example, in a case where total variation is used, stabilized estimated data can be acquired in which the effect of noise in the observation data g is suppressed. The sparsity of the target 70 in the space of each regularization term differs with the texture of the target 70. A regularization term for which the texture of the target 70 becomes sparser in the space of the regularization term may be selected. Alternatively, multiple regularization terms may be included in the calculation. τ is a weighting factor. The greater the weighting factor τ, the greater the amount of reduction of redundant data and the higher the compression rate. The smaller the weighting factor τ, the weaker the convergence to the solution. The weighting factor τ is set to an appropriate value with which f converges to a certain degree and is not compressed too much.
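As a minimal runnable sketch of the minimization in Eq. (2), the following uses an l1-norm regularization term Φ(f) = ‖f‖1 and the iterative shrinkage-thresholding algorithm (ISTA), a standard recursive iterative operation for compressed sensing. The choice of l1 rather than the DCT, wavelet, Fourier, or TV regularization named above, and all names and parameter values, are illustrative assumptions.

```python
import numpy as np

def ista(g, H, tau=0.1, n_iter=500):
    """Minimize ||g - Hf||^2 + tau * ||f||_1 by proximal gradient descent."""
    step = 1.0 / (2.0 * np.linalg.norm(H, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ f - g)              # gradient of the residual (first) term
        z = f - step * grad                         # gradient step
        f = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold (prox of the l1 term)
    return f

# Toy usage: n x m = 16 compressed pixels, N = 3 bands, so H is 16 x 48.
rng = np.random.default_rng(0)
H = rng.random((16, 48))
g = rng.random(16)
f_est = ista(g, H)
```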
Through the above-described process, the hyperspectral image 20 can be reconstructed from the compressed image 10 acquired by the image sensor 160. Details of the method for reconstructing the hyperspectral image 20 are disclosed in U.S. Pat. No. 9,599,511.
2.1. Method for Estimating Reconstruction Error Factors from Compressed Image
Next, a method for estimating reconstruction error factors from a compressed image will be described. The following cases (A) to (C) are reconstruction error factors 101 in the compressed image 10 that cause a reconstruction error 102 in the hyperspectral image 20.
(A) In a case where the pixel values of some pixels included in the compressed image 10 are saturated or where the pixel values of some pixels included in the compressed image 10 are low relative to the dark noise, the reconstruction error 102 occurs in the hyperspectral image 20.
(B) In a case where the structural units of the target are small relative to the compressed image 10, this violates the assumption that “the image is smooth” in the reconstruction operation, so that the reconstruction error 102 occurs in the hyperspectral image 20.
(C) In a case where the encoding information of the filter array 110 used to acquire the compressed image 10 is different from the encoding information of the filter array 110 at the time of shipment, the reconstruction error 102 occurs in the hyperspectral image 20 due to the inconsistency of these two pieces of encoding information. In this specification, the encoding information of the filter array 110 at the time of shipment is the encoding information of the filter array 110 obtained through calibration prior to shipment.
In the information processing method according to the present embodiment, the reconstruction accuracy is estimated by determining whether or not at least one pixel included in the compressed image 10 corresponds to the reconstruction error factors (A) to (C). The reconstruction accuracy can be estimated at a relatively low computational cost since there is no need to generate the hyperspectral image 20 from the compressed image 10. The reconstruction accuracy may be estimated before the reconstruction operation for generating the hyperspectral image 20 or in parallel with the reconstruction operation.
In a case where the compressed image 10 has a reconstruction error factor 101, the reconstruction error propagates not only to the pixels corresponding to the reconstruction error factor 101 but also to their surrounding pixels. The reason for this is that, in the reconstruction operation, the evaluation function expressed by Eq. (2) causes the solution to converge to a small value under the assumption that "the image is smooth". In the second term Φ included in the evaluation function expressed by Eq. (2), for each of the pixels included in the image, the sum of the differences between the pixel value of the pixel and the pixel values of the pixels directly above, below, to the left, and to the right of the pixel is calculated. This means converging to a solution in which the differences in pixel value between adjacent pixels are smaller. Thus, when some pixels are saturated, as in the case of the reconstruction error factor (A), the reconstruction error propagates to their surrounding pixels. The same is true for the reconstruction error factors (B) and (C).
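One plausible reading of the four-neighbor difference term described above is a total-variation-like penalty; the following sketch computes such a penalty for a two-dimensional image, counting each adjacent pixel pair once. It illustrates the smoothness assumption and is not necessarily the exact form used in the reconstruction operation.

```python
import numpy as np

def smoothness_penalty(img):
    """Sum of absolute pixel-value differences between horizontally and
    vertically adjacent pixels (small when the image is smooth)."""
    vertical = np.abs(np.diff(img, axis=0)).sum()    # differences with the pixel below
    horizontal = np.abs(np.diff(img, axis=1)).sum()  # differences with the pixel to the right
    return vertical + horizontal
```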
Estimating the reconstruction error factor 101 and notifying the user or the imaging system of the pixels corresponding to the reconstruction error factor 101 in the compressed image 10 helps improve the image capturing environment, such as the illumination and exposure time, thereby resolving the reconstruction error factor 101. The reconstruction error factor 101 can basically be determined on a pixel-by-pixel basis. However, since the image is assumed to be smooth in the reconstruction operation, the reconstruction error propagates to the surrounding pixels. With this taken into account, the pixels corresponding to the reconstruction error factor 101 may be reported. The pixels to be reported may be the following (a) to (c).
In addition to the colored highlighting described above, position information regarding the pixels corresponding to the reconstruction error factors 101, together with the reconstruction error factors 101 themselves, may be output as text data. Alternatively, the settings of another device in the imaging system, such as the exposure time of a camera and the illumination conditions, may be automatically adjusted in order to eliminate the reconstruction error factors 101.
In the following, each structural element of the imaging system will be described in detail.
The image processing apparatus 200 includes a processing circuit 210, a memory 212, and a storage device 220. The processing circuit 210 controls the operation of the imaging device 100, the storage device 220, and the display device 300. The details of the reconstruction accuracy estimation operation performed by the processing circuit 210 will be described later. A computer program executed by the processing circuit 210 is stored in the memory 212, such as a read-only memory (ROM) or a random access memory (RAM). The processing circuit 210 and the memory 212 may be integrated on one circuit board or disposed on separate circuit boards. The processing circuit 210 may be distributed across circuits. The processing circuit 210, the memory 212, or both may be located remotely from the other structural elements and connected to them via a wired or wireless communication network.
The storage device 220 includes one or more storage media. Each storage medium may be any storage medium, such as a semiconductor memory, a magnetic storage medium, or an optical storage medium, for example. The storage device 220 stores a reconstruction table. The reconstruction table is an example of encoding information indicating the optical transmission characteristics of the filter array 110, which functions as an encoding mask, and is acquired through calibration before shipment. The reconstruction table may be, for example, tabular data representing the matrix H in Eq. (2).
The display device 300 displays an input user interface (UI) 310 and a display UI 320. The input UI 310 is used by the user to input information. The information input by the user through the input UI 310 is received by the processing circuit 210. The display UI 320 is used to display estimation results of the reconstruction accuracy.
The input UI 310 and the display UI 320 are displayed as a graphical user interface (GUI). The information presented through the input UI 310 and the display UI 320 can also be said to be displayed by the display device 300. The input UI 310 and the display UI 320 may be realized by a device that allows both input and output, such as a touch screen. In such cases, the touch screen may function as the display device 300. In a case where a keyboard, a mouse, or both are used as the input UI 310, the input UI 310 is a device independent of the display device 300.
Next, regarding the above-described reconstruction error factors (A) to (C), examples of the reconstruction accuracy estimation operation will be described.
First, an example of the reconstruction accuracy estimation operation for the reconstruction error factor (A) will be described.
The processing circuit 210 causes the imaging device 100 to capture a compressed image 10 of the target 70 and acquires the compressed image 10. The compressed image 10 may be, for example, a monochrome image of the target acquired by the image sensor 160 through the filter array 110. The user may input imaging conditions of the imaging device 100 through the input UI 310 before acquiring the compressed image. The imaging conditions may be, for example, at least one of the exposure time of the imaging device 100 or the illuminance of illumination.
The processing circuit 210 determines whether or not the pixel value of each of the pixels included in the compressed image 10 exceeds a predetermined upper limit or falls below a predetermined lower limit. In other words, the processing circuit 210 determines whether or not the compressed image 10 includes pixels whose pixel values exceed the predetermined upper limit or fall below the predetermined lower limit. In a case where the dynamic range of the imaging device 100 is 8 bits, the predetermined upper limit may be, for example, 255, and the predetermined lower limit may be, for example, 10. Pixel values that reach 255 are saturated, and pixel values that fall below 10 are buried in the dark noise. In a case where a determination of Yes is obtained, namely, where a pixel value that exceeds the predetermined upper limit or a pixel value that falls below the predetermined lower limit is identified, the processing circuit 210 performs the operation in Step S103. In a case where a determination of No is obtained, namely, where neither a pixel value that exceeds the predetermined upper limit nor a pixel value that falls below the predetermined lower limit is identified, the processing circuit 210 terminates the processing operation for estimating reconstruction accuracy.
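A minimal sketch of the determination in this step is given below, assuming the compressed image is an 8-bit NumPy array; the limits 255 and 10 follow the example above, and the function name is hypothetical.

```python
import numpy as np

def find_error_factor_pixels(compressed, upper=255, lower=10):
    """Return a boolean mask of pixels estimated to cause reconstruction
    errors: saturated pixels (at or above the upper limit) and pixels
    buried in dark noise (below the lower limit)."""
    saturated = compressed >= upper
    dark = compressed < lower
    return saturated | dark
```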
In this specification, about the reconstruction error factor (A), “a reconstruction error is estimated” refers to a case where one or more pixels identified on the basis of their pixel values as in the above example are determined to be included in the compressed image 10. “A reconstruction error is not estimated” refers to a case where one or more pixels identified on the basis of their pixel values as in the above example are determined not to be included in the compressed image 10.
The processing circuit 210 notifies the user of the corresponding pixels in the compressed image 10 through the display UI 320. The number of corresponding pixels is one or more.
The processing circuit 210 determines whether or not to change the imaging conditions. In a case where the imaging conditions are to be changed, the processing circuit 210 may provide an output recommending that the user adjust at least one of the exposure time of the imaging device 100, which has acquired the compressed image 10, or the illumination in the image capturing environment for acquiring the compressed image 10, through the display UI 320, for example. In order to allow the user to make the determination, the processing circuit 210 may receive an input from the user through the input UI 310 to determine whether or not to change the imaging conditions. In a case where the corresponding pixels are not connected across a certain range in Step S103, the reconstruction errors do not propagate to the surrounding pixels, so that the imaging conditions need not be changed. In a case where a determination of Yes is obtained, the processing circuit 210 performs the operation in Step S105. In a case where a determination of No is obtained, the processing circuit 210 terminates the processing operation for estimating reconstruction accuracy.
The processing circuit 210 changes the imaging conditions. In that case, the processing circuit 210 outputs a signal to change at least one of the exposure time of the imaging device 100, which has acquired the compressed image 10, or the output of the illumination in the image capturing environment for acquiring the compressed image 10. The processing circuit 210 performs the operation in Step S101 again after the operation in Step S105.
As described above, the information processing method performed using a computer according to the present embodiment includes the following two operations: acquiring a compressed image in which information regarding wavelength bands is compressed; and identifying, on the basis of the pixel values of the pixels included in the compressed image, at least one first pixel that is estimated to cause a reconstruction error in the images corresponding to the respective wavelength bands reconstructed from the compressed image.
Identifying the at least one first pixel includes the following two operations: determining whether or not the pixel value of each of the pixels included in the compressed image exceeds a predetermined upper limit or falls below a predetermined lower limit; and identifying, as the at least one first pixel, one or more pixels whose pixel values exceed the predetermined upper limit or fall below the predetermined lower limit.
As described above, since images are assumed to be smooth in the reconstruction operation, not only the pixels corresponding to the reconstruction error factors 101 in the compressed image 10 but also the pixels surrounding those pixels may be determined to be pixels corresponding to the reconstruction error factors 101. In this specification, the surrounding pixels are also referred to as "second pixels". The range formed by these surrounding pixels, for example, the four pixels directly above, below, to the left, and to the right of each corresponding pixel, may be set to a fixed value. This fixed value may be input by the user through the input UI 310 or set automatically by the processing circuit 210. Not only the at least one first pixel above but also the second pixels surrounding the at least one first pixel may be displayed in a highlighted manner.
In a case where the pixel value of a certain pixel exceeds the predetermined upper limit and is saturated, the reconstruction error basically propagates from the pixel to about four pixels. However, this may vary depending on the target and the image capturing conditions. For example, in a case where the pixel values of some pixels exceed the predetermined upper limit and their surrounding pixels fall below the predetermined lower limit, the reconstruction errors will propagate from each of these pixels to four or more pixels. A fixed value for the above surrounding pixels is set with this effect taken into consideration.
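Under the assumption stated above that a reconstruction error propagates about four pixels from each first pixel, the second pixels can be obtained by repeated four-neighbor dilation of the mask of first pixels. The following is a sketch under that assumption; the fixed value of 4 and the function name are illustrative.

```python
import numpy as np

def expand_to_second_pixels(first_pixels, radius=4):
    """Expand a boolean mask of first pixels to the surrounding second
    pixels by `radius` rounds of 4-neighbor (up/down/left/right) dilation."""
    out = first_pixels.copy()
    for _ in range(radius):
        grown = out.copy()
        grown[:-1, :] |= out[1:, :]   # propagate one pixel upward
        grown[1:, :] |= out[:-1, :]   # propagate one pixel downward
        grown[:, :-1] |= out[:, 1:]   # propagate one pixel leftward
        grown[:, 1:] |= out[:, :-1]   # propagate one pixel rightward
        out = grown
    return out
```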
(c) Pixels Connected Across Certain Range by being Adjacent Among Pixels Corresponding to Reconstruction Error Factors 101
As described above, since images are assumed to be smooth, there may be a case where the pixels whose pixel values are saturated are complemented in the reconstruction operation if the number of such pixels is small. For example, in a case where pixel values are saturated in a pixel region where 1×1 or 2×2 pixels are connected, there may be a case where saturated pixels are complemented under the assumption that the differences in pixel value between the pixels and their surrounding pixels are smooth. In this case, it may be determined that no pixels correspond to the reconstruction error factors 101. Pixel regions that can be complemented and the threshold for the number of pixels can be set. For typical targets, pixel regions that can be complemented are pixel regions of about four pixels per side. Thus, in a case where there are four or more adjacent pixels, for example, the pixels may be highlighted.
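The rule described above, under which small saturated regions are assumed to be complemented and only connected groups of four or more pixels are highlighted, could be implemented with connected-component labeling. The sketch below assumes SciPy is available; the threshold of 4 pixels follows the example in the text.

```python
import numpy as np
from scipy import ndimage

def pixels_to_highlight(mask, min_size=4):
    """Keep only 4-connected groups of flagged pixels with at least
    min_size pixels; smaller groups are assumed to be complemented
    by the smoothness assumption during reconstruction."""
    labels, n_components = ndimage.label(mask)  # default structure: 4-connectivity
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n_components + 1):
        component = labels == i
        if component.sum() >= min_size:
            keep |= component
    return keep
```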
Next, an example of the reconstruction accuracy estimation operation for the reconstruction error factor (B) will be described.
The processing circuit 210 causes the imaging device 100 to capture a compressed image 10 of the target 70 and acquires the compressed image 10. The user may input imaging conditions of the imaging device 100 through the input UI 310 before acquiring the compressed image. The imaging conditions may include, for example, the zoom magnification of the imaging device 100.
The processing circuit 210 performs edge detection on the compressed image and classifies the target 70 into structural units. For example, the Canny method may be used for edge detection.
The processing circuit 210 finds the short side of each structural unit and determines whether or not the short side falls below a predetermined lower limit. In other words, the processing circuit 210 determines whether or not the compressed image 10 includes pixels corresponding to the structural unit of the target 70 whose short side falls below the predetermined lower limit. The predetermined lower limit may be set in advance by the user through the input UI 310 or set automatically by the processing circuit 210. In a case where high reconstruction accuracy is desired, the predetermined lower limit may be, for example, a length of greater than or equal to four pixels and less than or equal to ten pixels. In a case where it is sufficient that the target 70 be visible in the compressed image 10, the predetermined lower limit may be, for example, a length of greater than or equal to two pixels and less than or equal to four pixels. The predetermined lower limit may be a length of one pixel. In a case where a determination of Yes is obtained, namely, where pixels corresponding to the structural unit whose short side falls below the predetermined lower limit are identified, the processing circuit 210 performs the operation in Step S204. In a case where a determination of No is obtained, namely, where pixels corresponding to the structural unit whose short side falls below the predetermined lower limit are not identified, the processing circuit 210 terminates the processing operation for estimating reconstruction accuracy.
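A sketch of the edge detection and short-side determination described in the preceding two steps is given below, assuming OpenCV (version 4) for the Canny method and treating the short side of each structural unit as the short side of its minimum-area bounding rectangle. The Canny thresholds, the lower limit of four pixels, and the function name are illustrative assumptions.

```python
import cv2

def structures_below_limit(compressed_u8, lower_limit_px=4):
    """Detect structural units of the target with the Canny method and
    return the contours whose short side falls below the lower limit."""
    edges = cv2.Canny(compressed_u8, 100, 200)        # edge detection (Step S202)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    flagged = []
    for contour in contours:
        (_, _), (w, h), _ = cv2.minAreaRect(contour)  # rotated bounding box
        if 0 < min(w, h) < lower_limit_px:            # short side below limit (Step S203)
            flagged.append(contour)
    return flagged
```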
In this specification, about the reconstruction error factor (B), “a reconstruction error is estimated” refers to a case where one or more pixels identified on the basis of the size of the structure of the target 70 as in the above example are determined to be included in the compressed image 10. “A reconstruction error is not estimated” refers to a case where one or more pixels identified on the basis of the size of the structure of the target 70 as in the above example are determined not to be included in the compressed image 10.
The processing circuit 210 notifies the user of the corresponding pixels in the compressed image 10 through the display UI 320. The number of corresponding pixels is one or more.
The processing circuit 210 determines whether or not to change the imaging conditions. In a case where the target 70 is not suitable as an imaging target, a case where the imaging conditions of the imaging device 100 are not properly adjusted, or both, the processing circuit 210 changes the imaging conditions. In a case where the imaging conditions are to be changed, the processing circuit 210 may provide an output recommending that the user change at least one of the target 70 included in the compressed image 10 or the image capturing region of the imaging device 100, which acquires the compressed image 10, through the display UI 320, for example. In a case where the corresponding pixels are not connected across a certain range in Step S204, the reconstruction errors do not propagate to the surrounding pixels, so that the imaging conditions need not be changed. In order to allow the user to make the determination, the processing circuit 210 may receive an input from the user through the input UI 310 to determine whether or not to change the imaging conditions. In a case where a determination of Yes is obtained, the processing circuit 210 performs the operation in Step S206. In a case where a determination of No is obtained, the processing circuit 210 terminates the processing operation for estimating reconstruction accuracy.
The processing circuit 210 changes the imaging conditions. In that case, the processing circuit 210 may output a signal to change the zoom magnification of the imaging device 100, which acquires the compressed image 10, for example. The processing circuit 210 performs the operation in Step S201 again after the operation in Step S206.
As described above, the information processing method performed using a computer according to the present embodiment includes the following two operations: acquiring a compressed image in which information regarding wavelength bands is compressed; and identifying, on the basis of the size of the structure of the target represented by the compressed image, first pixel(s) estimated to cause a reconstruction error in the images corresponding to the respective wavelength bands reconstructed from the compressed image.
Identifying the first pixel(s) includes the following three operations: performing edge detection on the compressed image to extract the outline of the structure of the target; determining whether or not the size of the outline of the structure falls below a predetermined lower limit; and identifying, as the first pixel(s), one or more pixels corresponding to a structure whose outline size falls below the predetermined lower limit.
The size of the outline of the structure may be, for example, the length of the short side of the above structural unit.
Next, an example of the reconstruction accuracy estimation operation for the reconstruction error factor (C) will be described.
Reconstruction errors may occur in the hyperspectral image 20 due to the aging of the imaging device 100 from the time of shipment. In the imaging device 100 using compressed sensing technology, reconstructed images 20W1, 20W2, . . . , 20WN are generated from a compressed image by performing the reconstruction operation using the encoding information of the filter array 110 at the time of shipment. Thus, if it is possible to evaluate the consistency between the encoding information of the filter array 110 used to acquire the compressed image 10 and the encoding information of the filter array 110 obtained at the time of shipment, the reconstruction errors in the hyperspectral image 20 can be estimated without performing the reconstruction operation. In this case, the “consistency” means that the difference between the encoding information of the filter array 110 used to acquire the compressed image 10 and the encoding information of the filter array 110 obtained at the time of shipment is negligibly small.
Whether or not foreign matter has adhered to the filter array 110 can be determined using the encoding information of the filter array 110 obtained at the time of shipment. The reconstruction table including the encoding information of the filter array 110 obtained at the time of shipment is stored in the storage device 220.
As an example, averaging the optical transmittance of each of the filters across all wavelength bands in the reconstruction table yields a two-dimensional matrix whose elements represent the average optical transmittance of each filter in the filter array 110. Since the filters correspond one-to-one to the pixels, "each filter" may be read as "each pixel". On the basis of the results of comparing the elements of this two-dimensional matrix with the compressed image 10, it becomes possible to determine whether or not foreign matter has adhered to the filter array 110.
A comparison image 12 is generated on the basis of the compressed image 10 and the average optical transmittance of each pixel obtained from the reconstruction table. In a case where foreign matter has adhered to the filter array 110, the foreign matter appears as a shadow in the comparison image 12.
As described above, in a case where the pixels corresponding to the reconstruction error factors 101 are identified in the compressed image 10 using the encoding information of the filter array 110 obtained at the time of shipment, it can be estimated that an abnormality has occurred in the imaging device 100 including the filter array 110. In such cases, the user is notified of an abnormality and a maintenance recommendation for the imaging device 100 via the display UI 320.
The processing circuit 210 reads the reconstruction table from the storage device 220 and calculates, for all wavelength bands, the average transmittance for each pixel.
The processing circuit 210 causes the imaging device 100 to capture a compressed image 10 of the target 70 and acquires the compressed image 10.
The processing circuit 210 generates a comparison image 12 on the basis of the compressed image 10 and the average optical transmittance of each pixel obtained from the reconstruction table.
The processing circuit 210 determines whether or not there is a shadow in the comparison image 12. In other words, the processing circuit 210 determines whether or not a pixel corresponding to a shadow in the comparison image 12 is included in the compressed image 10. In a case where a determination of Yes is obtained, namely, where a pixel corresponding to a shadow is identified, the processing circuit 210 performs the operation in Step S305. In a case where a determination of No is obtained, namely, where a pixel corresponding to a shadow is not identified, the processing circuit 210 terminates the processing operation for estimating reconstruction accuracy.
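The exact arithmetic for the comparison image 12 is not spelled out above; one natural sketch divides the compressed image by the per-pixel average transmittance, so that a region shadowed by foreign matter stands out as anomalously dark. The division, the shadow threshold, and the function name are all assumptions.

```python
import numpy as np

def comparison_image(compressed, transmittance, shadow_ratio=0.5, eps=1e-6):
    """Generate the comparison image by normalizing the compressed image
    with the transmittance averaged across all N bands (Step S303), then
    flag shadow pixels far darker than the image median (Step S304)."""
    mean_t = transmittance.mean(axis=0)       # average over the N wavelength bands
    comparison = compressed / (mean_t + eps)  # per-pixel normalization
    shadow = comparison < shadow_ratio * np.median(comparison)
    return comparison, shadow
```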
In this specification, about the reconstruction error factor (C), “a reconstruction error is estimated” refers to a case where one or more pixels identified on the basis of the encoding information in addition to the pixel values as in the above example are determined to be included in the compressed image 10. “A reconstruction error is not estimated” refers to a case where one or more pixels identified on the basis of the encoding information in addition to the pixel values as in the above example are determined not to be included in the compressed image 10.
The processing circuit 210 notifies the user of the corresponding pixels in the compressed image 10 through the display UI 320. The number of corresponding pixels is one or more. During the notification, the processing circuit 210 provides at least one of an output indicating that the optical system included in the imaging device 100 that has acquired the compressed image 10 has an abnormality or an output recommending the user to perform maintenance of the imaging device 100. The display UI 320 may display a pop-up, such as “There may be foreign matter on the lens. Please clean the lens”. If the situation remains unchanged, a pop-up, such as “There may be an abnormality inside the imaging device. Please send the imaging device back to the manufacturer” may be displayed.
As described above, the information processing method performed using a computer according to the present embodiment includes the following two operations: acquiring a compressed image in which information regarding wavelength bands is compressed; and identifying the first pixels on the basis of the pixel values of the pixels included in the compressed image and encoding information that is used to reconstruct, from the compressed image, the images corresponding to the respective wavelength bands.
The encoding information includes matrix elements corresponding to the respective wavelength bands. More specifically, the encoding information represents, for each of the wavelength bands, the optical transmittance of each of the filters arranged two-dimensionally in the filter array 110.
Identifying the first pixels includes identifying the first pixels on the basis of the compressed image and elements obtained by averaging, across the wavelength bands, the matrix elements corresponding to the respective wavelength bands.
In this specification, identifying at least one first pixel on the basis of the pixel values of the pixels included in the compressed image 10 includes not only estimating reconstruction accuracy for the reconstruction error factor (A) but also estimating reconstruction accuracy for the reconstruction error factor (C).
In this specification, "at least one of α, β, or γ" and "at least one of α to γ" refer to only α, only β, only γ, α and β, β and γ, α and γ, or all of α, β, and γ.
Next, the operation according to Modification 1 will be described.
The processing circuit 210 causes the imaging device 100 to capture a compressed image 10 of the target 70 and acquires the compressed image 10. The operation in Step S401 is the same as the operation in Step S101 described above.
The processing circuit 210 determines whether or not the pixel value of each of the pixels included in the compressed image 10 exceeds the predetermined upper limit. In a case where a determination of Yes is obtained, namely, where one or more pixels having pixel values greater than the upper limit are identified, the processing circuit 210 terminates the processing operation without generating a hyperspectral image 20. In a case where a determination of No is obtained, namely, where one or more pixels having pixel values greater than the upper limit are not identified, the processing circuit 210 performs the operation in Step S403.
The processing circuit 210 generates a hyperspectral image 20 on the basis of the compressed image acquired in Step S401 and the reconstruction table.
In Modification 1, in a case where a determination of Yes is obtained in Step S402, the processing circuit 210 does not generate a hyperspectral image 20, and in a case where a determination of No is obtained in Step S402, the processing circuit 210 generates a hyperspectral image 20. This switching operation can prevent the generation of a hyperspectral image 20 with low reconstruction accuracy, in a case where the compressed image 10 includes pixels having pixel values that exceed the upper limit.
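The switching operation of Modification 1 can be summarized in a few lines; the sketch below assumes a caller-supplied `reconstruct` function standing in for the reconstruction operation based on Eq. (2), since the actual operation is implementation-specific.

```python
import numpy as np

def maybe_reconstruct(compressed, table, reconstruct, upper=255):
    """Generate a hyperspectral image only when no pixel value reaches
    the upper limit (Steps S402 and S403); otherwise return None.
    Treating values at the 8-bit maximum as saturated is an assumption."""
    if (compressed >= upper).any():         # determination of Yes in Step S402
        return None                         # do not generate a hyperspectral image
    return reconstruct(compressed, table)   # Step S403
```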
In Modification 1, the above switching operation is performed for the reconstruction error factor (A) in accordance with the determinations as to whether or not the pixel value of each pixel exceeds the upper limit, but is not limited to this example. The above switching operation may be performed for the reconstruction error factor (B) in accordance with the determinations as to whether or not the short sides of the structural units of the target 70 fall below the lower limit. Alternatively, the above switching operation may be performed for the reconstruction error factor (C) in accordance with the determination performed by the processing circuit 210 as to whether or not there is a shadow in the comparison image 12.
That is, the processing circuit 210 may perform the operations in Steps S201 to S203 described above.
The operation in Step S501 is the same as the operation in Step S101 described above.
The processing circuit 210 identifies the region of the target 70 in the compressed image 10. For example, the following three methods may be used as the method for identifying the region of the target 70.
The model is generated through the learning of various targets using machine learning algorithms. Machine learning can use, for example, deep learning, support vector machines, decision trees, genetic programming, or Bayesian network algorithms. In a case where deep learning is used, for example, algorithms such as convolutional neural networks (CNN) or recurrent neural networks (RNN) may be used.
The operation in Step S503 is the same as the operation in Step S402 described above.
The processing circuit 210 determines whether or not one or more pixels whose pixel values exceed the upper limit are present within the region of the target 70. In a case where a determination of Yes is obtained, namely, where one or more such pixels are present within the region of the target 70, the processing circuit 210 terminates the processing operation without generating a hyperspectral image 20. In a case where a determination of No is obtained, namely, where one or more such pixels are present outside the region of the target 70, the processing circuit 210 performs the operation in Step S505.
The operation in Step S505 is the same as the operation in Step S403 described above.
In Modification 2, in a case where a determination of Yes is obtained in Step S503 and where a determination of Yes is obtained in Step S504, the processing circuit 210 does not generate a hyperspectral image 20. In a case where a determination of No is obtained in Step S503 or S504, the processing circuit 210 generates a hyperspectral image 20. This switching operation can prevent the generation of a hyperspectral image 20 in which the reconstruction accuracy of the target 70 is low, in a case where the region of the target 70 within the compressed image 10 has pixels having pixel values that exceed the upper limit.
In Modification 2, the above switching operation is performed for the reconstruction error factor (A) in accordance with the determinations as to whether or not the pixel value of each pixel exceeds the upper limit, but is not limited to this example. The above switching operation may be performed for the reconstruction error factor (B) in accordance with the determinations as to whether or not the short sides of the structural units of the target 70 fall below the lower limit. Alternatively, the above switching operation may be performed for the reconstruction error factor (C) in accordance with the determination as to whether or not there is a shadow in the comparison image 12.
That is, the processing circuit 210 may perform the operations in Steps S201 and S202 described above.
Alternatively, the processing circuit 210 may perform the operations in Steps S301 to S303 described above.
The processing circuit 210 extracts a partial image from the compressed image 10 so as to exclude pixels whose pixel values exceed the upper limit. The partial image may be configured to exclude only such pixels. Alternatively, the partial image may be configured to exclude any region other than the region of the target 70.
The processing circuit 210 generates a hyperspectral image 20 on the basis of the partial image extracted from the compressed image 10 and the edited reconstruction table. The edited reconstruction table has, for each of all wavelength bands included in the target wavelength range, the optical transmittance data of each of the pixels corresponding to the partial image. A method for generating a hyperspectral image 20 from a partial image is disclosed, for example, in International Publication No. 2021/246192.
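Under the vector form of Eq. (1), each pixel of the compressed image corresponds to one row of the matrix H, so extracting a partial image amounts to keeping the matching rows. The following sketch is based on that layout; the function name and mask convention are assumptions.

```python
import numpy as np

def extract_partial(compressed, H, keep_mask):
    """Extract the partial image (pixels to keep) and the edited
    reconstruction table: one row of H per retained compressed pixel."""
    idx = np.flatnonzero(keep_mask.ravel())  # indices of retained pixels
    g_partial = compressed.ravel()[idx]      # partial compressed image as a vector
    H_partial = H[idx, :]                    # matching rows of the reconstruction matrix
    return g_partial, H_partial
```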
In Modification 3, in a case where a determination of Yes is obtained in Step S503 and where a determination of Yes is obtained in Step S504, the processing circuit 210 does not generate a hyperspectral image 20. In a case where a determination of No is obtained in Step S503, the processing circuit 210 generates a hyperspectral image 20 on the basis of the compressed image 10 and the reconstruction table. In a case where a determination of No is obtained in Step S504, the processing circuit 210 generates a hyperspectral image 20 on the basis of the partial image and the edited reconstruction table. This switching operation makes it possible to generate a hyperspectral image 20 in which the reconstruction accuracy of the target 70 is high.
The processing circuit 210 outputs a warning notification to the user via the display UI 320. The warning includes a message that an accurate reconstructed image cannot be generated from the current compressed image 10.
The processing circuit 210 may output, as described above, a signal to change the imaging conditions or display the pixels whose pixel values exceed the predetermined upper limit in a highlighted manner, instead of outputting a warning notification to the user. Alternatively, the processing circuit 210 may perform two or all of the following: outputting a warning notification to the user, outputting a signal to change the imaging conditions, and displaying the pixels whose pixel values exceed the predetermined upper limit in a highlighted manner. That is, the processing circuit 210 performs at least one of outputting a warning notification to the user, outputting a signal to change the imaging conditions, or displaying the pixels in a highlighted manner.
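One way to realize the highlighted display is sketched below, assuming an 8-bit preview image is produced for the display UI 320; the red overlay color is an arbitrary choice.

```python
import cv2
import numpy as np

def highlight_over_limit(compressed_image: np.ndarray,
                         upper_limit: int) -> np.ndarray:
    """Return a BGR preview in which pixels whose values exceed the
    predetermined upper limit are painted red for display."""
    norm = cv2.normalize(compressed_image, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    preview = cv2.cvtColor(norm, cv2.COLOR_GRAY2BGR)
    preview[compressed_image > upper_limit] = (0, 0, 255)  # red in BGR
    return preview
```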
Note that in a case where a determination of No is obtained in Step S503 or S504, the processing circuit 210 does not perform any of the following: outputting a warning notification to the user, outputting a signal to change the imaging conditions, and displaying the pixels in a highlighted manner.
The above warning operation can prevent the generation of a hyperspectral image 20 in which the reconstruction accuracy of the target 70 is low, in a case where the region of the target 70 within the compressed image 10 contains pixels whose pixel values exceed the upper limit.
In Modification 4, regarding the reconstruction error factor (A), in a case where the pixel value(s) of one or more pixels included in the compressed image 10 exceed the upper limit and where the pixels are present within the region of the target 70 in the compressed image 10, the user is notified of the warning, but this is not the only example. Regarding the reconstruction error factor (C), in a case where there is a shadow in the comparison image 12 and where the shadow is within the region of the target 70 in the compressed image 10, the user may be notified of the warning.
The processing circuit 210 determines whether or not one or more pixels whose pixel values exceed the upper limit are present within the region of the target 70 or within the surrounding region of the target 70 in the compressed image 10. In a case where a determination of Yes is obtained, the processing circuit 210 performs the operation in Step S508. In a case where a determination of No is obtained, namely, where one or more such pixels are present outside the region of the target 70 and outside the surrounding region of the target 70 in the compressed image 10, the processing circuit 210 terminates the processing operation without generating a hyperspectral image 20. In this case, the processing circuit 210 does not perform any of the following: outputting a warning notification to the user, outputting a signal to change the imaging conditions, and displaying the pixels in a highlighted manner.
If pixels whose pixel values exceed the upper limit are absent from the region of the target 70 but present within its surrounding region, the reconstruction errors caused by such pixels may propagate into the region of the target 70 when a hyperspectral image 20 is generated on the basis of the compressed image 10 and the reconstruction table. As a result, the reconstruction accuracy of the region of the target 70 in the hyperspectral image 20 may decrease. Thus, in Modification 5, the user is notified of the warning even in a case where pixels whose pixel values exceed the upper limit are present only in the surrounding region of the target 70.
The surrounding region of the target 70 in the compressed image 10 is defined as follows. The surrounding region of the target 70 is a region surrounding the target 70. The inner perimeter of the surrounding region matches the outer perimeter of the region of the target 70. The minimum distance between the outer perimeter and the inner perimeter of the surrounding region is a predetermined number of pixels. The predetermined number of pixels may be, for example, greater than or equal to 4 pixels and less than or equal to 250 pixels. The lower limit corresponds to 4 pixels because, as described above, a reconstruction error propagates over a range of about 4 pixels or more. The upper limit corresponds to 250 pixels to provide enough margin so that a reconstruction error does not propagate into the region of the target 70. In actual operation, the predetermined number of pixels may be set to, for example, 200 pixels.
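Under these definitions, the surrounding region can be computed, for example, by morphological dilation of the target mask; the ellipse-shaped kernel below is an implementation choice, not part of the disclosure.

```python
import cv2
import numpy as np

def surrounding_region(target_mask: np.ndarray,
                       width_px: int = 200) -> np.ndarray:
    """Ring of pixels outside the region of the target whose distance
    from it is at most width_px: its inner perimeter matches the
    target's outer perimeter, and its outer perimeter lies width_px
    away at the point of minimum distance."""
    size = 2 * width_px + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    dilated = cv2.dilate(target_mask.astype(np.uint8), kernel)
    return (dilated > 0) & ~target_mask.astype(bool)
```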
The above warning operation can prevent the generation of a hyperspectral image 20 in which the reconstruction accuracy of the target 70 is low, in a case where the region of the target 70 or its surrounding region within the compressed image 10 contains pixels whose pixel values exceed the upper limit.
In Modification 5, regarding the reconstruction error factor (A), in a case where the pixel value(s) of one or more pixels included in the compressed image 10 exceed the upper limit and where the one or more pixels are present within the region of the target 70 or the surrounding region of the target 70 in the compressed image 10, the user is notified of the warning; however, this is not the only example. Regarding the reconstruction error factor (C), in a case where there is a shadow in the comparison image 12 and where the shadow is within the region of the target 70 or the surrounding region of the target 70 in the compressed image 10, the user may be notified of the warning.
Note that the operations in Modifications 1 to 5 can be combined as desired. In Modification 1, in a case where a determination of Yes is obtained in Step S402 illustrated in
Similarly, in Modifications 2 and 3, in a case where a determination of Yes is obtained in Step S504 illustrated in
In Modifications 2 and 3, the processing circuit 210 performs the operation in Step S504 illustrated in
In Modification 5, in a case where a determination of No is obtained in Step S503 or S509 illustrated in
2.7. Application Example of Information Processing Method according to Present Embodiment
The information processing method according to the present embodiment can be used, for example, to inspect products carried by a conveyor belt. Before generating a hyperspectral image 20 of a product carried by a conveyor belt, the processing circuit 210 acquires a compressed image 10 of the product using the imaging device 100 and determines whether or not the compressed image 10 has the reconstruction error factors (A) to (C). In a case where the compressed image 10 has the reconstruction error factor (A), displaying the corresponding pixels in a highlighted manner via the display UI 320 makes the problem easier for the user to understand. The processing circuit 210 changes the imaging conditions so as to achieve the reconstruction accuracy, the analysis accuracy, or both, desired by the user. An example of such analysis is the inspection of the product's appearance, including its coloring. Thereafter, the processing circuit 210 regenerates a hyperspectral image 20 from the reacquired compressed image 10. The above determination may be performed before the reconstruction operation for generating a hyperspectral image 20 or in parallel with the reconstruction operation. Furthermore, the above determination may be performed at the discretion of the user or automatically by the imaging system.
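A minimal sketch of such an inspection flow is shown below; `capture`, `reconstruct`, `show_highlight`, and `retune` are hypothetical hooks supplied by the surrounding system, the images are assumed to be NumPy arrays, the 12-bit upper limit of 4095 is an arbitrary example, and only factor (A) is checked for brevity.

```python
def precheck_and_reconstruct(capture, reconstruct, show_highlight,
                             retune, upper_limit=4095, max_retries=3):
    """Check factor (A) before reconstruction; highlight the offending
    pixels, change the imaging conditions, and reacquire if needed."""
    for _ in range(max_retries):
        compressed = capture()             # acquire a compressed image
        over = compressed > upper_limit
        if not over.any():                 # no factor (A): reconstruct
            return reconstruct(compressed)
        show_highlight(over)               # highlighted display via the UI
        retune()                           # e.g., shorten the exposure time
    return None                            # saturation persists: give up
```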
The above description of the embodiments discloses the following techniques.
(Technique 1) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed.
(Technique 2) The method described in Technique 1, in which in a case of (1),
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed.
(Technique 3) The method described in Technique 1 or 2, further including:
This method makes it easier for the user to recognize pixels corresponding to a reconstruction error factor.
(Technique 4) The method described in any one of Techniques 1 to 3, further including:
This method makes it easier for the user to recognize pixels that correspond to a reconstruction error factor and are not complemented in the reconstruction operation.
(Technique 5) The method described in any one of Techniques 1 to 4, further including:
This method makes it easier for the user to recognize not only pixels corresponding to a reconstruction error factor, but also the surrounding pixels where the reconstruction error propagates.
(Technique 6) The method described in any one of Techniques 1 to 5, in which
This method allows the first pixel(s) to be identified on the basis of the predetermined upper and lower limits for pixel values.
(Technique 7) The method described in any one of Techniques 1 to 6, further including:
This method allows the user to change the imaging conditions and reduce the possibility of occurrence of the reconstruction error in the reconstructed image.
(Technique 8) The method described in any one of Techniques 1 to 7, further including:
This method allows the imaging conditions to be changed.
(Technique 9) The method described in any one of Techniques 1 to 5, in which
This method allows the first pixel(s) to be identified based on the predetermined lower limit for the size of the outline.
(Technique 10) The method described in any one of Techniques 1 to 5 and 9, further including:
This method allows the user to change the imaging conditions and reduce the possibility of occurrence of the reconstruction error in the reconstructed image.
(Technique 11) The method described in any one of Techniques 1 to 5, 9, and 10, further including:
This method allows the imaging conditions to be changed.
(Technique 12) The method described in Technique 2 and any one of Techniques 3 to 5 that directly or indirectly reference Technique 2, in which
This method allows the first pixel(s) to be identified on the basis of the above elements and the compressed image.
(Technique 13) The method described in Technique 2 and any one of Techniques 3 to 5 and 12 that directly or indirectly reference Technique 2, further including:
This method allows the user to recognize that the optical system of the imaging device has an abnormality and that maintenance should be performed on the imaging device.
(Technique 14) The method described in Technique 1 or 2, further including:
This method can prevent the generation of reconstructed images with low reconstruction accuracy.
(Technique 15) The method described in Technique 1 or 2, further including:
This method can prevent the generation of reconstructed images with low reconstruction accuracy.
(Technique 16) The method described in Technique 1 or 2, further including:
This method makes it possible to generate reconstructed images in which the reconstruction accuracies of targets are high.
(Technique 17) The method described in Technique 1 or 2, further including:
This method can prevent the generation of reconstructed images with low reconstruction accuracy.
(Technique 18) The method described in Technique 17, further including:
This method can prevent the generation of reconstructed images in which the reconstruction accuracies of targets are low.
(Technique 19) An imaging system including:
This structure allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed.
(Technique 20) The imaging system described in Technique 19, in which
This structure allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed.
(Technique 21) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed. Furthermore, the user can change the imaging condition and reduce the possibility of occurrence of the reconstruction error in a hyperspectral image.
(Technique 22) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed. Furthermore, the imaging condition can be changed.
(Technique 23) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed. Furthermore, the user can change the imaging condition and reduce the possibility of occurrence of the reconstruction error in a hyperspectral image.
(Technique 24) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed. Furthermore, the user can change the imaging condition and reduce the possibility of occurrence of the reconstruction error in a hyperspectral image.
(Technique 25) A method, which is an information processing method performed using a computer, including:
(Technique 26) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed.
(Technique 27) The method described in Technique 26, further including:
This method makes it possible to generate reconstructed images in which the reconstruction accuracies of targets are high.
(Technique 28) A method, which is an information processing method performed using a computer, including:
This method allows estimating, at a relatively low computational cost, reconstruction accuracy in a case where reconstructed images are generated from an image in which spectral information is compressed. Furthermore, the user can recognize that the optical system of the imaging device has an abnormality and that maintenance should be performed on the imaging device.
The technology according to the present disclosure is useful, for example, in cameras and measurement devices that acquire multi-wavelength or high-resolution images. The technology according to the present disclosure can be applied, for example, to sensing for biological, medical, and cosmetic applications, inspection systems for foreign matter and pesticide residues in food, remote sensing systems, and in-vehicle sensing systems.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2022-142803 | Sep 2022 | JP | national |
| 2023-119774 | Jul 2023 | JP | national |
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2023/028497 | Aug 2023 | WO |
| Child | 19054990 | | US |