The present disclosure relates to a signal processing apparatus and a signal processing method.
By using spectral information of a large number of wavelength bands, for example, ten or more bands each having a narrow bandwidth, it is possible to grasp detailed physical properties of a target that cannot be grasped from a conventional RGB image, which has information on only three bands. Examples of a camera that acquires such an image having a large number of wavelength bands include a “hyperspectral camera” and a “multispectral camera”. These cameras are used in various fields such as food inspection, biological testing, development of medicine, and analysis of components of minerals.
U.S. Pat. No. 9,599,511 (hereinafter referred to as Patent Literature 1) discloses a compressed sensing type hyperspectral camera. Compressed sensing is a technique for reconstructing a larger amount of data than the observed data by assuming that the data distribution of an observation target is sparse in a certain space (e.g., a frequency space). Estimation computation assuming sparsity of an observation target is called “sparse reconstruction”. The hyperspectral camera disclosed in Patent Literature 1 acquires a monochromatic image through an array of filters whose spectral transmittance has maximum values at wavelengths that differ from filter to filter. The imaging device generates a hyperspectral image from the monochromatic image by computation based on sparse reconstruction.
Amicia D. Elliott et al., “Real-time hyperspectral fluorescence imaging of pancreatic b-cell dynamics with the image mapping spectrometer”, Journal of Cell Science 125, 4833-4840 (2012) (hereinafter referred to as Non Patent Literature 1) discloses an example of a snapshot hyperspectral imaging device suitable for observation of a spectrum of fluorescence emitted from a fluorescent substance.
According to the imaging device of Patent Literature 1, it is possible to take a high-resolution and multiple-wavelength moving image. However, since high-load reconstruction computation using matrix data of a size equal to a product of the number of pixels of an image sensor and the number of wavelength bands is performed, a processing circuit having high computing power is needed.
One non-limiting and exemplary embodiment provides a technique for reducing a load of reconstruction computation by efficiently generating an image of a necessary wavelength band.
In one general aspect, the techniques disclosed here feature a signal processing method executed by a computer, including: acquiring compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region; acquiring reference spectrum data including information on one or more spectra associated with the subject; and generating, from the compressed image data, pieces of two-dimensional image data corresponding to designated wavelength bands decided on the basis of the reference spectrum data.
According to an aspect of the present disclosure, it is possible to reduce a load of reconstruction computation by efficiently generating an image of a necessary wavelength band.
It should be noted that general or specific embodiments of the present disclosure may be implemented as a system, an apparatus, a method, an integrated circuit, a computer program, a computer-readable storage medium, or any selective combination thereof. Examples of the computer-readable storage medium include a non-volatile storage medium such as a compact disc-read only memory (CD-ROM). The apparatus may include one or more apparatuses. In a case where the apparatus includes two or more apparatuses, the two or more apparatuses may be disposed in one piece of equipment or may be separately disposed in two or more separate pieces of equipment. In the specification and claims, the “apparatus” can mean not only a single apparatus but also a system including a plurality of apparatuses.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
Each of embodiments described below illustrates a general or specific example. Numerical values, shapes, materials, constituent elements, the way in which the constituent elements are disposed and connected, positions of the constituent elements, steps, the order of steps, and the like in the embodiments below are examples and do not limit the present disclosure. Among constituent elements in the embodiments below, constituent elements that are not described in independent claims indicating highest concepts are described as optional constituent elements. Each drawing is a schematic view and is not necessarily strict illustration. In each drawing, substantially identical or similar constituent elements are given identical reference signs. Repeated description is sometimes omitted or simplified.
In the present disclosure, all or a part of any of circuit, unit, device, part or portion, or any of functional blocks in the block diagrams may be implemented as one or more of electronic circuits including a semiconductor device, a semiconductor integrated circuit (IC), or a large scale integration (LSI). The LSI or IC can be integrated into one chip, or also can be a combination of plural chips. For example, functional blocks other than a memory may be integrated into one chip. The name used here is LSI or IC, but it may also be called system LSI, very large scale integration (VLSI), or ultra large scale integration (ULSI) depending on the degree of integration. A Field Programmable Gate Array (FPGA) that can be programmed after manufacturing an LSI or a reconfigurable logic device that allows reconfiguration of the connection or setup of circuit cells inside the LSI can be used for the same purpose.
Further, it is also possible that all or a part of the functions or operations of the circuit, unit, device, part or portion are implemented by executing software. In such a case, the software is recorded on one or more non-transitory recording media such as a ROM, an optical disk or a hard disk drive, and when the software is executed by a processor, the software causes the processor together with peripheral devices to execute the functions specified in the software. A system or apparatus may include such one or more non-transitory recording media on which the software is recorded and a processor together with necessary hardware devices such as an interface.
Before description of the embodiments of the present disclosure, an outline of image reconstruction processing based on sparsity and processing of synthesizing and editing mask data used for reconstruction is described.
Sparsity is a property whereby the elements that characterize an observation target are present only sparsely in a certain space (e.g., a frequency space). Sparsity is widely observed in nature. By utilizing sparsity, necessary information can be efficiently observed. A sensing technique utilizing sparsity is called compressed sensing. By utilizing compressed sensing, it is possible to construct highly efficient devices and systems. An example of application of compressed sensing to a hyperspectral camera is disclosed in Patent Literature 1. According to the hyperspectral camera disclosed in Patent Literature 1, a high-wavelength-resolution, high-resolution, and multiple-wavelength moving image can be taken in one shot.
An imaging device utilizing compressed sensing includes, for example, an array of optical filters having random light transmission characteristics with respect to space and/or wavelength. Such an array of optical filters is sometimes called a “coding mask” or a “coding element”. The coding mask is disposed on an optical path of light entering an image sensor, and allows light incident from a subject to pass therethrough according to light transmission characteristics that vary from one region to another. This process using the coding mask is referred to as “coding”. The light coded by the coding mask is imaged by the image sensor. An image generated by imaging using the coding mask is hereinafter referred to as a “compressed image”. Mask data indicative of the light transmission characteristics of the coding mask is recorded in advance in a storage device. A processing circuit in the imaging device performs reconstruction processing on the basis of the compressed image and the mask data. By the reconstruction processing, reconstructed images having more wavelength information than the compressed image are generated. The mask data is, for example, information indicative of a spatial distribution of spectral transmittance of the coding mask. By such reconstruction processing based on the mask data, images of respective wavelength bands can be generated from a single compressed image.
The reconstruction processing includes estimation computation assuming sparsity of an imaging target. Computation performed in sparse reconstruction can be, for example, data estimation computation by minimization of an evaluation function including a regularization term such as discrete cosine transform (DCT), wavelet transform, Fourier transform, or total variation (TV), as disclosed in Patent Literature 1. Such estimation computation is high-load computation using mask data having a size equivalent to a product of the number of pixels of the image sensor and the number of wavelength bands. Therefore, a processing circuit having high computing power is needed. In a case where such high-load computation takes a longer time than an exposure period during imaging, the computation time limits an operating speed (e.g., a frame rate) of the camera.
In some fields of use of a hyperspectral camera, there are many cases where a spectrum assumed for a subject is known, as in fluorescence observation and absorption spectrum observation (see, for example, Non Patent Literature 1). In a case where a spectrum assumed for a subject is known, it is possible to reduce the computation amount by properly editing or deleting the mask data used for reconstruction processing. The mask data can be, for example, edited or deleted on the basis of information on a wavelength region necessary in processing, such as analysis or classification, performed after image reconstruction.
Based on the above finding, the present disclosure discloses a method for, in a case where a spectrum which a subject can have is known, lessening a load of arithmetic processing by referring to information concerning the known spectrum. In the following description, a known spectrum assumed for an individual substance contained in a subject, that is, an observation target is referred to as a “reference spectrum”. Data indicative of a reference spectrum of one or more kinds of substances which an observation target can have is collectively referred to as “reference spectrum data”.
An outline of an embodiment of the present disclosure is described below.
A signal processing method according to an exemplary embodiment of the present disclosure includes acquiring compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region; acquiring reference spectrum data including information on one or more spectra associated with the subject; and generating, from the compressed image data, pieces of two-dimensional image data corresponding to designated wavelength bands decided on the basis of the reference spectrum data.
The “hyperspectral information in a target wavelength region” refers to information concerning a spatial distribution of luminance for each of wavelength bands included in the target wavelength region. The “compressing hyperspectral information” refers to compressing information concerning a spatial distribution of luminance of wavelength bands as a single monochromatic two-dimensional image by using a coding element such as a filter array that will be described later.
According to the above method, pieces of two-dimensional image data corresponding to designated wavelength bands decided on the basis of the reference spectrum data are generated from the compressed image data. It is therefore possible to lessen a load of arithmetic processing as compared with a case where two-dimensional image data corresponding to all wavelength bands included in the target wavelength region is generated.
The one or more spectra may be associated with one or more kinds of substances assumed to be contained in the subject. For example, data that defines a correspondence relationship between a substance and a spectrum may be stored in advance in a recording medium such as a memory. By referring to such data, it is possible to, for example, easily acquire information on a spectrum corresponding to a substance designated by a user.
Each of the designated wavelength bands may include a peak wavelength of the spectrum associated with a corresponding one of the one or more kinds of substances. Since each designated wavelength band corresponds to a single substance, classification based on a reconstructed image can be made easy.
The reference spectrum data may include information on spectra associated with a plurality of kinds of substances assumed to be contained in the subject. The designated wavelength bands may include a first designated wavelength band having no overlapping between the spectra and a second designated wavelength band having overlapping between the spectra. This can make it easy to perform classification based on a reconstructed image even in a case where there is overlapping between spectra of two or more substances.
In the present specification, a case where a designated wavelength band “does not have overlapping between spectra” means that one of spectra has a significant intensity and the other spectra do not have a significant intensity in the designated wavelength band. On the contrary, a case where a designated wavelength band “has overlapping between spectra” means that two or more spectra have a significant intensity in the designated wavelength band. Whether or not the spectra have a “significant intensity” can be, for example, determined on the basis of an integral value obtained in a case where an intensity of each spectrum is integrated from a lower-limit wavelength to an upper-limit wavelength of the designated wavelength band. A maximum integral value among integral values of the spectra is regarded as a signal S, and a sum of other integral values is regarded as noise N. In a case where an S/N ratio obtained by dividing the signal S by the noise N is equal to or higher than a threshold value (e.g., 1, 2, or 3), it can be determined that the designated wavelength band does not have overlapping between spectra. On the contrary, in a case where the S/N ratio is less than the threshold value, it can be determined that the designated wavelength band has overlapping between spectra.
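The S/N determination described above can be sketched in code. The following Python example is an illustrative sketch only, not part of the disclosed embodiments: it assumes that each reference spectrum is sampled on a common, uniform wavelength grid, integrates each spectrum from the lower-limit to the upper-limit wavelength of the designated band, and compares the largest integral (signal S) against the sum of the others (noise N).

```python
import numpy as np

def band_has_overlap(wavelengths, spectra, band, threshold=2.0):
    """Return True if the designated band 'has overlapping between spectra'.

    wavelengths: 1-D array of wavelength samples on a uniform grid (nm)
    spectra:     2-D array, one row per reference spectrum
    band:        (lower-limit, upper-limit) of the designated band (nm)
    threshold:   S/N threshold (e.g., 1, 2, or 3)
    """
    lo, hi = band
    in_band = (wavelengths >= lo) & (wavelengths <= hi)
    # Integrate each spectrum over the band (rectangle rule, uniform grid).
    step = wavelengths[1] - wavelengths[0]
    integrals = spectra[:, in_band].sum(axis=1) * step
    s = integrals.max()          # largest integral -> signal S
    n = integrals.sum() - s      # sum of the other integrals -> noise N
    if n == 0.0:
        return False             # a single spectrum cannot overlap itself
    return (s / n) < threshold   # S/N below threshold -> band has overlap

# Two hypothetical Gaussian reference spectra on a 1 nm grid over 400-700 nm.
wl = np.linspace(400.0, 700.0, 301)
spec_a = np.exp(-((wl - 450.0) / 15.0) ** 2)
spec_b = np.exp(-((wl - 470.0) / 15.0) ** 2)
spectra = np.vstack([spec_a, spec_b])
```

With these hypothetical spectra, a band of 440 nm to 480 nm is judged to have overlap (both spectra contribute comparable integrals there), while a band of 430 nm to 445 nm is judged to have none.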
The compressed image data may be generated by using an image sensor and a filter array including a plurality of kinds of optical filters that are different from each other in spectral transmittance. The signal processing method may further include acquiring mask data reflecting a spatial distribution of the spectral transmittance. The pieces of two-dimensional image data may be generated on the basis of the compressed image data and the mask data.
The mask data may include mask matrix information having elements according to a spatial distribution of transmittance of the filter array for each of unit bands included in the target wavelength region. The signal processing method may further include generating synthesized mask information by synthesizing the mask matrix information corresponding to non-designated wavelength bands different from the designated wavelength bands in the target wavelength region; and generating synthesized image data concerning the non-designated wavelength bands on the basis of the compressed image data and the synthesized mask information. Since a data amount is reduced by synthesizing mask matrix information of the non-designated wavelength bands, a load of arithmetic processing can be further reduced.
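The synthesis of mask matrix information for the non-designated wavelength bands can be sketched as follows. This is an illustrative sketch under the assumption that the mask data is held as a stack of per-band transmittance maps; summing the non-designated maps matches an additive imaging model in which the compressed image is the sum of the masked band images.

```python
import numpy as np

def synthesize_mask(mask, designated):
    """Reduce an (N, n, m) mask stack to (k + 1, n, m): the k designated
    bands keep their own mask matrices, and all non-designated bands are
    collapsed into a single synthesized mask matrix.

    mask:       array of shape (N, n, m), one transmittance map per band
    designated: indices of the designated wavelength bands
    """
    N = mask.shape[0]
    designated = list(designated)
    non_designated = [i for i in range(N) if i not in set(designated)]
    kept = mask[designated]                                   # (k, n, m)
    merged = mask[non_designated].sum(axis=0, keepdims=True)  # (1, n, m)
    return np.concatenate([kept, merged], axis=0)
```

For example, with N = 10 unit bands and designated bands 2 and 5, the mask stack shrinks from 10 matrices to 3 (two designated matrices plus one synthesized matrix), reducing the size of the matrix used in the reconstruction computation accordingly.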
The generating the pieces of two-dimensional image data may include generating and outputting the pieces of two-dimensional image data corresponding to the designated wavelength bands without generating, from the compressed image data, image data corresponding to a non-designated wavelength band different from the designated wavelength bands in the target wavelength region. In other words, the signal processing method need not include generating, from the compressed image data, image data corresponding to a non-designated wavelength band different from the designated wavelength band in the target wavelength region. Since reconstruction processing is not performed for a non-designated wavelength band of low importance, a computation load can be further reduced.
The designated wavelength bands may be decided on the basis of an intensity of the one or more spectra indicated by the reference spectrum data or a differential value of the intensity. For example, each designated wavelength band may be decided so as to include a peak wavelength at which an intensity of a corresponding spectrum peaks. Alternatively, each designated wavelength band may be decided so as to avoid a wavelength region in which an absolute value of a differential value of an intensity of a corresponding spectrum is close to zero. According to such a method, a designated wavelength band including a characteristic part of a spectrum can be decided, and processing such as classification after reconstruction becomes smooth.
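One such decision rule can be sketched in code. The sketch below is illustrative only: it assumes each designated band is the contiguous region around the peak wavelength where the intensity stays above a fraction of the peak intensity (an FWHM-style rule, which includes the peak and avoids the flat tails where the derivative of the intensity is close to zero); the `rel_level` parameter is a hypothetical tuning value.

```python
import numpy as np

def decide_band(wavelengths, spectrum, rel_level=0.5):
    """Decide a designated wavelength band for one reference spectrum.

    The band is the contiguous region around the peak wavelength in which
    the intensity remains >= rel_level * (peak intensity).
    """
    peak = int(np.argmax(spectrum))            # index of the peak wavelength
    level = rel_level * spectrum[peak]
    lo = peak
    while lo > 0 and spectrum[lo - 1] >= level:
        lo -= 1                                # extend toward shorter wavelengths
    hi = peak
    while hi < len(spectrum) - 1 and spectrum[hi + 1] >= level:
        hi += 1                                # extend toward longer wavelengths
    return float(wavelengths[lo]), float(wavelengths[hi])
```

For a hypothetical Gaussian spectrum centered at 550 nm, this returns a band containing the 550 nm peak and excluding the flat tails of the spectrum.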
The reference spectrum data may include information on a fluorescence spectrum of one or more substances assumed to be contained in the subject. This makes it possible to perform reconstruction processing with a proper band configuration based on a fluorescence spectrum of a fluorescent substance.
The reference spectrum data may include information on a light absorption spectrum of one or more substances assumed to be contained in the subject. This makes it possible to perform reconstruction processing with a proper band configuration based on a light absorption spectrum of a substance.
The signal processing method may further include displaying, on a display, a graphical user interface (GUI) for allowing a user to designate the one or more spectra or one or more kinds of substances associated with the one or more spectra. The display can be connected to the computer or may be mounted on the computer. The reference spectrum data may be acquired in accordance with the designated one or more spectra or the designated one or more kinds of substances. This makes it possible to perform reconstruction processing with a band configuration according to a spectrum or a substance designated by a user on the GUI.
A method according to another embodiment of the present disclosure is a method for generating mask data used for generating spectral image data for each wavelength band from compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region. The method includes acquiring first mask data for generating first spectral image data corresponding to a first wavelength band group in the target wavelength region; acquiring reference spectrum data including information concerning at least one spectrum; deciding one or more designated wavelength regions included in the target wavelength region on the basis of the reference spectrum data; and generating second mask data for generating second spectral image data corresponding to a second wavelength band group in the one or more designated wavelength regions on the basis of the first mask data.
The first wavelength band group can be a group of all or some wavelength bands included in the target wavelength region. For example, the first wavelength band group may be a group of unit wavelength bands having a minute bandwidth included in the target wavelength region. The second wavelength band group can be a group of all or some wavelength bands included in the designated wavelength region. For example, the second wavelength band group may be a group of unit wavelength bands included in the designated wavelength region. Each of the first wavelength band group and the second wavelength band group may be a group of synthesized bands obtained by synthesizing two or more unit wavelength bands. In a case where such band synthesis is performed, mask data conversion processing is performed in accordance with a form of the band synthesis.
According to the above arrangement, second mask data for generating second spectral image data corresponding to the second wavelength band group in the designated wavelength region decided on the basis of the reference spectrum data can be generated. Since the second mask data has a smaller data size than the first mask data, reconstruction can be performed efficiently by using the second mask data.
The method for generating the second mask data may be executed by an apparatus that generates spectral image data for each wavelength band from compressed image data on the basis of the second mask data or may be executed by another apparatus connected to the apparatus. The compressed image data may be generated by using an image sensor and a filter array including a plurality of kinds of optical filters that are different from each other in spectral transmittance. The first mask data and the second mask data may be data reflecting a spatial distribution of spectral transmittance of the filter array. The first mask data may include first mask information indicative of a spatial distribution of the spectral transmittance corresponding to the first wavelength band group. The second mask data may include second mask information indicative of a spatial distribution of the spectral transmittance corresponding to the second wavelength band group.
The second mask data may further include third mask information obtained by synthesizing pieces of information. Each of the pieces of information indicates a spatial distribution of the spectral transmittance in a corresponding wavelength band included in a non-designated wavelength region other than the designated wavelength region in the target wavelength region.
The second mask data need not include information concerning a spatial distribution of the spectral transmittance in a corresponding wavelength band included in a non-designated wavelength region other than the designated wavelength region.
A signal processing method according to still another embodiment of the present disclosure includes acquiring compressed image data including two-dimensional image information of a subject obtained by compressing hyperspectral information in a target wavelength region; acquiring reference spectrum data including information on one or more spectra associated with the subject; and displaying, on a display, a graphical user interface for designating a reconstruction condition for generating pieces of two-dimensional image data corresponding to designated wavelength bands from the compressed image data and an image based on the reference spectrum data.
According to the above arrangement, a user can designate a reconstruction condition for generating pieces of two-dimensional image data corresponding to designated wavelength bands. This makes it possible to efficiently generate pieces of two-dimensional image data corresponding to desired designated wavelength bands.
The present disclosure includes a computer program for causing a computer to execute each of the above methods. The present disclosure also includes a signal processing apparatus that includes a processor that executes each of the above methods and a memory in which a computer program executed by the processor is stored. The following describes embodiments of the present disclosure in more detail. The embodiments below are merely illustrative, and can be modified and changed in various ways.
First, a configuration example of an imaging system used in an exemplary embodiment of the present disclosure is described.
In
The “target wavelength region” in the present disclosure refers to a wavelength range decided by an upper-limit wavelength and a lower-limit wavelength of wavelength components included in spectral images output by the system. The target wavelength region may correspond to a wavelength region of light that can be detected by a photodetection device such as an image sensor in the system. For example, in a case of a system that performs imaging through a bandpass filter that suppresses transmission of light other than 400 nm to 700 nm, the target wavelength region can be 400 nm to 700 nm. In a case of a system that performs imaging additionally through a filter that absorbs light of 500 nm to 600 nm, the target wavelength region can be a wavelength region combining 400 nm to 500 nm and 600 nm to 700 nm that can be detected by a photodetection device.
The filter array 110 according to the present embodiment is an array of light-transmitting filters arranged in rows and columns. The filters include a plurality of kinds of filters that are different from each other in spectral transmittance, that is, in wavelength dependence of light transmittance. The filter array 110 functions as the coding mask described above, and outputs incident light after modulating the intensity of the incident light for each wavelength.
In the example illustrated in
The optical system 140 includes at least one lens. Although the optical system 140 is illustrated as a single lens in
The filter array 110 may be disposed away from the image sensor 160.
The image sensor 160 is a monochromatic photodetector that has photodetection elements (hereinafter also referred to as “pixels”) arranged two-dimensionally. The image sensor 160 can be, for example, a charge-coupled device (CCD) sensor, a complementary metal oxide semiconductor (CMOS) sensor, or an infrared array sensor. Each of the photodetection elements includes, for example, a photodiode. The image sensor 160 need not necessarily be a monochromatic sensor; a color-type sensor may be used instead. Examples of the color-type sensor include a sensor having filters that allow red light, green light, and blue light to pass therethrough; a sensor having filters that allow red light, green light, blue light, and an infrared ray to pass therethrough; and a sensor having filters that allow red light, green light, blue light, and white light to pass therethrough. Use of a color-type sensor can increase the amount of information concerning wavelength and can improve the accuracy of reconstruction of the hyperspectral image 20. The wavelength range to be acquired may be any wavelength range; it is not limited to the visible wavelength range and may be, for example, an ultraviolet, near-infrared, mid-infrared, or far-infrared wavelength range.
The processing apparatus 200 can be a computer including one or more processors and one or more storage media such as a memory. The processing apparatus 200 generates data of the reconstructed image 20W1, the reconstructed image 20W2, . . . , and the reconstructed image 20WN on the basis of the compressed image 10 acquired by the image sensor 160.
In the example illustrated in
In the example illustrated in
As described above, the light transmittance of each region varies depending on the wavelength. Therefore, the filter array 110 transmits a large amount of a component of incident light in a certain wavelength region and hardly transmits a component in another wavelength region. For example, the transmittance of light of k wavelength bands among the N wavelength bands can be larger than 0.5, and the transmittance of light of the remaining N−k wavelength bands can be less than 0.5, where k is an integer that satisfies 2≤k<N. If incident light is white light equally including all visible wavelength components, the filter array 110 modulates, for each region, the incident light into light having discrete intensity peaks with respect to wavelength, and superimposes and outputs the light of these multiple wavelengths.
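The transmittance property described above can be sketched with a randomly generated model of the filter array. This is an illustrative model only (actual filter arrays are realized by the physical structures described later): each cell is assigned exactly k of the N bands with transmittance of at least 0.5 and the remaining N−k bands with transmittance below 0.5.

```python
import numpy as np

def random_filter_array(n, m, N, k, seed=None):
    """Generate an illustrative (N, n, m) transmittance model in which each
    cell transmits exactly k of the N wavelength bands with transmittance
    in [0.5, 1.0) and the remaining N - k bands with transmittance in
    [0.0, 0.5)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, 0.5, size=(N, n, m))        # low-transmittance base
    for i in range(n):
        for j in range(m):
            # Pick k bands per cell to receive high transmittance.
            high = rng.choice(N, size=k, replace=False)
            t[high, i, j] = rng.uniform(0.5, 1.0, size=k)
    return t
```

Because the k high-transmittance bands are chosen independently per cell, the resulting light transmission characteristics are random with respect to both space and wavelength, as required of a coding mask.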
In the example illustrated in
Some of the cells, for example, half of all the cells, may be replaced with transparent regions. Such a transparent region allows light of all the wavelength bands W1 to WN included in the target wavelength region W to pass therethrough at equally high transmittance, for example, a transmittance of 80% or more. In such a configuration, the transparent regions can be, for example, disposed in a checkerboard pattern. That is, regions in which light transmittance varies depending on wavelength and transparent regions can be arranged alternately in the two alignment directions of the regions of the filter array 110.
Data indicative of such a spatial distribution of spectral transmittance of the filter array 110 is acquired in advance on the basis of design data or actual calibration and is stored in a storage medium included in the processing apparatus 200. This data is used for arithmetic processing which will be described later.
The filter array 110 can be, for example, constituted by a multi-layer film, an organic material, a diffraction grating structure, or a microstructure containing a metal. In a case where a multi-layer film is used, for example, a dielectric multi-layer film or a multi-layer film including a metal layer can be used. In this case, the filter array 110 is formed so that at least one of a thickness, a material, and a laminating order of each multi-layer film varies from one cell to another. This can realize spectral characteristics that vary from one cell to another. Use of a multi-layer film can realize sharp rising and falling in spectral transmittance. A configuration using an organic material can be realized by varying contained pigment or dye from one cell to another or laminating different kinds of materials. A configuration using a diffraction grating structure can be realized by providing a diffraction structure having a diffraction pitch or depth that varies from one cell to another. In a case where a microstructure containing a metal is used, the filter array 110 can be produced by utilizing dispersion of light based on a plasmon effect.
Next, an example of signal processing performed by the processing apparatus 200 is described. The processing apparatus 200 generates the hyperspectral image 20 on the basis of the compressed image 10 output from the image sensor 160 and the spatial distribution characteristics of transmittance for each wavelength of the filter array 110. The generated hyperspectral image 20 includes a plurality of images corresponding to respective wavelength regions. The number of wavelength regions is, for example, larger than three, which is the number of wavelength regions (e.g., a wavelength region of red light, a wavelength region of green light, and a wavelength region of blue light) acquired by a general color camera. The number of wavelength regions, referred to as “the number of bands”, can be, for example, 4 to approximately 100. The number of bands may be larger than 100 depending on intended use.
Data to be obtained is data of the hyperspectral image 20, which is expressed as f. The data f includes image data f1 corresponding to the wavelength band W1, image data f2 corresponding to the wavelength band W2, . . . , and image data fN corresponding to the wavelength band WN, where N is the number of bands. It is assumed here that a lateral direction of the image is an x direction and a longitudinal direction of the image is a y direction, as illustrated in
In the formula (1), each of f1, f2, . . . , and fN is data having n×m elements. Accordingly, the vector of the right side is a one-dimensional vector of n×m×N rows and 1 column. The compressed image 10 is likewise converted into a one-dimensional vector g of n×m rows and 1 column for the calculation. A matrix H represents conversion of performing coding and intensity modulation of the components f1, f2, . . . , and fN of the vector f by using different pieces of coding information (also referred to as "mask information") for the respective wavelength bands and adding the results thus obtained. Accordingly, H is a matrix of n×m rows and n×m×N columns. The mask information may be interpreted as the matrix H in the formula (1).
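As an illustration of the formula (1), the matrix H can be assembled from per-band diagonal blocks, one per wavelength band, so that multiplying the stacked vector f by H codes each band image with its own mask and sums the results on the sensor. The following sketch uses small illustrative sizes and random masks; all names and values are hypothetical, not taken from the disclosure:

```python
import numpy as np

n, m, N = 4, 5, 3            # illustrative image height, width, and number of bands
rng = np.random.default_rng(0)

# Per-band coding (mask) patterns of the filter array: one n*m transmittance map per band.
masks = rng.uniform(0.0, 1.0, size=(N, n * m))

# H: n*m rows, n*m*N columns; each wavelength band contributes one diagonal block.
H = np.hstack([np.diag(masks[k]) for k in range(N)])
assert H.shape == (n * m, n * m * N)

# Hyperspectral data f: band images f1..fN stacked into one column vector.
f = rng.uniform(0.0, 1.0, size=(N, n * m))
f_vec = f.reshape(-1)        # n*m*N elements

# Compressed image g: the coded band images summed pixel by pixel.
g = H @ f_vec
assert np.allclose(g, sum(masks[k] * f[k] for k in range(N)))
```

Because each block is diagonal, the matrix product reduces to a per-pixel multiplication of each band image by its mask followed by a sum over bands, which matches the coding and intensity modulation described above.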
It seems that when the vector g and the matrix H are given, the data f can be calculated by solving an inverse problem of the formula (1). However, since the number of elements n×m×N of the data f to be obtained is larger than the number of elements n×m of the acquired data g, this problem is an ill-posed problem and cannot be solved as it is. In view of this, the processing apparatus 200 finds a solution by using a method of compressed sensing while utilizing redundancy of the images included in the data f. Specifically, the data f to be obtained is estimated by solving the following formula (2).
In the formula (2), f′ represents the estimated data of f. The first term in the parentheses in the above formula represents an amount of difference between an estimation result Hf and the acquired data g, that is, a residual term. Although a sum of squares serves as the residual term in this formula, an absolute value, a square root of a sum of squares, or the like may serve as the residual term. The second term in the parentheses is a regularization term or a stabilization term. The formula (2) means that f that minimizes the sum of the first term and the second term is found. The function in the parentheses in the formula (2) is called an evaluation function. The processing apparatus 200 can calculate, as the final solution f′, the f that minimizes the evaluation function through convergence of solutions by a recursive iterative operation.
The first term in the parentheses in the formula (2) means operation of finding a sum of squares of a difference between the acquired data g and Hf obtained by converting f in the estimation process by the matrix H. Φ(f) in the second term is a constraint condition in regularization of f and is a function reflecting sparse information of the estimated data. This function brings an effect of smoothing or stabilizing the estimated data. The regularization term can be expressed, for example, by discrete cosine transform (DCT), wavelet transform, Fourier transform, total variation (TV), or the like of f. For example, in a case where total variation is used, stable estimated data in which influence of noise of the observed data g is suppressed can be acquired. Sparsity of the target 70 in the space of the regularization term varies depending on the texture of the target 70. A regularization term that makes the texture of the target 70 more sparse in the space of the regularization term may be selected. Alternatively, two or more regularization terms may be included in the calculation. τ is a weight coefficient. As the weight coefficient τ becomes larger, an amount of reduction of redundant data becomes larger, and a compression rate increases. As the weight coefficient τ becomes smaller, convergence to a solution becomes weaker. The weight coefficient τ is set to such a proper value that f converges to a certain extent and is not excessively compressed.
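The minimization in the formula (2) can be illustrated with one simple choice of regularization term, Φ(f) = |f|₁, solved by the iterative shrinkage-thresholding algorithm (ISTA), a basic recursive iterative operation of the kind described above. This is only an illustrative choice; the disclosure also mentions DCT, wavelet transform, Fourier transform, and total variation. All sizes and values below are assumptions for the sketch:

```python
import numpy as np

def ista(H, g, tau, n_iter=3000):
    """Minimize ||H f - g||^2 + tau * ||f||_1 by the iterative
    shrinkage-thresholding algorithm (a proximal-gradient scheme)."""
    L = np.linalg.norm(H, 2) ** 2          # squared spectral norm; 2*L bounds the gradient's Lipschitz constant
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ f - g)     # gradient of the residual term ||Hf - g||^2
        z = f - grad / (2.0 * L)           # gradient step
        # Soft thresholding: proximal operator of the L1 regularization term.
        f = np.sign(z) * np.maximum(np.abs(z) - tau / (2.0 * L), 0.0)
    return f

# Tiny demonstration: a sparse f recovered from fewer measurements than unknowns.
rng = np.random.default_rng(1)
H = rng.normal(size=(20, 60))              # n*m = 20 measurements, n*m*N = 60 unknowns
f_true = np.zeros(60)
f_true[[5, 33, 47]] = [1.0, -2.0, 1.5]     # sparse data in the regularization space
g = H @ f_true
f_est = ista(H, g, tau=0.05)
```

The sketch shows the two roles described in the text: the gradient step drives the residual term down, and the shrinkage step enforces sparsity, with tau trading one off against the other.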
The vector g included in the formulas (1) and (2) is sometimes expressed in boldface in descriptions related to the formulas (1) and (2).
Note that in the configurations of
By the above processing, images of the respective wavelength bands can be generated from the compressed image 10 acquired by the image sensor 160. However, to generate images of all wavelength bands included in a target wavelength region, it is necessary to perform computation using a matrix including as many elements as the product of the number of pixels of the image sensor 160 and the number of wavelength bands. Since the load of this computation is high, the processing apparatus 200 needs high computing power.
There are cases where a light emission spectrum or an absorption spectrum assumed for a substance of an observation target is known, such as in fluorescence observation and absorption spectrum observation. In such cases, a computation amount can be reduced by editing or deleting mask data on the basis of the assumed known spectrum.
An example of a method for lessening a load of arithmetic processing by using reference spectrum data indicative of a known spectrum is described below. An example of a reference spectrum illustrated below is merely a representative example, and can be modified or changed in various ways. In the following description, for example, a system for imaging a subject containing one or more kinds of specific substances (e.g., a fluorescent substance) and analyzing or classifying the substance on the basis of an acquired image is described.
The imaging device 100 includes the image sensor 160 and a control circuit 150 that controls the image sensor 160. As illustrated in
The processing apparatus 200 includes a signal processing circuit 250 and a memory 210 such as a RAM and a ROM. The signal processing circuit 250 can be an integrated circuit including a processor such as a CPU or a GPU. The signal processing circuit 250 performs reconstruction processing based on the compressed image data output from the image sensor 160. The memory 210 stores therein a computer program executed by the processor included in the signal processing circuit 250, various kinds of data referred to by the signal processing circuit 250, and various kinds of data generated by the signal processing circuit 250. The memory 210 stores therein mask data reflecting a spatial distribution of spectral transmittance of the filter array 110 in the imaging device 100. The mask data is data including information indicative of the matrix H in the above formulas (1) and (2) or information (hereinafter referred to as "mask matrix information") for deriving the matrix. The mask matrix information can be information in a matrix format having elements according to spatial distributions of transmittance of the filter array 110 concerning respective unit bands included in the target wavelength region or a format similar to a matrix. The mask data is prepared in advance and is stored in the memory 210.
The display device 300 includes an image processing circuit 320 and a display 330. The image processing circuit 320 performs necessary processing on an image generated by the signal processing circuit 250 and then causes the image to be displayed on the display 330. The display 330 can be, for example, any display such as a liquid crystal display or an organic LED (OLED) display.
The input UI 400 includes hardware and software for setting various conditions such as an imaging condition. The input UI 400 can include an input device such as a keyboard and a mouse. The input UI 400 may be a device that enables both input and output, such as a touch screen. In this case, the touch screen may also function as the display 330. The imaging condition can include conditions such as resolution, gain, and an exposure period. The input imaging condition is sent to the control circuit 150 of the imaging device 100. The control circuit 150 causes the image sensor 160 to perform imaging in accordance with the imaging condition.
The memory 410 stores therein spectral data. The spectral data includes information on a spectrum assumed for one or more kinds of substances that can be contained in the subject. The spectral data is prepared in advance for each substance and is recorded in the memory 410. The memory 410 may be an external memory or may be incorporated into the imaging device 100. The spectral data may be, for example, acquired by being downloaded over a network such as the Internet.
The user can select specific spectral data as reference spectrum data by operating the input UI 400. For example, the user selects a specific material or substance on the input UI 400, and thereby spectral data corresponding to the material or substance can be decided as reference spectrum data. When reference spectrum data is decided by a user's operation, the reference spectrum data is sent to the signal processing circuit 250.
The signal processing circuit 250 decides a mask data synthesis condition on the basis of the reference spectrum data. The mask data synthesis condition is a condition for deciding designated wavelength bands on which reconstruction processing is performed. In other words, the signal processing circuit 250 decides designated wavelength bands on which reconstruction processing is performed on the basis of the reference spectrum data. A wavelength region constituted by the designated wavelength bands is referred to as a "designated wavelength region". The signal processing circuit 250 may automatically decide the synthesis condition on the basis of the reference spectrum data or may decide the synthesis condition in accordance with a condition designated by the user by using the input UI 400. The synthesis condition defines which unit bands among the unit bands included in the target wavelength region are synthesized into a single band. Each of the unit bands is a wavelength band of a narrow bandwidth included in the target wavelength region. For example, in observation of a sample containing one or more substances, unit bands included in a wavelength region of relatively low importance can be synthesized as a single band. Alternatively, unit bands included in a wavelength region assumed to represent characteristics of an individual substance the most can be synthesized as a single band. A synthesized relatively wide band is hereinafter sometimes referred to as a "synthesized band". Furthermore, image data corresponding to the synthesized band is sometimes referred to as synthesized image data. The synthesis condition may include information on a wavelength region on which reconstruction processing is not performed. For example, a computation load can be reduced by omitting reconstruction processing for a unit band included in a wavelength region of low importance in observation.
The signal processing circuit 250 converts the mask data into mask data of a smaller size on the basis of the decided synthesis condition and the mask data stored in the memory 210. Hereinafter, the mask data before conversion is referred to as “first mask data”, and the mask data after conversion is referred to as “second mask data”. The first mask data can be used to generate first spectral image data corresponding to a first wavelength band group in the target wavelength region. The first wavelength band group can be, for example, a group of unit bands included in the target wavelength region. The first spectral image data can be data including image information of each of the unit bands included in the first wavelength band group. The second mask data can be used to generate second spectral image data corresponding to a second wavelength band group in one or more designated wavelength regions. The second wavelength band group can be a group of unit bands included in the designated wavelength region. The second spectral image data can be data including image information of each of the unit bands included in the designated wavelength region. The first mask data can include first mask information indicative of a spatial distribution of spectral transmittance corresponding to the first wavelength band group in the filter array 110. The second mask data can include second mask information indicative of a spatial distribution of spectral transmittance corresponding to the second wavelength band group in the filter array 110. The second mask data can further include third mask information obtained by synthesizing pieces of information corresponding to one or more non-designated wavelength regions other than the designated wavelength region in the first mask data. 
Each of the pieces of information in the third mask information can indicate a spatial distribution of spectral transmittance in a corresponding unit wavelength band included in the non-designated wavelength region. It can be said that the third mask information is synthesized mask information obtained by synthesizing mask matrix information corresponding to the non-designated wavelength region (i.e., the non-designated wavelength bands) in the first mask data. In the present embodiment, the second mask data of a compressed size is generated by synthesizing pieces of information of unit bands included in the non-designated wavelength region in the first mask data. The signal processing circuit 250 generates pieces of two-dimensional image data corresponding to designated wavelength bands on the basis of the compressed image data and the second mask information in the second mask data. The signal processing circuit 250 further generates one or more pieces of synthesized image data corresponding to one or more non-designated wavelength bands on the basis of the compressed image data and the third mask information (i.e., synthesized mask information) in the second mask data.
Specifically, the signal processing circuit 250 compresses a size of the first mask data by processing such as averaging matrix elements corresponding to unit bands to be synthesized. The signal processing circuit 250 performs reconstruction computation corresponding to the above formula (2) by using the second mask data after conversion and the compressed image data output from the imaging device 100. The signal processing circuit 250 thus generates a reconstructed image (i.e., a spectral image) for each of the bands after the synthesis. The signal processing circuit 250 sends data of the generated reconstructed image to the image processing circuit 320. The image processing circuit 320 draws a reconstructed image of each of the bands after the synthesis on the display 330. The image processing circuit 320 may cause each reconstructed image to be displayed on the display 330 after performing processing such as deciding a layout within a screen, associating each reconstructed image with band information, or coloring corresponding to a wavelength.
Although the band synthesis condition is decided on the basis of the reference spectrum data by the signal processing circuit 250 in the present embodiment, the present disclosure is not limited to such an aspect.
Next, a detailed example of the mask data is described with reference to
For example, assume that the unit bands are a first unit band, . . . , a k-th unit band, . . . , and an N-th unit band. Image data f1 corresponding to the first unit band is data of n×m rows and 1 column, . . . , image data fk corresponding to the k-th unit band is data of n×m rows and 1 column, . . . , and image data fN corresponding to the N-th unit band is data of n×m rows and 1 column. In this case, the formula (1) is expressed as follows:
Each of D1, . . . , Dk, . . . , and DN is a submatrix of the matrix H and may be a diagonal matrix of n×m rows and n×m columns. Examples of a case where each of these submatrices may be a diagonal matrix include a case where it is determined that crosstalk between a pixel (p, q) and a pixel (r, s) of the image sensor 160 during actual calibration for acquiring information concerning the matrix H and crosstalk between the pixel (p, q) and the pixel (r, s) of the image sensor 160 at a time when an end user images the subject 70 are identical (1 ≤ p, r ≤ n, 1 ≤ q, s ≤ m, the pixel (p, q) ≠ the pixel (r, s)). Whether or not the condition concerning crosstalk is satisfied may be determined in consideration of an imaging environment including an optical lens and the like used during imaging or may be determined in consideration of whether or not image quality of each reconstructed image can accomplish an objective of the end user.
In a case where the filter array is irradiated with k-th light that includes light of the k-th unit band and does not include light of bands other than the k-th unit band and light output from the filter array is incident on the image sensor, data (i.e., data of a mask image) output by the image sensor is

gk′ (5)

In a case where the image sensor without the filter array is irradiated with the k-th light that includes light of the k-th unit band and does not include light of bands other than the k-th unit band, the formula (4) is expressed by the following formula (6):

gk′ = Dkfk′ (6)

where fk′ is data (i.e., data of a background image) output by the image sensor. That is, when the formula (6) is written by using the elements of the matrix, the formula (6) is expressed by the following formula (7):

gk′(i, j) = hk(l, l)fk′(i, j) (7)

where l (1 ≤ l ≤ n×m) is the index corresponding to the pixel (i, j). Therefore, the diagonal components hk(1, 1), . . . , hk(n×m, n×m) of Dk can be found by dividing the data of the mask image by the data of the background image pixel by pixel. That is, all components of the matrix H become known. Here, fk′(i, j) may be a pixel value of the pixel (i, j) in the background image, and

gk′(i, j) (8)

may be a pixel value of the pixel (i, j) in the mask image, where 1 ≤ i ≤ n and 1 ≤ j ≤ m.
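The calibration above can be sketched as follows: the diagonal component of Dk at each pixel is the ratio of the mask-image value to the background-image value. The names, sizes, and transmittance values below are illustrative assumptions:

```python
import numpy as np

n, m = 3, 4                                   # illustrative sensor size
rng = np.random.default_rng(2)

# True per-pixel transmittance of the filter array for the k-th unit band.
transmittance_k = rng.uniform(0.2, 0.9, size=(n, m))

# Background image: sensor response to the k-th light without the filter array.
f_bg = rng.uniform(100.0, 200.0, size=(n, m))
# Mask image: the same k-th light observed through the filter array.
g_mask = transmittance_k * f_bg

# Diagonal components h_k(l, l) of D_k, recovered pixel by pixel.
h_diag = (g_mask / f_bg).reshape(-1)          # n*m values
D_k = np.diag(h_diag)
assert np.allclose(h_diag, transmittance_k.reshape(-1))
```

Repeating this for k = 1, . . . , N yields every diagonal block of H, which is why all components of the matrix H become known after calibration with the N unit lights.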
The information concerning the acquisition condition includes information on an exposure period and gain. The information concerning the acquisition condition need not necessarily be included in the mask data. Note that, in the example of
The mask data is, for example, data that defines the matrix H in the above formula (2). A format of the mask data can vary depending on a configuration of the system. The mask data illustrated in
In the present embodiment, the first mask data including information such as the one illustrated in
A computation amount can be reduced by deciding a wavelength region considered to be small in contribution in analysis or classification on the basis of the acquired reference spectrum data and performing reconstruction computation after synthesizing bands whose degree of contribution is small. The bands to be synthesized may be decided by user's manual input or may be automatically decided on the basis of the reference spectrum data.
In a case where the bands to be synthesized are decided manually, the input UI 400 has a function of allowing a user to select the bands to be synthesized. The input UI 400 may have a function of allowing a user to select not only the bands to be synthesized, but also a reconstruction condition such as a wavelength region on which reconstruction computation is to be performed or wavelength resolution. For example, a list of spectra corresponding to individual substances (fluorescent substances or the like) that can be contained in a subject or kinds of the individual substances can be displayed on the display 330. The user can select a combination of specific spectra from among the displayed spectra or can select a combination of specific substances from the list. Furthermore, the user can select bands to be synthesized on the basis of a spectrum of a selected substance on the UI. In this way, a UI that allows a user to select reference spectrum data and a reconstruction condition of reconstruction computation may be displayed on the display. This makes it possible to more efficiently generate an image of a wavelength band necessary for a user.
In a case where the bands to be synthesized are automatically selected, a degree of contribution of each band to analysis or classification can be automatically estimated from a kind of selected substance or a combination of selected substances. For example, the signal processing circuit 250 may synthesize bands whose degree of contribution to analysis or classification is smaller than a threshold value into a single band. A degree of contribution can be, for example, determined on the basis of a signal intensity in a spectrum, a wavelength differential value of the signal intensity, pre-training, or the like. For example, in a case where it is known that a signal intensity of a spectrum of a substance contained in a subject in a certain band is smaller than a threshold value, it can be determined that contribution of the band to a result of analysis or classification is small. In a case where an absolute value of a wavelength differential value of a signal intensity is extremely small over successive bands, that is, in a case where signal intensities in the successive bands are almost same, loss of information resulting from synthesis of the bands does not occur. Therefore, such bands can be synthesized without affecting a result of analysis or classification. The pre-training is training based on a statistical method such as principal component analysis or regression analysis. One example of the pre-training is a method of deciding, as a “wavelength region whose degree of contribution is small”, a wavelength region that is not a “wavelength region used for classification” on the basis of a reference spectrum by referring to a database in which a “wavelength region used for classification” is recorded for each substance.
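One conceivable sketch of such an automatic decision is to mark as low-contribution every band in which no reference spectrum exceeds an intensity threshold; those bands are then candidates for synthesis into a single band. The spectra and the threshold below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def low_contribution_bands(ref_spectra, threshold):
    """Return indices of bands whose intensity stays below `threshold`
    for every reference spectrum (i.e., low contribution to classification)."""
    peak_per_band = np.max(ref_spectra, axis=0)   # strongest response in each band
    return np.flatnonzero(peak_per_band < threshold)

# Two illustrative reference spectra over 10 unit bands.
spec_a = np.array([0.0, 0.1, 0.9, 1.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
spec_b = np.array([0.0, 0.0, 0.0, 0.0, 0.2, 0.8, 1.0, 0.4, 0.0, 0.0])
bands = low_contribution_bands(np.vstack([spec_a, spec_b]), threshold=0.15)
# bands 0, 1, 8, and 9 can be synthesized into a single band
```

A derivative-based criterion could be added in the same way, flagging runs of bands whose wavelength differential value stays near zero; the threshold test shown is the simplest case.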
where i=1, . . . , 20.
The following may be satisfied:
h′(1, 1) = (h6(1, 1) + . . . + h20(1, 1))/15, . . . , h′(n×m, n×m) = (h6(n×m, n×m) + . . . + h20(n×m, n×m))/15.
By performing the above band synthesizing processing, in the case where the bands #6 to #20 are synthesized, only computation for six bands is needed, whereas computation for 20 bands is needed in a case where reconstruction is performed for all unit bands. Even in a case where such synthesis is performed, images of bands can be generated while maintaining the wavelength resolution in the wavelength region of the bands #1 to #5. This can reduce a computation amount.
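The synthesis of the bands #6 to #20 amounts to averaging the corresponding diagonal blocks of the first mask data into one block, leaving the bands #1 to #5 untouched. A minimal sketch with illustrative sizes and random transmittance values:

```python
import numpy as np

n_px = 6                                       # n*m pixels (illustrative)
N = 20                                         # unit bands in the target wavelength region
rng = np.random.default_rng(3)

# First mask data: the diagonal of one transmittance block per unit band.
D = rng.uniform(0.0, 1.0, size=(N, n_px))      # diagonals of D1..D20

# Synthesize bands #6 to #20 (indices 5..19) into a single band by averaging.
kept = D[:5]                                   # bands #1..#5 stay at full resolution
merged = D[5:].mean(axis=0)                    # h'(l, l) = (h6(l, l)+...+h20(l, l))/15
second_mask = np.vstack([kept, merged])        # 6 bands instead of 20

assert second_mask.shape == (6, n_px)
assert np.allclose(second_mask[5], D[5:].sum(axis=0) / 15)
```

The second mask data has 6 rows of blocks instead of 20, so the matrix used in the reconstruction computation shrinks by the same factor, which is where the reduction in computation amount comes from.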
Note that the mask data conversion processing using synthesis may be performed in an environment in which the system or apparatus is used by an end user or may be performed in a site of production such as a factory in which the system or apparatus is produced. In a case where the mask data conversion processing is performed in a site of production, second mask data after conversion is stored in the memory 210 in a production process instead of or in addition to the first mask data before conversion. In this case, when used by an end user, the signal processing circuit 250 can perform reconstruction processing by using the mask data after conversion stored in advance in response to a user's input. This can further lessen a load of processing.
Next, a second embodiment is described. In Embodiment 1, a computation amount of reconstruction processing is reduced by synthesizing unit bands whose degree of contribution to analysis or classification is low into a single band. On the other hand, in the present embodiment, in a case where overlapping between reference spectra is considered to be small, a signal processing circuit 250 performs band synthesis so that each image after reconstruction can be used as a classification image as it is. This can further lessen a load of signal processing.
How much reference spectra overlap each other can be determined, for example, by the following method. For example, a wavelength region of wavelengths λ1 to λ2 is discussed. In a case where values of each reference spectrum are integrated from λ1 to λ2, an integral value of a reference spectrum having a largest integral value is regarded as a signal S, and a sum of integral values of the other reference spectra is regarded as noise N. How much reference spectra overlap each other can be determined on the basis of a value of a signal-noise ratio (S/N ratio) obtained by dividing the signal S by the noise N. For example, in a case where the S/N ratio is lower than 1, it is determined that overlapping is large, and it can be determined that reference spectra overlap each other. On the contrary, in a case where the S/N ratio is equal to or larger than 1, it is determined that overlapping is small, and it can be determined that reference spectra do not overlap each other. Alternatively, it may be determined that reference spectra do not overlap each other in a case where the S/N ratio is equal to or larger than 2, and it may be determined that reference spectra overlap each other in a case where the S/N ratio is less than 2.
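The S/N-based determination above can be sketched as follows, using the threshold of 2 mentioned as an alternative. The wavelength axis and the Gaussian-shaped reference spectra are illustrative assumptions:

```python
import numpy as np

def spectra_overlap(spectra, wl, lo, hi, ratio=2.0):
    """Declare overlap in [lo, hi]: S is the largest band-integrated
    reference spectrum, N is the sum of the others, and overlap is
    declared when S/N falls below `ratio`."""
    sel = (wl >= lo) & (wl <= hi)
    integrals = spectra[:, sel].sum(axis=1)   # simple rectangle-rule integration
    s = integrals.max()
    n = integrals.sum() - s
    return bool(s / n < ratio) if n > 0 else False

wl = np.linspace(400.0, 700.0, 301)           # illustrative wavelength axis [nm]
peak1 = np.exp(-((wl - 500.0) / 15.0) ** 2)   # two Gaussian-shaped reference spectra
peak2 = np.exp(-((wl - 600.0) / 15.0) ** 2)
spectra = np.vstack([peak1, peak2])

# Around 500 nm one spectrum dominates (huge S/N): no overlap.
assert spectra_overlap(spectra, wl, 480.0, 520.0) is False
# Midway between the peaks the tails are comparable (S/N near 1): overlap.
assert spectra_overlap(spectra, wl, 540.0, 560.0) is True
```

A wavelength region for which the function returns False is a candidate for a designated wavelength band that can be treated as belonging to a single substance, as described next.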
The signal processing circuit 250 in this example decides designated wavelength bands so that the designated wavelength bands do not have overlapping between reference spectra, and synthesizes unit bands included in each of the designated wavelength bands into a single band. Each of the designated wavelength bands in this example includes a peak wavelength of a spectrum associated with a corresponding one of substances.
Alternatively, the reference spectra may be displayed on an input UI 400, and a user himself or herself may decide a range of bands to be synthesized by an operation such as moving a band end. By displaying the S/N ratio on a screen, user's judgement may be assisted.
The signal processing circuit 250 in this example generates compressed second mask data by processing such as averaging elements corresponding to each designated wavelength band in first mask data. The signal processing circuit 250 generates image data corresponding to each designated wavelength band by performing computation corresponding to the above formula (2) on the basis of compressed image data and the second mask data. By using the compressed second mask data, a load of reconstruction computation can be markedly reduced.
An example of a case where overlapping between reference spectra is small is a relationship between a spectrum of excitation light and a spectrum of fluorescence in the case of fluorescence observation. According to the present embodiment, images after reconstruction can be observed as an image of excitation light and an image of fluorescence as they are.
In a case where overlapping between reference spectra is considered to be small, contents input on the input UI 400 can be used for labeling of a reconstructed image. The "labeling" as used herein refers to associating a known substance name or a sign for classification with a certain region of a reconstructed image or with a reconstructed image corresponding to a band in which a signal intensity is disproportionately high in a certain region.
There can be various forms and methods of labeling. An example of labeling processing is described below by taking an example of a relationship between a spectrum of excitation light and a spectrum of fluorescence in the case of fluorescent observation. Assume that band synthesis is performed so that a reconstructed image of a certain band X becomes an image of excitation light and a reconstructed image of another band Y becomes an image of fluorescence. In this case, the input UI 400 can specify that the band X corresponds to excitation light and the band Y corresponds to a specific fluorescent substance on the basis of a band synthesis condition decided on the basis of reference spectrum data. The input UI 400 can decide, for example, a labeling condition for allocating a name of the fluorescent substance or a sign for classification to the reconstructed image of the band Y or a specific region of the reconstructed image. A name such as “excitation light band” or a classification sign may be allocated to a band determined as a wavelength region of excitation light. As for contents of labeling, information such as a name input on the input UI 400 by a user may be used for labeling. Labeling may be automatically performed on the basis of known physical property information.
As described above, designated wavelength bands may include one or more first designated wavelength bands that do not have overlapping between spectra and one or more second designated wavelength bands that have overlapping between spectra. In the example of
Next, a third embodiment of the present disclosure is described. In the present embodiment, a load of reconstruction computation is further reduced by excluding, from a reconstruction target, a band considered to have low importance among bands included in a target wavelength region.
Generally, reconstruction computation is performed by using information of all bands included in a target wavelength region. In a case where any one of the bands included in the target wavelength region is not used in reconstruction computation, the relationship g = Hf in the above formula (1) is not satisfied. In this case, an optical signal of a wavelength belonging to the excluded band is allocated as noise to other bands and becomes a reconstruction error that decreases accuracy of subsequent analysis or classification. However, in a case where a spectrum of an observation target is known as in fluorescent observation, a band predicted to have zero signal intensity or an extremely small signal intensity can occur within the target wavelength region depending on a combination of observation targets. In a case where a signal intensity of a certain band included in the target wavelength region is zero or extremely small, a reconstruction error caused by excluding the band from a target of reconstruction computation is zero or extremely small. Even in a case where such a band is excluded, subsequent analysis or classification is not substantially affected. Therefore, in a case where a signal intensity emitted from an observation target in a certain band included in the target wavelength region is predicted to be zero or extremely small, the band can be excluded from the reconstruction computation. This can also be explained as follows. A formula satisfied in a case where reconstruction is performed for all bands included in a target wavelength region is g = Hf, as described above. Assume that a formula satisfied in a case where a certain band is excluded is g′ = H′f′. Assume that H′ = H − ΔH and f′ = f − Δf, where ΔH and Δf represent elements corresponding to the excluded band. Since Δf is 0 or extremely small, g = Hf ≈ H′f′ is approximately established. Therefore, reconstruction computation expressed by g′ = H′f′, that is, reconstruction computation excluding the certain band is established.
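The exclusion can be sketched as follows: the columns of H and the elements of f corresponding to the excluded band are deleted, and g′ = H′f′ still reproduces g when the excluded band carries no signal. Sizes are illustrative, and H is built from diagonal blocks as in the formula (1):

```python
import numpy as np

n_px, N = 5, 4                                 # illustrative pixel count and band count
rng = np.random.default_rng(4)

# Diagonal mask blocks D1..DN stacked into H (n_px rows, n_px*N columns).
masks = rng.uniform(0.0, 1.0, size=(N, n_px))
H = np.hstack([np.diag(masks[k]) for k in range(N)])

f = rng.uniform(0.0, 1.0, size=(N, n_px))
f[2] = 0.0                                     # band #3 is predicted to carry no signal
g = H @ f.reshape(-1)

# Exclude band #3: drop its columns from H and its elements from f.
keep = [k for k in range(N) if k != 2]
H_prime = np.hstack([np.diag(masks[k]) for k in keep])
f_prime = f[keep].reshape(-1)

# g' = H'f' equals g because the excluded band contributed nothing.
assert np.allclose(H_prime @ f_prime, g)
```

In this sketch the matrix shrinks from n×m×N columns to n×m×(N−1) columns, so the size of the reconstruction computation shrinks by one band's worth of unknowns with no reconstruction error introduced.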
Next, a fourth embodiment of the present disclosure is described. The present embodiment relates to a system for performing fluorescent observation.
Excitation light emitted from the light source 610 enters the dichroic mirror 622 after passing through the interference filter 621, which selectively allows light of a specific wavelength that excites the fluorescent material to pass therethrough. The dichroic mirror 622 reflects light of a wavelength region including a wavelength of the excitation light and allows light of other wavelength regions to pass therethrough. The excitation light reflected by the dichroic mirror 622 enters the sample 80 through the objective lens 623. The sample 80 that has received the excitation light emits fluorescence. The fluorescence is detected by the detector 630 after passing through the dichroic mirror 622 and the long pass filter 624. A part of the excitation light entering the sample 80 is reflected. Although a large part of the excitation light reflected by the sample 80 is reflected by the dichroic mirror 622, a part of the excitation light travels toward the detector 630 after passing through the dichroic mirror. Although little of the excitation light passes through the dichroic mirror 622, the excitation light typically has an intensity higher by several orders of magnitude than the fluorescence, and therefore when the excitation light enters the detector 630 together with the fluorescence, observation of the fluorescence may be hindered due to saturation of electric charges in the image sensor. To suppress occurrence of such a phenomenon, the long pass filter 624 is disposed before the detector 630, and thus the optical system 620 is constructed so that the excitation light does not enter the detector 630. Note that the long pass filter 624 is used because the excitation light has higher energy, that is, a shorter wavelength than the fluorescence in fluorescence observation.
The detector 630 can be, for example, a hyperspectral camera including the imaging device 100 and the processing apparatus 200 according to Embodiment 2. The detector 630 performs reconstruction processing on the basis of mask data excluding information of an unnecessary band corresponding to the excitation light, as described above.
Characteristics of the long pass filter 624 and the dichroic mirror 622 are selected on the basis of a wavelength of fluorescence to be observed and a wavelength of excitation light used. Therefore, according to the configuration of the present embodiment in which any selected band can be excluded from reconstruction computation, observation can be performed by minimum arithmetic processing according to an observation target.
An Example in which a method according to an embodiment of the present disclosure is applied to the m-FISH method, which is one method of fluorescent observation, is described below.
The fluorescence in situ hybridization (FISH) method is a method of labeling, with a fluorescent pigment, a probe having a gene sequence complementary to a specific sequence of a gene, and identifying, by the fluorescence, a location or chromosome to which the probe has hybridized. In the multicolor FISH (m-FISH) method, probes labelled with different fluorescent pigments are used concurrently.
The m-FISH method is used for tests of some kinds of cancers, such as leukemia, and of congenital genetic abnormalities. For example, a probe for m-FISH produced by Cambio uses five kinds of fluorescent pigments and is designed so that the five kinds of fluorescent pigments attach to a human or mouse chromosome in a ratio that varies depending on the number of the chromosome. Therefore, by dyeing a pair of chromosomes with this probe, the number of each chromosome can be identified.
In a case where there is no translocation causing cancer or a congenital genetic abnormality, the whole of each chromosome exhibits a single fluorescence spectrum. However, in a case where there is translocation, the fluorescence spectrum varies from one part of a chromosome to another. By utilizing this property, translocation can be detected.
STEP 1: Excitation using First Wavelength and Hyperspectral Imaging
First imaging is described with reference to
In the present Example, the configuration of the system illustrated in
As is clear from the absorption spectra of the fluorescent pigments, fluorescence of the pigments FITC and DEAC is induced when excitation light of 405 nm is emitted. Since light of wavelengths equal to or less than 450 nm is cut off by the dichroic mirror 622 and the long pass filter 624, light of this wavelength region does not enter the detector 630. Since the absorption wavelengths of the fluorescent pigments Cy3, Cy3.5, and Cy5 are longer than the excitation wavelength, fluorescence is not induced from these pigments. Therefore, for example, there is no fluorescence at wavelengths equal to or higher than 650 nm, and a pitch-black image is output.
In view of this, it is effective to perform reconstruction while, for example, setting the wavelengths from 450 nm to 500 nm as a first band, the wavelengths from 500 nm to 550 nm as a second band, and the wavelengths from 550 nm to 650 nm as a third band. This makes it possible to distinguish the fluorescence spectra of FITC and DEAC, which emit light under this condition, and to find a distribution of each of FITC and DEAC.
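The band assignment above can be expressed, for example, as a small lookup. The helper `band_index` below is a hypothetical illustration using the band edges stated in the text, not a function of the actual system:

```python
# Hypothetical helper mapping a wavelength to the reconstruction band
# used in the first imaging; the band edges are those given in the text.
BAND_EDGES_NM = [450, 500, 550, 650]

def band_index(wavelength_nm: float):
    """Return the 1-based band containing the wavelength, or None if outside."""
    for k in range(len(BAND_EDGES_NM) - 1):
        if BAND_EDGES_NM[k] <= wavelength_nm < BAND_EDGES_NM[k + 1]:
            return k + 1
    return None

print(band_index(480), band_index(520), band_index(600))  # 1 2 3
```

Wavelengths below 450 nm or at 650 nm and above fall outside every band, matching the cutoff behavior described above.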
STEP 2: Excitation using Second Wavelength and Hyperspectral Imaging
Second imaging is described with reference to
In the second imaging, the wavelength of the excitation light is set to a second wavelength of 633 nm, and the cutoff wavelength of the dichroic mirror 622 and the long pass filter 624 is set to 650 nm. That is, light of wavelengths equal to or higher than 650 nm enters the detector 630.
As is clear from the absorption spectra of the fluorescent pigments illustrated in
STEP 3: Excitation using Third Wavelength and Hyperspectral Imaging
Third imaging is described with reference to
In the third imaging, the wavelength of the excitation light is set to a third wavelength of 532 nm, and the cutoff wavelength of the dichroic mirror 622 and the long pass filter 624 is set to 550 nm. That is, light of wavelengths equal to or higher than 550 nm enters the detector 630.
As is clear from the absorption spectra of the fluorescent pigments illustrated in
In this example, the reconstruction bands are set as follows.
First band: this band includes fluorescence of Cy3 and Cy3.5 as main components, and there is a possibility that fluorescence of FITC is slightly included.
Second band: this band includes fluorescence of Cy3 and Cy3.5 as main components, and there is a possibility that fluorescence of FITC is slightly included.
Third band: this band includes fluorescence of Cy3 and Cy3.5 as main components, and there is a possibility that fluorescence of FITC and Cy5 is slightly included.
Fourth band: this band includes fluorescence of Cy3 and Cy3.5 as main components, and there is a possibility that fluorescence of Cy5 is slightly included.
Among these components, the distribution of FITC is specified in STEP 1, and the distribution of Cy5 is specified in STEP 2. The fluorescence of Cy3 and Cy3.5 is included in all of the first to fourth bands. However, the fluorescence intensity ratio of these fluorescent pigments in each band is known from the light emission spectra of the pigments. Therefore, the distributions of Cy3 and Cy3.5 can be found by solving a system of equations concerning the intensities or by finding, by simulation, a pigment distribution that reproduces the imaging result.
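The unmixing by a system of equations can be sketched as follows. The per-band emission fractions of Cy3 and Cy3.5 and the measured intensities are made-up numbers for illustration, not actual spectra of these pigments:

```python
import numpy as np

# Hypothetical per-band emission fractions of Cy3 and Cy3.5 (columns);
# rows are the first to fourth bands. Real values would come from the
# known emission spectra of the pigments.
S = np.array([
    [0.50, 0.10],
    [0.30, 0.30],
    [0.15, 0.40],
    [0.05, 0.20],
])

# Band intensities at one pixel after removing the FITC and Cy5
# contributions already specified in STEP 1 and STEP 2 (made-up numbers).
b = np.array([0.31, 0.33, 0.315, 0.145])

# Solve the overdetermined system S @ [cy3, cy3_5] = b in the
# least-squares sense to obtain the amount of each pigment.
amounts, *_ = np.linalg.lstsq(S, b, rcond=None)
print(amounts)  # ≈ [0.5, 0.6]
```

Because there are four band equations for two unknowns, the system is overdetermined, which also gives some robustness against noise in the measured intensities.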
As described above, even in a case where labeling is performed by using fluorescent pigments, only one or some of the fluorescent pigments can be caused to emit light in each measurement by restricting the excitation light spectrum. By selecting reconstruction bands accordingly, the distribution of each fluorescent pigment can be specified with a smaller amount of computation.
A modification of an embodiment of the present disclosure may be as follows.
A signal processing method executed by a computer, including:
(b-1) receiving second pixel values,
p, q, r, and s are natural numbers, where 1≤q≤p, 1≤r≤p, 1≤r+s≤p, and q<r or (r+s)<q.
Although “n×m” is used in the formulas (1) and (2), “n×m” in the formulas (1) and (2) may be rewritten as “m×n”.
Each of the first instruction, the second instruction, and the third instruction may be given by a user by using the input UI 400.
The first instruction is an instruction to generate an image of the first subject corresponding to the first wavelength region to an image of the first subject corresponding to the p-th wavelength region.
The second instruction includes an instruction to generate an image of the second subject corresponding to the q-th wavelength region, and an instruction not to generate an image of the second subject corresponding to the r-th wavelength region to an instruction not to generate an image of the second subject corresponding to the (r+s)th wavelength region.
The third instruction includes an instruction to generate an image of the third subject corresponding to the q-th wavelength region, an instruction to generate a single image of the third subject corresponding to the r-th wavelength region to the (r+s)th wavelength region, and an instruction not to generate an individual image of the third subject for each of the r-th wavelength region to the (r+s)th wavelength region.
The first processing includes the processing (a-1) and (a-2).
Processing (a-1)
The signal processing apparatus 250 receives the first pixel values from the image sensor 160. The second light from the first subject enters the filter array 110. In response to this entry, the filter array 110 outputs the first light. The image sensor 160 outputs the first pixel values corresponding to the first light from the filter array 110.
The first pixel values can be described in a matrix form of m rows and n columns as follows.
(i, j) may be considered as corresponding to a position of a pixel in an image where 1≤i≤m and 1≤j≤n.
The first pixel values can be described in a matrix form of m×n rows and 1 column as follows.
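The correspondence between the two descriptions can be illustrated with a reshape. The row-major element ordering used here is an assumption for illustration, since the text does not fix the ordering of elements in the column form:

```python
import numpy as np

m, n = 3, 4
g = np.arange(m * n).reshape(m, n)   # pixel values as m rows and n columns
g_col = g.reshape(m * n, 1)          # the same values as m*n rows and 1 column

# Element (i, j) of the image (1-indexed, as in the text) corresponds to
# row (i - 1) * n + (j - 1) of the column form under row-major ordering.
i, j = 2, 3
print(g[i - 1, j - 1] == g_col[(i - 1) * n + (j - 1), 0])  # True
```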
Processing (a-2)
The signal processing circuit 250 generates the pixel values I(11) of an image of the first subject corresponding to the first wavelength region to the pixel values I(1p) of an image of the first subject corresponding to the p-th wavelength region on the basis of the first matrix recorded in the memory 210 and the first pixel values. This generation method has already been described by using the formulas (1) and (2).
Although “n×m” is used in the formulas (1) and (2), “n×m” in the formulas (1) and (2) is rewritten here as “m×n”.
The first matrix is the matrix H of m×n rows and m×n×N columns indicated by the formula (1). Here, the matrix H is expressed as H1, and can be described as follows by using a submatrix A1 to a submatrix Ap:
H1=(A1 A2 . . . Ap) (12)
Each of the submatrix A1 to the submatrix Ap may be a diagonal matrix.
The pixel values I(11) of the image of the first subject corresponding to the first wavelength region to the pixel values I(1p) of the image of the first subject corresponding to the p-th wavelength region can be described in a matrix form of m rows and n columns as follows.
(i, j) can be considered as corresponding to a position of a pixel in an image where 1≤i≤m and 1≤j≤n.
The pixel values I(11) of the image of the first subject corresponding to the first wavelength region to the pixel values I(1p) of the image of the first subject corresponding to the p-th wavelength region can be described in a matrix form of m×n rows and 1 column as follows.
This is expressed in the format of the formula (1) as follows:
The first submatrices A1, A2, . . . , and Ap include the q-th submatrix Aq.
The first submatrices A1, A2, . . . , and Ap include the second submatrices, which are the r-th submatrix Ar to the (r+s)th submatrix A(r+s).
The second processing includes the processing (b-1) and (b-2).
Processing (b-1)
The signal processing apparatus 250 receives the second pixel values from the image sensor 160. The fourth light from the second subject enters the filter array 110. In response to this entry, the filter array 110 outputs the third light. The image sensor 160 outputs the second pixel values corresponding to the third light from the filter array 110.
The second pixel values can be described in a matrix form of m rows and n columns as follows:
(i, j) can be considered as corresponding to a position of a pixel in an image where 1≤i≤m and 1≤j≤n.
The second pixel values can be described in a matrix form of m×n rows and 1 column as follows:
Processing (b-2)
The signal processing circuit 250 generates the pixel values I(2q) of the image of the second subject corresponding to the q-th wavelength region on the basis of the q-th submatrix Aq recorded in the memory 210 and the second pixel values. The signal processing circuit 250 does not generate the pixel values I(2r) of the image of the second subject corresponding to the r-th wavelength region to the pixel values I(2(r+s)) of the image of the second subject corresponding to the (r+s)th wavelength region on the basis of the second submatrices and the second pixel values.
For example, the signal processing circuit 250 may perform the following processing.
The signal processing circuit 250 deletes the r-th submatrix Ar to the (r+s)th submatrix A(r+s) from the first matrix indicated by the formula (12) recorded in the memory 210 to generate a matrix H2 of n×m rows and n×m×(p−(s+1)) columns.
H2=(A1 . . . Aq . . . A(r−1) A(r+s+1) . . . Ap) (21)
Based on the following formula (22), in accordance with the method disclosed in the description of the formulas (1) and (2),
Although “n×m” is used in the formulas (1) and (2), “n×m” in the formulas (1) and (2) is rewritten here as “m×n”.
The third processing includes the processing (c-1), (c-2), and (c-3).
Processing (c-1)
The signal processing apparatus 250 receives the third pixel values from the image sensor 160. The sixth light from the third subject enters the filter array 110. In response to this entry, the filter array 110 outputs the fifth light. The image sensor 160 outputs the third pixel values corresponding to the fifth light from the filter array 110.
The third pixel values can be described in a matrix form of m rows and n columns as follows:
(i, j) can be considered as corresponding to a position of a pixel in an image where 1≤i≤m and 1≤j≤n. The third pixel values can be described in a matrix form of m×n rows and 1 column as follows:
Processing (c-2)
The signal processing circuit 250 generates the pixel values I(3q) of the image of the third subject corresponding to the q-th wavelength region on the basis of the q-th submatrix Aq recorded in the memory 210 and the third pixel values.
Processing (c-3)
The signal processing circuit 250 generates a second matrix on the basis of second submatrices recorded in the memory 210. The signal processing circuit 250 generates the pixel values I3c of the image of the third subject corresponding to the r-th wavelength region to the (r+s)th wavelength region on the basis of the generated second matrix and the third pixel values. The signal processing circuit 250 does not generate the pixel values I(3r) of the image of the third subject corresponding to the r-th wavelength region to the pixel values I(3(r+s)) of the image of the third subject corresponding to the (r+s)th wavelength region on the basis of the second submatrices and the third pixel values.
For example, the signal processing circuit 250 may perform the following processing.
The signal processing circuit 250 generates a second matrix H3 on the basis of the r-th submatrix Ar to the (r+s)th submatrix A(r+s) included in the first matrix (see the formula (12)) recorded in the memory 210.
Each element of the second matrix H3 is the average of the corresponding elements of the r-th submatrix Ar to the (r+s)th submatrix A(r+s). Writing W(k, l) for the (k, l) element of H3 and hi(k, l) for the (k, l) element of the submatrix Ai, W(1, 1)=(hr(1, 1)+ . . . +h(r+s)(1, 1))/(s+1), . . . , W(m×n, m×n)=(hr(m×n, m×n)+ . . . +h(r+s)(m×n, m×n))/(s+1).
The signal processing circuit 250 generates a third matrix H4 on the basis of the first matrix H1 and the second matrix H3.
H4=(A1 . . . Aq . . . A(r−1) H3 A(r+s+1) . . . Ap) (26)
The third matrix H4 is a matrix of n×m rows and n×m×(p−s) columns.
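Under the assumption that each submatrix is diagonal, the construction of H3 and H4 can be sketched as follows (sizes are illustrative, and 0-based indices are used for r and s):

```python
import numpy as np

rng = np.random.default_rng(1)
mn, p = 9, 5                       # m*n pixels and p bands (illustrative)
A = [np.diag(rng.random(mn)) for _ in range(p)]   # submatrices A1 .. Ap

r, s = 2, 1                        # combine bands r .. r+s into one block

# Second matrix H3: elementwise average of A_r to A_(r+s).
H3 = sum(A[r:r + s + 1]) / (s + 1)

# Third matrix H4: replace A_r .. A_(r+s) in H1 with the single block H3.
H4 = np.hstack(A[:r] + [H3] + A[r + s + 1:])
print(H4.shape)                    # mn rows, mn * (p - s) columns
```

Because s+1 blocks are collapsed into one, H4 has p−s blocks, which matches the stated column count of n×m×(p−s).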
Based on the formula (27), in accordance with the method disclosed in the description of the formulas (1) and (2),
Although “n×m” is used in the formulas (1) and (2), “n×m” in the formulas (1) and (2) may be rewritten as “m×n”.
The pixel values I3c of the image of the third subject corresponding to the r-th wavelength region to the (r+s)th wavelength region can be described in a matrix form of m rows and n columns as follows:
(i, j) can be considered as corresponding to a position of a pixel in an image where 1≤i≤m and 1≤j≤n.
The pixel values I3c of the image of the third subject corresponding to the r-th wavelength region to the (r+s)th wavelength region can be described in a matrix form of m×n rows and 1 column as follows:
In the present disclosure, a compressed image may be generated by an imaging method different from imaging using a filter array including optical filters.
For example, as the configuration of the imaging device 100, the image sensor 160 may be processed so that light reception characteristics of the image sensor vary from one pixel to another, and a compressed image may be generated by imaging using the image sensor 160 thus processed. That is, a compressed image may be generated by giving a function of coding incident light to an image sensor instead of coding light incident on the image sensor by the filter array 110. In this case, mask data corresponds to the light reception characteristics of the image sensor.
It is also possible to employ a configuration in which an optical element such as a metalens is introduced as at least a part of the optical system 140 so that the optical characteristics of the optical system 140 are varied spatially and in wavelength and incident light is thereby coded; a compressed image may be generated by an imaging device including this configuration. In this case, the mask data is information corresponding to the optical characteristics of the optical element such as the metalens. By thus using an imaging device 100 having a configuration different from the configuration using the filter array 110, the intensity of incident light can be modulated for each wavelength, and a compressed image and a reconstructed image can be generated.
The present disclosure is not limited to Embodiments 1 to 4, the Example, and the modification. Various modifications of the above embodiments, the Example, and the modification that a person skilled in the art may conceive, and combinations of constituent elements in different embodiments, the Example, and/or the modification, are also encompassed within the present disclosure without departing from the spirit of the present disclosure.
Note that the technique of the present disclosure is applicable not only to fluorescent observation, but also to other uses in which a spectrum of an observation target is known. For example, the technique of the present disclosure is applicable to various uses such as observation of an absorption spectrum, observation of black-body radiation (e.g., temperature estimation), and estimation of a light source (e.g., an LED, a halogen lamp).
The technique of the present disclosure is useful, for example, for a camera and a measurement device that acquire a multiple-wavelength image. The technique of the present disclosure is, for example, applicable to fluorescent observation, observation of an absorption spectrum, sensing for a biological, medical, or cosmetic purpose, a test system for foreign substances or residual pesticides in food, a remote sensing system, and an on-vehicle sensing system.
Number | Date | Country | Kind
---|---|---|---
2021-112254 | Jul 2021 | JP | national
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2022/025013 | Jun 2022 | US
Child | 18540962 | | US