This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2009-145790, filed on Jun. 18, 2009, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a microscope system, a specimen observation method, and a computer program product for acquiring a spectral image of an imaged specimen and observing the specimen by displaying the acquired spectral image.
2. Description of the Related Art
For example, in a pathological diagnosis, it is common practice to thinly slice a tissue sample obtained by organ harvesting or needle biopsy to a thickness of several microns to create a specimen, and to magnify and observe the specimen with an optical microscope to obtain various findings. Because the specimen itself hardly absorbs or scatters light and is nearly clear and colorless, it is generally stained with a dye before observation.
While various staining methods have been proposed, so-called morphological observation staining is normally performed. Morphological observation staining stains the cell nuclei, cytoplasm, connective tissue, and the like so that the morphology of the specimen can be observed. It makes it possible to grasp the size of the elements constituting a tissue, the positional relationship between them, and the like, so that the state of the specimen can be determined morphologically. For example, hematoxylin-eosin staining (hereinafter referred to as "HE staining"), which uses the two dyes hematoxylin and eosin, is widely known as a morphological observation staining used in tissue diagnosis. In cytological diagnosis, on the other hand, Papanicolaou staining (Pap staining) is typical. In this specification, staining that is normally performed to observe a specimen, such as morphological observation staining, is referred to as "standard staining".
The stained specimen may be observed visually, or by displaying an image of the specimen on the screen of a display device. Conventionally, for example, an image of an HE-stained specimen is captured by multiband imaging using a technique disclosed in Japanese Laid-open Patent Publication No. 07-120324, the amount of dye staining the specimen is calculated (estimated) by estimating the spectrum at each specimen position using a technique disclosed in Japanese Laid-open Patent Publication No. 2008-51654, and an RGB image to be displayed is synthesized.
Special staining, which is performed in addition to standard staining such as morphological observation staining, is also known. Special staining is actively used to distinguishably stain specific structures included in a specimen, such as elastic fibers, collagen fibers, and smooth muscle, in order to complement a diagnosis based on a specimen on which standard staining has been performed and to prevent abnormal findings from being overlooked. For example, Elastica van Gieson staining, which selectively stains elastic fibers and the like, is performed to determine vessel invasion by cancer cells, and Masson trichrome staining, which selectively stains collagen fibers, is performed to determine the degree of fibrosis in the liver and the like. However, the special staining process takes two to three days, so the diagnosis cannot be performed quickly. In addition, special staining increases the workload of the technicians, which raises the cost of producing a specimen. For these reasons, special staining has been used for diagnosis only in limited cases, and in a diagnosis that uses a specimen on which only standard staining has been performed, diagnostic accuracy may decrease.
To solve such problems, approaches that identify a desired structure by image processing, without actual staining, have been proposed. For example, Japanese Laid-open Patent Publication No. 2008-215820 discloses a method of capturing a multispectral image of a target object (specimen) to obtain spectral information of the specimen and classifying the tissue elements (structures) in the specimen on the basis of the obtained spectral information.
A microscope system according to an aspect of the present invention includes an image acquisition unit that acquires a spectral image of a specimen by using a microscope; a structure specifying unit that specifies an extraction target structure in the specimen; a display method specifying unit that specifies a display method of the extraction target structure; a structure extraction unit that extracts an area of the extraction target structure in the spectral image by using a reference spectrum of the extraction target structure on the basis of the pixel values of each pixel included in the spectral image; a display image generator that generates a display image that represents the extraction target structure in the specimen by the display method specified by the display method specifying unit on the basis of an extraction result of the structure extraction unit; and a display processing unit that performs processing for displaying the display image on a display unit.
A specimen observation method according to another aspect of the present invention includes acquiring a spectral image of a specimen by using a microscope; specifying a predetermined extraction target structure in the specimen; specifying a display method of the extraction target structure; extracting an area of the extraction target structure in the spectral image by using a reference spectrum of the extraction target structure on the basis of pixel values of each pixel included in the spectral image; generating a display image that represents the extraction target structure in the specimen by the specified display method on the basis of the extraction result; and displaying the display image.
A computer program product according to still another aspect of the present invention has a computer readable medium including programmed instructions. The instructions, when executed by a computer, cause the computer to perform instructing a microscope to operate and acquiring a spectral image of a specimen; specifying a predetermined extraction target structure in the specimen; specifying a display method of the extraction target structure; extracting an area of the extraction target structure in the spectral image by using a reference spectrum of the extraction target structure on the basis of pixel values of each pixel included in the spectral image; generating a display image that represents the extraction target structure in the specimen by the specified display method on the basis of the extraction result; and displaying the display image.
The above and other features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the drawings. The present invention is not limited to the embodiments. In the drawings, the same reference numerals are given to the same components.
When a specimen is observed using a microscope, the area (visual field) that can be observed at one time is determined by the magnification of the objective lens. The higher the magnification of the objective lens, the higher the resolution of the obtained image, but the smaller the visual field. To solve this problem, an operation is conventionally performed in which an image with high resolution and a large visual field is generated by capturing partial images of a specimen, portion by portion, with a high resolution objective lens while moving the visual field, for example by moving an electrically driven stage on which the specimen is mounted, and then combining the captured partial images (see, for example, Japanese Laid-open Patent Publication Nos. 09-281405 and 2006-343573). A system that performs this operation is called a virtual microscope system. Hereinafter, the image with high resolution and a large visual field generated by the virtual microscope system is referred to as a "VS image".
According to the virtual microscope system, an observation can be performed even when the actual specimen is not at hand. If the generated VS image is made available for viewing via a network, the specimen can be observed regardless of time and place. The virtual microscope system is therefore used in pathological diagnosis education and in consultations between pathologists at distant locations. Hereinafter, a case in which the present invention is applied to the virtual microscope system will be described as an example.
The microscope device 2 includes an electrically driven stage 21 on which a specimen S that is an observation/diagnosis target (hereinafter, referred to as “target specimen S”) is mounted and a microscope main body 24 which has an approximately squared U shape in a side view, supports the electrically driven stage 21, and holds the objective lens 27 via a revolver 26. Also, the microscope device 2 includes a light source 28 mounted in a bottom back portion (right portion in
Here, the target specimen S mounted on the electrically driven stage 21 is a specimen on which standard staining has been performed; in the description below, a tissue specimen on which HE staining, one of the staining methods for morphological observation, has been performed is used as an example. Specifically, the target specimen S is a specimen in which the cell nuclei are stained bluish purple by hematoxylin (hereinafter referred to as "H dye") and the cytoplasm and connective tissue are stained pale red by eosin (hereinafter referred to as "E dye"). The standard staining to be applied is not limited to HE staining. For example, the present invention can also be applied to a specimen on which another staining method for morphological observation, such as Pap staining, is performed as the standard staining.
The electrically driven stage 21 is configured to be movable in the X, Y, and Z directions. Specifically, the electrically driven stage 21 can be moved in the XY plane by a motor 221 and an XY drive controller 223 that controls the drive of the motor 221. Under the control of a microscope controller 33, the XY drive controller 223 detects a predetermined origin position in the XY plane of the electrically driven stage 21 by an XY position origin sensor not shown in
The revolver 26 is rotatably held on the microscope main body 24 and places the objective lens 27 over the target specimen S. The objective lens 27 is exchangeably attached to the revolver 26 together with other objective lenses having different magnifications (observation magnifications), and only the one objective lens 27 inserted into the optical path of the observation light used to observe the target specimen S is exclusively selected by rotating the revolver 26. It is assumed in the first embodiment that the revolver 26 holds, as the objective lenses 27, at least one objective lens with a relatively low magnification such as 2× or 4× (hereinafter referred to as the "low magnification objective lens") and at least one objective lens with a higher magnification such as 10×, 20×, or 40× (hereinafter referred to as the "high magnification objective lens"). However, the low and high magnifications mentioned above are merely examples; it suffices that the magnification of one objective lens is higher than that of the other.
The microscope main body 24 internally includes, at a bottom portion thereof, an illumination optical system for illuminating the target specimen S with transmitted light. The illumination optical system includes a collector lens 251 that collects illumination light emitted from the light source 28, an illumination system filter unit 252, a field stop 253, an aperture stop 254, a folding mirror 255 that deflects the optical path of the illumination light toward the optical axis of the objective lens 27, a condenser optical element unit 256, and a top lens unit 257, which are arranged at appropriate positions along the optical path of the illumination light. The illumination light emitted from the light source 28 illuminates the target specimen S via the illumination optical system and enters the objective lens 27 as observation light.
The microscope main body 24 internally includes a filter unit 30 at an upper portion thereof. The filter unit 30 rotatably holds an optical filter 303 that limits the wavelength range of the light forming the specimen image to a predetermined range, and inserts the optical filter 303 into the optical path of the observation light downstream of the objective lens 27 as appropriate. The observation light passing through the objective lens 27 enters the lens barrel 29 via the filter unit 30.
The lens barrel 29 internally includes a beam splitter 291 that switches the optical path of the observation light passing through the filter unit 30 and guides it to the binocular unit 31 or the TV camera 32. The specimen image of the target specimen S is guided to the binocular unit 31 by the beam splitter 291 and visually observed by the microscope operator via the eyepieces 311, or is captured by the TV camera 32. The TV camera 32 includes an image sensor, such as a CCD or CMOS sensor, that forms the image of the specimen (specifically, of the visual field of the objective lens 27), captures the image of the specimen, and outputs the image data to the host system 4.
Here, the filter unit 30 will be described in detail. The filter unit 30 is used when performing multiband imaging of the specimen image by the TV camera 32.
As described above, when performing multiband imaging of the specimen image by using the filter unit 30, the illumination light that is emitted from the light source 28 and irradiated to the target specimen S by the illumination optical system enters the objective lens 27 as the observation light. Thereafter, the light forms an image on the image sensor of the TV camera 32 via the optical filter 303a or the optical filter 303b.
When performing normal imaging (when capturing an RGB image of the specimen image), the optical filter switching unit 301 of
As shown in
Meanwhile, the host system 4 includes an input unit 41, a display unit 43, a processing unit 45, a recording unit 47, and the like.
The input unit 41 is realized by, for example, a keyboard, a mouse, a touch panel, various switches, and the like, and outputs an operation signal responding to an operational input to the processing unit 45. The display unit 43 is realized by a display device such as an LCD or an EL display, and displays various screens on the basis of a display signal inputted from the processing unit 45.
The processing unit 45 is realized by hardware such as a CPU. The processing unit 45 integrally controls the operation of the entire microscope system 1 by transmitting instructions and data to each component of the host system 4, and by transmitting instructions to the microscope controller 33 and the TV camera controller 34 to operate each component of the microscope device 2, on the basis of input signals from the input unit 41, the state of each component of the microscope device 2 reported by the microscope controller 33, the image data input from the TV camera 32, the programs and data recorded in the recording unit 47, and the like. For example, the processing unit 45 performs AF (autofocus) processing, detecting the in-focus position (focal position) by evaluating the contrast of the image at each Z position on the basis of the image data input from the TV camera 32 while moving the electrically driven stage 21 in the Z direction. The processing unit 45 also performs compression or decompression based on a method such as JPEG or JPEG2000 when recording the image data input from the TV camera 32 to the recording unit 47 or displaying it on the display unit 43. The processing unit 45 includes a VS image generator 451 and a VS image display processing unit 454 serving as a display processing unit.
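As an illustration of the AF processing just described, a minimal contrast-based autofocus sketch in Python follows. The stage and camera callbacks (move_stage_z, capture_image) are hypothetical stand-ins for the microscope controller and TV camera interfaces, and the Brenner gradient is only one common contrast measure; the actual evaluation used by the processing unit 45 is not specified here.

    import numpy as np

    def brenner_contrast(img):
        # Brenner gradient: sum of squared differences between pixels two
        # rows apart; larger values indicate a sharper (better focused) image.
        f = img.astype(np.float64)
        d = f[2:, :] - f[:-2, :]
        return float(np.sum(d * d))

    def autofocus(move_stage_z, capture_image, z_positions):
        # Scan the candidate Z positions, evaluate the contrast of the image
        # captured at each, and return the Z with the highest contrast.
        best_z, best_score = None, -1.0
        for z in z_positions:
            move_stage_z(z)        # hypothetical stage-control callback
            img = capture_image()  # hypothetical TV-camera capture callback
            score = brenner_contrast(img)
            if score > best_score:
                best_z, best_score = z, score
        return best_z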
The VS image generator 451 obtains a low resolution image and a high resolution image of the specimen image and generates a VS image. Here, the VS image is an image in which one or more images captured by the microscope device 2 are combined and generated. Hereinafter, an image, which is generated by combining a plurality of high resolution images which are partial images of the target specimen S captured by using the high magnification objective lens, and is a wide view and high resolution multiband image covering the entire area of the target specimen S, is referred to as the VS image.
The VS image generator 451 includes a low resolution image acquisition processing unit 452 and a high resolution image acquisition processing unit 453 as an image acquisition unit and a spectrum image generator. The low resolution image acquisition processing unit 452 issues operation instructions to each component of the microscope device 2 and acquires a low resolution image of the specimen image. The high resolution image acquisition processing unit 453 issues operation instructions to each component of the microscope device 2 and acquires a high resolution image of the specimen image. Here, the low resolution image is acquired as an RGB image by using the low magnification objective lens to observe the target specimen S. On the other hand, the high resolution image is acquired as a multiband image by using the high magnification objective lens to observe the target specimen S.
The VS image display processing unit 454 extracts an area of a predetermined structure from the VS image and performs processing for displaying, on the display unit 43, a display image that represents the structure in the target specimen S in accordance with a predetermined display method on the basis of the extraction result. The VS image display processing unit 454 includes a structure extraction unit 455 and a display image generator 456. The structure extraction unit 455 performs image processing on the VS image and extracts from the VS image an area covering a structure to be extracted (hereinafter referred to as the "extraction target structure") specified by a user such as a pathologist. The display image generator 456 generates a display image that represents the extraction target structure in the target specimen S appearing in the VS image using a display method specified by the user. Two display methods are prepared: "highlighted display" and "non-display". The "highlighted display" is a display method that highlights the area of the extraction target structure while leaving the other areas unchanged. The "non-display" is a display method that does not display the extraction target structure. In the first embodiment, a case in which the "highlighted display" is specified as the display method will be described.
The recording unit 47 is realized by various IC memories, such as an updatable flash ROM and a RAM, storage media, such as a hard disk or a CD-ROM installed inside the host system 4 or connected via a data communication terminal, and reading devices therefor. In the recording unit 47, a program for operating the host system 4 and realizing the various functions of the host system 4, data used while the program is being executed, and the like are recorded.
In the recording unit 47, a VS image generation program 471 for causing the processing unit 45 to function as the VS image generator 451 and realizing the VS image generation process, and a VS image display processing program 473 for causing the processing unit 45 to function as the VS image display processing unit 454 and realizing the VS image display process, are recorded. The recording unit 47 also stores structure characteristic information 475, serving as a spectral characteristic recording unit. Further, a VS image file 5 is recorded in the recording unit 47. The details of the structure characteristic information 475 and the VS image file 5 will be described below.
The host system 4 can be realized by a publicly known hardware configuration including a CPU, a video board, a main storage device such as a main memory, an external storage device such as a hard disk and various storage media, a communication device, an output device such as a display device or a printing device, an input device, and an interface device for connecting these units and external inputs. For example, a general-purpose computer such as a workstation or a personal computer can be used as the host system 4.
Next, the VS image generation process and the VS image display process according to the first embodiment will be described in this order. First, the VS image generation process will be described.
As shown in
Next, the low resolution image acquisition processing unit 452 outputs an instruction for switching the filter unit 30 to the empty hole 305 to the microscope controller 33 (step a3). Responding to this, the microscope controller 33 rotates the optical filter switching unit 301 of the filter unit 30 as necessary and places the empty hole 305 in the optical path of the observation light.
Next, the low resolution image acquisition processing unit 452 issues operation instructions for each component of the microscope device 2 to the microscope controller 33 and the TV camera controller 34, and acquires a low resolution image (RGB image) of the specimen image (step a5).
Responding to the operation instruction issued by the low resolution image acquisition processing unit 452 in step a5 in
As shown in
Next, the high resolution image acquisition processing unit 453 outputs an instruction for switching the objective lens 27 used to observe the target specimen S to the high magnification objective lens to the microscope controller 33 (step a9). Responding to this, the microscope controller 33 rotates the revolver 26 and places the high magnification objective lens in the optical path of the observation light.
Next, the high resolution image acquisition processing unit 453 automatically extracts and determines a specimen area 65 where the target specimen S is actually mounted in the specimen search range 61 in
Next, the high resolution image acquisition processing unit 453 cuts out an image of the specimen area (specimen area image) determined in step a11 from the slide specimen whole image, and selects, from the specimen area image, the positions at which the focal position is to be measured, thereby extracting the positions to be focused (step a13).
Next, the high resolution image acquisition processing unit 453 selects the small segments to be used as positions to be focused from the plurality of small segments that have been formed, because measuring the focal point for every small segment would increase the processing time. For example, a predetermined number of small segments are selected at random from the small segments. Alternatively, the small segments used as positions to be focused may be selected in accordance with a predetermined rule, such as selecting one small segment from every predetermined number of small segments. When the number of small segments is small, all of them may be selected as positions to be focused. The high resolution image acquisition processing unit 453 calculates the center coordinates of each selected small segment in the coordinate system (x, y) of the specimen area image 7, and converts the calculated center coordinates into the coordinate system (X, Y) of the electrically driven stage 21 of the microscope device 2 to obtain the position to be focused. The coordinate conversion is performed on the basis of the magnification of the objective lens 27 used to observe the target specimen S, the number of pixels and the pixel size of the image sensor included in the TV camera 32, and the like, and can be realized, for example, by applying the publicly known technique described in Japanese Laid-open Patent Publication No. 09-281405.
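The coordinate conversion described above can be sketched as follows, assuming a purely linear mapping between image pixels and stage coordinates; the function name, the origin parameters, and the sign conventions are illustrative assumptions, not the procedure of the cited publication.

    def image_to_stage(cx_px, cy_px, origin_x_um, origin_y_um,
                       pixel_size_um, magnification):
        # Convert the center coordinates (x, y) of a small segment in the
        # specimen area image into stage coordinates (X, Y) in micrometers.
        # pixel_size_um is the physical pixel pitch of the image sensor;
        # dividing by the objective magnification gives the size of one
        # image pixel projected onto the specimen plane.
        um_per_px = pixel_size_um / magnification
        stage_x = origin_x_um + cx_px * um_per_px
        stage_y = origin_y_um + cy_px * um_per_px
        return stage_x, stage_y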
Next, as shown in
After measuring the focal positions at the positions to be focused as described above, the high resolution image acquisition processing unit 453 creates a focus map on the basis of the measurement results and records the focus map in the recording unit 47 (step a17). Specifically, the high resolution image acquisition processing unit 453 sets focal positions for all the small segments by interpolating the focal positions of the small segments not extracted as positions to be focused in step a13 from the measured focal positions of nearby positions to be focused, and thereby creates the focus map.
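The interpolation used to fill in the focal positions of the unmeasured small segments could, for example, look like the following sketch, which uses inverse-distance weighting over the measured segments; the interpolation scheme itself is an assumption made for illustration.

    import numpy as np

    def build_focus_map(grid_shape, measured):
        # measured: dict mapping the (row, col) of a small segment to its
        # measured focal position Z. Returns a full grid of focal positions,
        # filling unmeasured segments by inverse-distance weighting.
        rows, cols = grid_shape
        pts = np.array(list(measured.keys()), dtype=np.float64)
        zs = np.array(list(measured.values()), dtype=np.float64)
        focus = np.empty(grid_shape, dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                if (r, c) in measured:
                    focus[r, c] = measured[(r, c)]
                    continue
                d2 = (pts[:, 0] - r) ** 2 + (pts[:, 1] - c) ** 2
                w = 1.0 / d2  # no zero distance here: (r, c) is unmeasured
                focus[r, c] = np.sum(w * zs) / np.sum(w)
        return focus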
Next, as shown in
Responding to this, the microscope device 2 rotates the optical filter switching unit 301 of the filter unit 30 and first sequentially captures the specimen image of each small segment of the specimen area image at its focal position with the TV camera 32, while moving the electrically driven stage 21, with the optical filter 303a placed in the optical path of the observation light. Next, the optical filter 303a is switched to the optical filter 303b, the optical filter 303b is placed in the optical path of the observation light, and the microscope device 2 then captures the specimen image of each small segment of the specimen area image in the same way as described above. The image data captured here are output to the host system 4 and acquired by the high resolution image acquisition processing unit 453 as the high resolution images of the specimen image (specimen area segment images).
Next, the high resolution image acquisition processing unit 453 combines the specimen area segment images which are the high resolution images acquired in step a19, and generates one image covering the entire area of the specimen area 65 in
In steps a13 to a21 above, the specimen area image is divided into small segments corresponding to the visual field of the high magnification objective lens, the specimen area segment images are acquired by capturing the specimen image for each small segment, and the VS image is generated by combining the specimen area segment images. Alternatively, the small segments may be set so that adjacent specimen area segment images partially overlap at their borders, and one VS image may be generated by combining the specimen area segment images while adjusting the positional relationship between adjacent images. Specific processing can be realized by applying the publicly known techniques described in Japanese Laid-open Patent Publication Nos. 09-281405 and 2006-343573; in this case, the segment size of the small segment is set smaller than the visual field of the high magnification objective lens so that the edge portions of adjacent specimen area segment images overlap each other. In this way, even when the movement control of the electrically driven stage 21 is not accurate enough for adjacent specimen area segment images to connect seamlessly, a VS image whose joints are connected continuously can be generated by using the overlapping portions.
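As a rough sketch of how the overlapping edges can compensate for stage inaccuracy, the offset between two adjacent segment images can be estimated from their overlap strips, for example by an exhaustive small-shift search minimizing the sum of squared differences; this is a simplified stand-in for the registration techniques of the cited publications.

    import numpy as np

    def estimate_shift(left_strip, right_strip, max_shift=10):
        # Estimate the (dy, dx) offset that best aligns the overlap strip
        # of one tile onto that of its neighbor, by exhaustive search over
        # small shifts using the mean squared difference.
        best, best_err = (0, 0), np.inf
        h, w = left_strip.shape
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                a = left_strip[max(0, dy):h + min(0, dy),
                               max(0, dx):w + min(0, dx)]
                b = right_strip[max(0, -dy):h + min(0, -dy),
                                max(0, -dx):w + min(0, -dx)]
                err = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

The estimated (dy, dx) is then used to correct the paste position of the neighboring tile when combining the segment images into the VS image.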
As a result of the VS image generation process described above, a wide-view, high-resolution multiband image covering the entire area of the target specimen S can be acquired. Here, the processes of steps a1 to a21 are performed automatically. Therefore, a user only has to mount the target specimen S (specifically, the slide glass specimen 6 in
As shown in (b) of
The observation method 511 is an observation method of the microscope device 2 used to generate the VS image, and for example “bright field observation method” is set in the first embodiment. When a microscope device in which a specimen can be observed by another observation method such as dark field observation, fluorescence observation, differential interference observation, and the like is used, the observation method used when the VS image is generated is set.
In the slide specimen number 512, for example, a slide specimen number read from the label 63 of the slide glass specimen 6 shown in
In the standard staining information 514, the type of standard staining performed on the target specimen S is set; in the first embodiment, this is the HE staining. The standard staining information 514 is set when the user manually inputs and specifies the type of standard staining performed on the target specimen S during the VS image display process described below.
The background spectral information 515 records spectral data in the background of the target specimen S. For example, an area not including the target specimen S in the VS image acquired by performing multiband imaging of the specimen search range 61 shown in
In the VS image data 53, various information related to the VS image is set. Specifically, as shown in (a) of
In the imaging information 56, as shown in (c) of
In the imaging magnification of VS image 561, the magnification of the high magnification objective lens used when the VS image is acquired is set. The scan start position (X position) 562, the scan start position (Y position) 563, the number of pixels in the x direction 564, and the number of pixels in the y direction 565 indicate an image capturing range of the VS image. Specifically, the scan start position (X position) 562 is the X position of the scan start position of the electrically driven stage 21 when the image capturing of the specimen area segment images constituting the VS image is started, and the scan start position (Y position) 563 is the Y position from which the scan is started. The number of pixels in the x direction 564 is the number of pixels of the VS image in the x direction, the number of pixels in the y direction 565 is the number of pixels of the VS image in the y direction, and both numbers indicate the size of the VS image.
The number of planes in the Z direction 566 corresponds to the number of sectioning levels in the Z direction, and when generating the VS image as a three-dimensional image, the number of imaging planes in the Z direction is set in the number of planes in the Z direction 566. Hereinafter, “1” is set in the number of planes in the Z direction 566. The VS image is generated as a multiband image. The number of bands of the multiband image is set in the number of bands 567, and “6” is set in the first embodiment.
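For illustration only, the imaging information 56 might be held in a record like the following; the field names mirror the items 561 to 567 above, but the actual layout of the VS image file is not specified here.

    from dataclasses import dataclass

    @dataclass
    class ImagingInformation:
        # Illustrative layout of the imaging information 56.
        imaging_magnification: float  # magnification of the high magnification objective (561)
        scan_start_x: float           # stage X position at the start of the scan (562)
        scan_start_y: float           # stage Y position at the start of the scan (563)
        pixels_x: int                 # number of pixels of the VS image in x (564)
        pixels_y: int                 # number of pixels of the VS image in y (565)
        z_planes: int = 1             # number of imaging planes in Z (566); "1" here
        bands: int = 6                # number of bands of the multiband image (567)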
The focus map data 57 shown in (b) of
Next, the VS image display process will be described.
As shown in
The spin box SB111 presents a list of structures that can be specified as the extraction target structure as options, and prompts the user to specify an extraction target structure. The structures presented include collagen fiber, elastic fiber, smooth muscle, and the like; however, the options are not limited to these examples and can be set as necessary. The spin box SB113 presents "highlight" and "non-display" as options, and prompts the user to specify the display method.
The spin box SB115 presents a list of colors prepared in advance as display colors/check colors, and prompts the user to specify a display color or a check color. Specifically, when "highlight" is specified in the corresponding spin box SB113, the display color for highlighting is specified in the spin box SB115. On the other hand, when "non-display" is specified in the corresponding spin box SB113, the check color for the non-display is specified in the spin box SB115. Here, the check color is the display color used when the area of the extraction target structure extracted from the VS image is displayed for checking. Specifically, as explained in a fourth embodiment described below, when "non-display" is specified as the display method, a display image in which the extraction target structure is not displayed is generated; however, in order to check for detection errors and the like, the extraction result is temporarily displayed and presented to the user. When the user specifies "non-display" in the spin box SB113, the user specifies the check color, i.e., the display color of the extraction target structure at that time, in the spin box SB115. The colors prepared as display colors/check colors are not particularly limited; for example, colors such as brown, green, and black may be set as necessary.
In the structure specifying screen, a spin box SB13 for specifying the type of standard staining actually applied to the target specimen S is arranged. The spin box SB13 presents a list of standard stainings as options, and prompts the user to specify the type of standard staining. The standard stainings presented include, for example, HE staining and Pap staining, which are morphological observation stainings. However, the options are not limited to these examples and can be set as necessary.
In the structure specifying screen, for example, the user specifies a desired structure as the extraction target structure in the uppermost spin box SB111, specifies the display method in the corresponding spin box SB113, and specifies the display color or the check color in the corresponding spin box SB115. When there are two or more structures to be extracted, they are specified in the lower spin boxes SB111, SB113, and SB115. In the spin box SB13, the standard staining performed on the target specimen S is specified. The specified type of the extraction target structure, its display method, the display color/check color, and the type of the standard staining are recorded in the recording unit 47 and used in later processing. Among this information, the type of the standard staining is set in the VS image file 5 as the standard staining information 514 (refer to
Next, as shown in
Here, the definition of a structure will be described. Before a structure is defined, one or more specimens including the structure are prepared in advance, and a plurality (N) of spectral data are measured. Specifically, for example, the prepared specimens are imaged by multiband imaging in the microscope system 1. Then, for example, N pixel positions are selected from an area covering the structure in accordance with a user operation, and measurement values for each wavelength λ (the pixel values of each band at the selected pixel positions) are obtained.
As the data space for defining the characteristic of a structure, an absorbance space is used, formed by converting the measurement values for each wavelength λ, measured in advance as described above, into spectral absorbances. When the intensity of the incoming light at each wavelength λ is I0(λ) and the intensity of the outgoing light (i.e., the measurement value) at each wavelength λ is I(λ), the spectral absorbance g(λ) is represented by the following equation (1):

g(λ) = −log( I(λ) / I0(λ) )   (1)

As the incoming light intensity I0(λ), for example, the outgoing light intensity I(λ) at the background of the specimen from which the measurement values are obtained (i.e., the pixel values of each band in the background of the multiband image obtained from the specimen) is used.
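A minimal sketch of this conversion, assuming a base-10 logarithm and per-band background intensities taken from the background spectral information 515:

    import numpy as np

    def to_absorbance(intensity, background, eps=1e-6):
        # Convert measured per-band intensities I(lambda) into spectral
        # absorbances g(lambda) = -log10(I / I0) per equation (1), using
        # the background intensities I0(lambda). eps guards against
        # division by zero and log of zero.
        i = np.maximum(np.asarray(intensity, dtype=np.float64), eps)
        i0 = np.maximum(np.asarray(background, dtype=np.float64), eps)
        return -np.log10(i / i0)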
In the first embodiment, for example, a principal component analysis is performed on the spectral absorbances of the N measurement values in the absorbance space, g1(λ), g2(λ), . . . , gN(λ), and regression equations for obtaining the first to p-th principal components are calculated. Here, p is the number of bands, which is "6" in the example of the first embodiment.
Next, the number of components k at which the cumulative contribution ratio reaches a predetermined value (for example, "0.8") or more is determined. In the principal component analysis, the main characteristic of the structure is captured by the first to k-th principal components (hereinafter collectively and simply referred to as the "principal components"). Meanwhile, the (k+1)-th to p-th principal components (hereinafter collectively and simply referred to as the "residual components") contribute little to the characteristic of the structure.
A statistic of the residual components obtained for each measurement value as described above is calculated. For example, the sum of squares of the residual components (the (k+1)-th to p-th principal components) is calculated as the statistic. The sum of squares may also be obtained after weighting each of the residual components with predetermined weights. The statistic is not limited to the sum of squares, and a different statistic may be used.
Data of the regression equations for obtaining the principal components (the first to p-th principal components), the number of components k determining the principal components, the residual components of each of the N measurement values (hereinafter referred to as "base residual components"), and the statistic of the base residual components for each measurement value (here, the sum of squares; hereinafter referred to as the "base residual component statistic") are obtained as the characteristic information on the structure to be defined.
The above processing is performed on each of the structures, the characteristic information of all the structures that can be selected as the extraction target structure is defined, and the characteristic information is recorded in the recording unit 47 as the structure characteristic information 475.
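The definition procedure above can be illustrated with the following sketch: a principal component analysis of the N absorbance spectra, selection of k from the cumulative contribution ratio, and computation of the base residual component statistics. Treating the residual components as projections onto the (k+1)-th to p-th principal axes, and using an SVD of the mean-centered data for the analysis, are implementation assumptions.

    import numpy as np

    def define_structure(absorbances, cum_ratio=0.8):
        # absorbances: (N, p) array of spectral absorbances g_1..g_N,
        # with N >= p assumed. Returns the characteristic information:
        # mean spectrum, principal axes, k, and the base residual
        # component statistics (sums of squares).
        g = np.asarray(absorbances, dtype=np.float64)
        mean = g.mean(axis=0)
        centered = g - mean
        # Principal component analysis via SVD of the centered data.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        var = s ** 2
        ratio = np.cumsum(var) / np.sum(var)
        k = int(np.searchsorted(ratio, cum_ratio) + 1)  # first k reaching 0.8
        # Residual components: projections onto the (k+1)-th..p-th axes.
        residuals = centered @ vt[k:].T                 # shape (N, p - k)
        base_stats = np.sum(residuals ** 2, axis=1)     # one statistic per sample
        return {"mean": mean, "axes": vt, "k": k, "base_stats": base_stats}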
In the structure extraction process performed in step b7 in
As shown in
Specifically, the structure extraction unit 455 first determines whether or not the processing target pixel belongs to the extraction target structure (step c5). In the specific processing procedure, the structure extraction unit 455 first converts the pixel values for each wavelength λ (for each band) of the processing target pixel into spectral absorbances by using equation (1) described above. At this time, as the intensity of the incoming light I0(λ), the spectral data of the background of the target specimen S recorded as the background spectral information 515 (refer to
When the structure extraction unit 455 determines that the processing target pixel is a pixel of the extraction target structure in the manner described above (step c7: Yes), the structure extraction unit 455 extracts the processing target pixel as part of the area of the extraction target structure (step c9), and then ends the processing of loop A for that pixel. When the structure extraction unit 455 determines that the processing target pixel is not a pixel of the extraction target structure (step c7: No), the structure extraction unit 455 ends the processing of loop A for that pixel without doing anything.
When the structure extraction unit 455 has performed the processing of loop A on all the pixels included in the VS image as processing targets, the structure extraction unit 455 creates an extraction target map in which whether each pixel is the extraction target structure or not is set (step c13). Data of the extraction target map is recorded in the recording unit 47. Thereafter, the process returns to step b7 in
Although all the pixels included in the VS image are determined here as to whether or not they belong to the extraction target structure, the determination may be made only for pixels in a predetermined area of the VS image in order to shorten the processing time. For example, the pixels in the specimen area determined in step a11 in
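Building on the two sketches above, the per-pixel determination of loop A might look as follows; the acceptance rule (comparing each pixel's residual statistic against the base residual component statistics scaled by a tolerance) is an illustrative assumption about the "predetermined range".

    import numpy as np

    def extract_structure(vs_image, background, info, tol=1.2):
        # vs_image: (H, W, p) multiband image; background: (p,) background
        # intensities from the background spectral information 515;
        # info: characteristic information from define_structure above.
        # Returns an (H, W) boolean extraction target map.
        h, w, p = vs_image.shape
        g = to_absorbance(vs_image.reshape(-1, p), background)  # equation (1)
        centered = g - info["mean"]
        residuals = centered @ info["axes"][info["k"]:].T
        stats = np.sum(residuals ** 2, axis=1)
        # Accept pixels whose residual statistic lies within the range
        # spanned by the base residual component statistics (tolerance tol).
        limit = tol * np.max(info["base_stats"])
        return (stats <= limit).reshape(h, w)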
Thereafter, as shown in
Next, the display image generator 456 refers to the extraction target map created in step c13 in
As shown in
As described above, according to the first embodiment, it is possible to highlight the area of the extraction target structure by image processing without actually performing a special staining as shown in
In the first embodiment, the residual components are obtained for each pixel in the VS image on the basis of the characteristic information defined in advance for the extraction target structure, and a pixel whose statistic of the residual components is within a predetermined range of the base residual component statistic is extracted as the extraction target structure. In a second embodiment, in contrast, a display characteristic value (for example, saturation or brightness) is corrected when the extraction target structure is displayed in the specified color, on the basis of the difference between the statistic of the residual components and the base residual component statistic (hereinafter referred to as the "residual difference value"). Here, the residual difference value corresponds to the accuracy of structure extraction (structure extraction accuracy) of each pixel included in the VS image.
A VS image display processing unit 454a in the processing unit 45a includes the structure extraction unit 455 and a display image generator 456a that includes a structure display characteristic correction unit 457a. The structure display characteristic correction unit 457a performs processing for correcting a display characteristic of each pixel determined by the structure extraction unit 455 to belong to the extraction target structure. In the recording unit 47a, a VS image display processing program 473a for causing the processing unit 45a to function as the VS image display processing unit 454a, and the like, are recorded.
In the second embodiment, as shown in
Next, the display image generator 456a performs display image generation process (step d9). Thereafter, in the same way as in the first embodiment, the VS image display processing unit 454a performs process for displaying the display image generated in step d9 on the display unit 43 (step b11).
Here, the display image generation process in step d9 will be described. In the display image generation process, in the same manner as in the first embodiment, the display change process is performed on the area covering the extraction target structure in accordance with the specified display method, and a display image in which the extraction target structure in the target specimen S appearing in the VS image is represented by the specified display method is generated. At this time, the display characteristic value of each pixel determined to belong to the extraction target structure is corrected on the basis of the residual difference value obtained in the structure extraction process.
As shown in
In the second embodiment, a case in which "highlight" is specified as the display method of the extraction target structure is considered, and the process moves to step e5 when the display method is "highlight". The display image generator 456a first synthesizes an RGB image from the VS image by using the spectral sensitivities of the R, G, and B bands (step e5). Next, while sequentially targeting each pixel extracted as the extraction target structure in step b7 in
In loop B, the display image generator 456a first replaces the pixel value of the processing target pixel with the specified display color (step e9). Next, the display image generator 456a corrects a predetermined display characteristic value of the processing target pixel by reflecting in it the residual difference value obtained for that pixel (step e11).
For example, a look-up table (hereinafter abbreviated as "LUT") defining the relationship between the residual difference value and the predetermined display characteristic value in which the residual difference value is reflected is created in advance and recorded in the recording unit 47a, and the display characteristic value of the processing target pixel is corrected by referring to the LUT. Display characteristic values in which the residual difference value can be reflected include, for example, saturation and brightness.
When applying the LUT of
When applying the LUT of
The LUT shown in
When the predetermined display characteristic value of the processing target pixel has been corrected in the manner described above, the display image generator 456a ends the processing of loop B for the processing target pixel. When the display image generator 456a has completed the processing of loop B for all the processing target pixels, that is, all the pixels included in the area of the extraction target structure, the process returns to step d9 in
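Steps e5 to e11 can be sketched as follows, assuming the residual difference values have been normalized to [0, 1] and using a simple linear falloff in place of the recorded LUT; the correction is applied to saturation and brightness via an HSV conversion, which is one possible realization.

    import colorsys
    import numpy as np

    def apply_highlight(rgb, mask, residual_diff, display_color):
        # rgb: (H, W, 3) float image in [0, 1]; mask: (H, W) bool map of
        # the extraction target structure; residual_diff: (H, W) residual
        # difference values normalized to [0, 1]; display_color: (r, g, b)
        # in [0, 1]. Pixels in the mask are replaced by the display color,
        # with saturation and brightness lowered as the residual difference
        # (i.e., the extraction uncertainty) grows.
        out = rgb.copy()
        hch, sch, vch = colorsys.rgb_to_hsv(*display_color)
        ys, xs = np.nonzero(mask)
        for y, x in zip(ys, xs):
            factor = 1.0 - residual_diff[y, x]  # LUT stand-in: linear falloff
            out[y, x] = colorsys.hsv_to_rgb(hch, sch * factor, vch * factor)
        return out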
Even among pixels determined to belong to the extraction target structure by the threshold processing that the structure extraction unit 455 performs on the statistic of the residual components, the accuracy (structure extraction accuracy) with which a pixel actually belongs to the extraction target structure differs depending on the residual components. Specifically, the smaller the residual components, the higher the possibility that the pixel belongs to the extraction target structure; the larger the residual components, the lower that possibility.
According to the second embodiment, a residual difference value (the difference between the statistic of the residual components obtained for each pixel of the VS image by the structure extraction unit 455 and the base residual component statistic) can be calculated for each pixel determined to belong to the extraction target structure. When the pixels of the extraction target structure are highlighted by replacing their pixel values with the specified display color, the residual difference value of each pixel can be reflected in a predetermined display characteristic value of the pixel. For example, the display can be such that the smaller the residual difference value, and hence the higher the possibility that the pixel belongs to the extraction target structure, the higher the saturation and brightness of the pixel; and the larger the residual difference value, and hence the lower that possibility, the lower the saturation and brightness.
Therefore, the same effects as those of the first embodiment can be obtained, and in addition, the structure extraction accuracy, i.e., whether or not each pixel belongs to the extraction target structure, can be presented visually. A user can easily grasp the structure extraction accuracy from the display characteristic values (for example, the brightness or the vividness of the color) of the pixels extracted as the extraction target structure and highlighted with the specified display color, and can perform observation while keeping the structure extraction accuracy in mind. As a result, diagnostic accuracy can be further improved.
Although a case is described in the second embodiment in which saturation or brightness is corrected on the basis of the residual difference value of each pixel determined to belong to the extraction target structure, the display characteristic value in which the residual difference value is reflected is not limited to saturation and brightness. Nor does the residual difference value need to be reflected in only one display characteristic value; it may be reflected in a plurality of display characteristic values. For example, the residual difference value may be reflected in both saturation and brightness to correct them together.
In the first embodiment and the like, the residual components are obtained for each pixel in the VS image on the basis of the characteristic information defined in advance for the extraction target structure, and a pixel whose statistic of the residual components is within a predetermined range of the base residual component statistic is extracted as the extraction target structure. However, the characteristic of a structure may vary with individual differences between the specimens to be observed or diagnosed. For example, the characteristic of a structure varies depending on the fixing conditions for fixing the tissue in the specimen and the staining conditions for staining the specimen (staining time, concentration of the staining fluid, and the like). As a result, pixels in an area that is not the extraction target structure may be erroneously extracted, and conversely, pixels that belong to the extraction target structure may fail to be extracted. A third embodiment modifies the extraction result of the extraction target structure in accordance with user operations.
A VS image display processing unit 454b of the processing unit 45b includes a structure extraction unit 455b and a display image generator 456b. The structure extraction unit 455b includes a modification spectrum registration unit 458b serving as an exclusion target specifying unit, an exclusion spectrum setting unit, an additional target specifying unit, and an additional spectrum setting unit; an exclusion target extraction unit 459b; and an additional target extraction unit 460b. The modification spectrum registration unit 458b performs processing for registering exclusion spectrum information or additional spectrum information in accordance with a user operation. The exclusion target extraction unit 459b performs processing for extracting pixels to be excluded from the area of the extraction target structure on the basis of the exclusion spectrum information. The additional target extraction unit 460b performs processing for extracting pixels to be added to the area of the extraction target structure on the basis of the additional spectrum information. In the recording unit 47b, a VS image display processing program 473b for causing the processing unit 45b to function as the VS image display processing unit 454b, and the like, are recorded.
In the third embodiment, as shown in
If the modification instruction operation is input via the input unit 41 (step f13: Yes), the modification spectrum registration unit 458b specifies a pixel to be excluded or a pixel to be added in accordance with a user operation. For example, the modification spectrum registration unit 458b specifies a pixel to be excluded by receiving a selection operation of the pixel position of the pixel to be excluded on the display screen displayed in step b11, or specifies a pixel to be added by receiving a selection operation of the pixel position of the pixel to be added on the display screen.
When the pixel to be excluded is specified (step f15: Yes), the modification spectrum registration unit 458b reads pixel values for each band (each wavelength λ) of the specified pixel from the image data 58 (refer to
Thereafter, until the operation is fixed (step f23: No), the process returns to step f15. When the operation is fixed (step f23: Yes), first, the exclusion target extraction unit 459b extracts the pixel to be excluded from the area of the extraction target structure on the basis of the exclusion spectrum information registered in step f17, and creates an exclusion target map (step f25). Specifically, first, the exclusion target extraction unit 459b refers to the extraction target map created in the structure extraction process in step b7, and reads pixels extracted as the area of the extraction target structure. While sequentially targeting each of the read pixels, the exclusion target extraction unit 459b sequentially determines whether or not each pixel is excluded from the extraction target structure on the basis of the exclusion spectrum information.
For example, the exclusion target extraction unit 459b compares the pixel values for each band (for each wavelength λ) of the processing target pixel with the exclusion spectrum information, obtains the differences between them for each wavelength λ, and calculates the sum of squares of the obtained differences. The exclusion target extraction unit 459b performs threshold processing on the calculated value by using a predetermined threshold value set in advance, and, for example, determines that the processing target pixel is to be excluded from the area of the extraction target structure when the calculated value is smaller than the threshold value. Here, when a plurality of exclusion spectrum information items are registered, one of them may be selected as a representative value and the processing target pixel determined to be excluded when the sum of squares of the differences from the representative value is smaller than the threshold value; alternatively, the processing target pixel may be determined to be excluded when the sum of squares of the differences from every one of the exclusion spectrum information items is smaller than the threshold value. The threshold value used in the threshold processing may be a predetermined fixed value or, for example, a value that can be changed by a user operation.
The exclusion target extraction unit 459b creates an exclusion target map in which the determination result indicating whether or not each pixel is excluded from the area of the extraction target structure is set. In the processing here, the exclusion target map is created by changing to "0" the values at those pixel positions, among the pixel positions at which "1" is set in the extraction target map, that are determined in the above processing to be excluded from the area of the extraction target structure.
Next, the additional target extraction unit 460b extracts the pixels to be added to the area of the extraction target structure on the basis of the additional spectrum information registered in step f21, and creates an additional target map (step f27). Specifically, the additional target extraction unit 460b first refers to the extraction target map and reads the pixels not extracted as the area of the extraction target structure. While sequentially targeting each of the read pixels, the additional target extraction unit 460b sequentially determines whether or not each pixel is to be added to the extraction target structure on the basis of the additional spectrum information.
For example, in the same way as the exclusion target extraction unit 459b, the additional target extraction unit 460b compares the pixel values for each band (for each wavelength λ) of the processing target pixel and the additional spectrum information, obtains differences between the pixel values and the additional spectrum information for each wavelength λ, and calculates the sum of squares of the obtained differences. The additional target extraction unit 460b performs threshold processing on the calculated value by using a predetermined threshold value set in advance, and for example, extracts the processing target pixel by determining that the processing target pixel is added as the area of the extraction target structure when the calculated value is smaller than the threshold value.
The additional target extraction unit 460b creates an additional target map in which the determination result indicating whether or not pixels are added as the area of the extraction target structure is set. In the processing here, the additional target map is created in which the values at the pixel positions which are determined to be added to the area of the extraction target structure in the above processing are changed to “1”.
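Steps f25 and f27, together with the combination performed in step f29, can be sketched as follows; matching each pixel against every registered spectrum independently is one of the two options described above, and the names and threshold handling are illustrative.

    import numpy as np

    def matches_any(pixel, spectra, threshold):
        # True if the pixel's per-band values lie within the threshold
        # (sum of squared differences) of any registered spectrum.
        for s in spectra:
            if np.sum((pixel - s) ** 2) < threshold:
                return True
        return False

    def modify_extraction(vs_image, target_map, excl_spectra, add_spectra,
                          threshold):
        # Create the exclusion and additional target maps, then combine
        # them with the extraction target map into the modified result.
        h, w, _ = vs_image.shape
        exclusion = np.zeros((h, w), dtype=bool)
        addition = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                px = vs_image[y, x].astype(np.float64)
                if target_map[y, x]:
                    exclusion[y, x] = matches_any(px, excl_spectra, threshold)
                else:
                    addition[y, x] = matches_any(px, add_spectra, threshold)
        # Final area: originally extracted pixels not excluded, plus additions.
        return (target_map & ~exclusion) | addition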
Here, the pixel values for each band of the pixel to be excluded, which is specified in accordance with a user operation, are used as the exclusion spectrum information, and the extraction result of the extraction target structure is modified by using the exclusion spectrum information. Similarly, the pixel values for each band of the pixel to be added, which is specified in accordance with a user operation, are used as the additional spectrum information, and the extraction result is modified by using the additional spectrum information. Alternatively, the pixel values of the pixel to be excluded or the pixel to be added may first be converted into spectral absorbances, and the extraction result of the extraction target structure may be modified by performing the threshold processing on the spectral absorbances in the absorbance space.
In this case, for example, the modification spectrum registration unit 458b converts the pixel values for each band of a pixel specified as the pixel to be excluded or the pixel to be added into spectral absorbances on the basis of the background spectral information 515, and registers the spectral absorbances as the exclusion spectrum information or the additional spectrum information.
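As a sketch of the conversion into the absorbance space, assuming the background spectral information 515 provides a no-specimen (background) intensity for each band, the absorbance at each wavelength is −log of the transmittance; the function and variable names below are illustrative.

```python
import numpy as np

def to_spectral_absorbance(pixel_values, background_spectrum, eps=1e-12):
    """Convert per-band pixel values to spectral absorbance a(λ) = -log t(λ),
    where t(λ) is the ratio of the pixel value to the background intensity."""
    t = np.asarray(pixel_values, float) / np.asarray(background_spectrum, float)
    return -np.log(np.clip(t, eps, None))   # clip avoids log(0) for dark pixels
```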
When the additional target map is created in the manner described above, next, the display image generator 456b modifies the extraction result of the extraction target structure on the basis of the extraction target map, the exclusion target map created in step f25, and the additional target map created in step f27, and generates a display image representing the modified extraction target structure in the target specimen S by the specified display method (step f29).
The VS image display processing unit 454b then performs process for displaying the generated display image on the display unit 43 (step f31). Thereafter, the process moves to step f33, and until the display of the display image ends (step f33: No), the process returns to step f13. When, for example, a display end operation is inputted (step f33: Yes), the display ends and the process ends.
Next, an operation example when modifying the extraction result of the extraction target structure will be described.
For example, when a user determines that the extraction result of the extraction target structure is obviously excessive in the display image, the user performs a selection operation of the pixel positions to be excluded by using the mouse included in the input unit 41. The selection operation of pixel positions may be an operation to select the pixel positions desired to be excluded one by one, or may be an operation to select an area including the pixel positions desired to be excluded. Here, the operation to select an area will be described as an example. Specifically, first, the user selects an area including the pixel positions desired to be excluded (for example, an area A71 enclosed by a thick line) with the mouse, and, for example, selects “Exclude” from a selection menu displayed for the selected area.
On the other hand, when the user determines that there is an extraction omission in the display image, the user performs a selection operation of the pixel positions to be added in a similar manner. When “Add” is selected from the selection menu, an extraction target spectrum addition screen is displayed.
The extraction target spectrum addition screen includes an additional target pixel selection button B81 for selecting the additional target pixel on the selected partial image in the selection area display unit W81, a non-additional target pixel selection button B82 for selecting the non-additional target pixel on the selected partial image, an OK button B83 for fixing the selection operation of the additional target pixel and/or the non-additional target pixel, and a fix button B85 for fixing the selection operation of the pixel positions to be added.
In the extraction target spectrum addition screen, the user, for example, clicks the additional target pixel selection button B81 with the mouse and, while the selection of the additional target pixel is instructed, clicks a pixel position desired to be the additional target pixel on the selected partial image in the selection area display unit W81. A marker is placed on the clicked pixel position. The number of pixel positions to be selected may be one or more. In the example described here, markers M811 to M813 are placed at the pixel positions selected as additional target pixels, and markers M821 and M822 are placed at the pixel positions selected as non-additional target pixels in the same manner by using the non-additional target pixel selection button B82.
As internal process at this time, for example, process for binarizing the selected partial image is performed. Specifically, pixels having pixel values similar to those at the pixel positions of the markers M811 to M813 and pixels having pixel values similar to those at the pixel positions of the markers M821 and M822 are extracted, and the pixels in the selected partial image are divided into additional target pixels and non-additional target pixels, thereby binarizing the selected partial image. Then, the binarization result is displayed in the selection area display unit W81.
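The description does not fix the algorithm for this similarity-based division; a minimal sketch is a nearest-centroid rule over the mean marker spectra, with illustrative names and array layout.

```python
import numpy as np

def binarize_selected_image(image, add_positions, non_add_positions):
    """Assign each pixel of the selected partial image to the closer of the two
    mean marker spectra: 1 = additional target pixel, 0 = non-additional."""
    add_ref = np.mean([image[y, x, :] for (y, x) in add_positions], axis=0)
    non_ref = np.mean([image[y, x, :] for (y, x) in non_add_positions], axis=0)
    h, w, b = image.shape
    flat = image.reshape(h * w, b).astype(float)
    d_add = np.sum((flat - add_ref) ** 2, axis=1)    # distance to "add" spectrum
    d_non = np.sum((flat - non_ref) ** 2, axis=1)    # distance to "do not add" spectrum
    return (d_add < d_non).reshape(h, w).astype(np.uint8)
```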
When the user clicks a pixel position on the display image or the selected partial image, it is possible to read the corresponding pixel values of the VS image, display the spectrum information in a graph or the like, and present it to the user so as to support the selection operation of the pixel positions to be excluded or added. Although the selection operation of the pixel positions to be excluded and the selection operation of the pixel positions to be added are described individually here, these selection operations may be performed on the same screen. For example, on the same screen on which the display image is displayed, it is possible to receive the selection operation of the pixel positions to be excluded, the selection operation of the pixel positions to be added, and the selection operation of the pixel positions not to be excluded or added, and to specify the pixels to be excluded and the pixels to be added depending on the content of the received user operation.
As described above, according to the third embodiment, the same effects as those of the first embodiment can be produced, and further it is possible to modify the result of the extraction of the extraction target structure which is performed by using, as teacher data, the characteristic information of the extraction target structure recorded in advance as the structure characteristic information 475 in the recording unit 47b. Specifically, pixels to be excluded are specified in accordance with a user operation, and the exclusion spectrum information is registered. Then, on the basis of the exclusion spectrum information, the extraction result can be modified by determining, for each pixel extracted as the area of the extraction target structure, whether or not the pixel is excluded from the extraction target structure. Alternatively, pixels to be added are specified in accordance with a user operation, and the additional spectrum information is registered. Then, on the basis of the additional spectrum information, the extraction result can be modified by determining, for each pixel in the areas other than the extraction target structure, whether or not the pixel is added to the area of the extraction target structure. Therefore, even when the extraction target structure is not properly extracted due to individual differences between target specimens S or the like, the extraction result can be modified in accordance with a user operation, so that the extraction accuracy of the extraction target structure can be improved. At this time, the exclusion target map and the additional target map can be created on the basis of the extraction target map, so that it is not necessary to perform processing on all the pixels in the VS image. Since the extraction target map itself is not changed, it is easy to cancel the selection operation of the pixel positions to be excluded or added and restore the original state.
The method for modifying the extraction result in accordance with a user operation is not limited to the method described above. For example, it is possible to modify the extraction result by using a classifier such as a support vector machine (SVM). Specifically, for example, when specifying an additional target and adding pixels to the area of the extraction target structure, a learning identification process that uses, as teacher data, the pixel values of the pixel positions selected on the selected partial image displayed in the selection area display unit W81 (the pixel positions at which the markers M811 to M813 are arranged) may be performed to determine, for each pixel, whether or not the pixel belongs to the area of the extraction target structure.
Alternatively, the learning identification process may be repeatedly performed, while adjusting the threshold value used to determine whether or not a pixel is a pixel of the extraction target structure by, for example, a predetermined amount, until the user determines that there are no pixels excessively extracted as the area of the extraction target structure and no extraction omission pixels that should have been included in the area of the extraction target structure. The area of the extraction target structure in the VS image may be extracted by this learning identification process.
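As a sketch of such a learning identification process, assuming scikit-learn is available, an SVM can be trained on the user-selected spectra and then applied to every pixel; the library choice, function names, and the binary labeling are illustrative, not fixed by this description.

```python
import numpy as np
from sklearn.svm import SVC

def classify_pixels_with_svm(vs_image, target_positions, non_target_positions):
    """Train an SVM on user-selected target/non-target spectra, then predict
    for every pixel whether it belongs to the extraction target structure."""
    X = np.array([vs_image[y, x, :] for (y, x) in target_positions + non_target_positions],
                 dtype=float)
    labels = np.array([1] * len(target_positions) + [0] * len(non_target_positions))
    clf = SVC(kernel="rbf").fit(X, labels)
    h, w, b = vs_image.shape
    pred = clf.predict(vs_image.reshape(h * w, b).astype(float))
    return pred.reshape(h, w)   # 1 = pixel of the extraction target structure
```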
In the first embodiment or the like, a case in which the “highlight” is specified as the display method is described. On the other hand, in a fourth embodiment, a case in which the “non-display” is specified as the display method will be described. A device configuration according to the fourth embodiment can be realized by a similar configuration to the configuration of the microscope device 2 and the host system 4 according to the first embodiment, and the same reference numerals are given in the description below.
In the fourth embodiment, a structure desired not to be displayed is specified as the extraction target structure in step b1 described above, and the “non-display” is specified as the display method thereof.
Structures that are desired not to be displayed during observation/diagnosis of the VS image include, for example, the neutrophil, which is an inflammatory cell. This is because the neutrophil is stained navy blue by dye H in a specimen on which the HE staining is performed, and when a neutrophil lies on a structure desired to be observed, the visibility of the structure to be observed/diagnosed deteriorates and the diagnosis may be hindered.
Therefore, in the fourth embodiment, the display image generator 456 performs the non-display process as the process in step b9 described above.
In the non-display process, the display image generator 456 first generates a check image in which the area of the extraction target structure extracted from the VS image is represented by a check color (step g3).
Then, the display image generator 456 performs process for displaying the check image generated in step g3 on the display unit 43 (step g5), and thereafter waits in a stand-by state until a check operation is received (step g7: No).
When the display image generator 456 receives the check operation of a user (step g7: Yes), as a spectral component amount reception unit, the display image generator 456 estimates dye amounts at a corresponding specimen position on the target specimen S on the basis of pixel values for each band for each pixel in the area of the extraction target structure in the VS image in accordance with the extraction target map (step g9).
Processing procedure will be briefly described. First, the display image generator 456 estimates a spectrum (estimated spectrum) at the corresponding specimen position on the target specimen S for each pixel on the basis of the pixel values in the VS image, as described for the process in step c5 above.
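The spectrum estimation itself is described in step c5 and is not repeated here; as a sketch, assuming a precomputed linear estimation matrix W (for example, a Wiener estimation matrix, a common choice for multiband spectral estimation), the estimated spectrum is a matrix-vector product. The names and the linear-estimator assumption are illustrative.

```python
import numpy as np

def estimate_spectrum(W, g):
    """Estimate the spectral transmittance at a specimen position from the
    B-band pixel value vector g, using an L x B linear estimation matrix W."""
    return np.asarray(W, float) @ np.asarray(g, float)   # length-L estimated spectrum
```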
The estimation of dye amount can be performed by, for example, applying the publicly known technique described in Japanese Laid-open Patent Publication No. 2008-51654 mentioned in Description of the Related Art. Here, the estimation of dye amount will be briefly described. It is known that, generally, a material that transmits light follows the Lambert-Beer law, which gives the relationship of equation (2) below between the intensity of incoming light I0(λ) and the intensity of outgoing light I(λ) for each wavelength λ.

I(λ)/I0(λ) = e^(−k(λ)·d) (2)

k(λ) represents a value which is unique to the material and determined depending on the wavelength, and d represents the depth (thickness) of the material. The left-hand side of equation (2) indicates the spectral transmittance t(λ).
For example, when the specimen is stained by n types of dyes dye 1, dye 2, . . . , dye n, the following equation (3) is established for each wavelength λ by the Lambert-Beer law.

I(λ)/I0(λ) = e^(−(k1(λ)·d1 + k2(λ)·d2 + . . . + kn(λ)·dn)) (3)
k1(λ), k2(λ), . . . , kn(λ) respectively represent k(λ) corresponding to dye 1, dye 2, . . . , dye n, and are, for example, the reference dye spectra of the dyes which stain the specimen. d1, d2, . . . , dn represent virtual thicknesses of dye 1, dye 2, . . . , dye n at the specimen position on the target specimen S corresponding to each image position of the multiband image. Strictly speaking, dyes are distributed throughout a specimen, so the concept of thickness is not exact; however, the thickness serves as a relative indicator of how much dye is contained, compared with a case in which the specimen is assumed to be stained with a single dye. In other words, d1, d2, . . . , dn respectively represent the dye amounts of dye 1, dye 2, . . . , dye n. k1(λ), k2(λ), . . . , kn(λ) can be easily obtained from the Lambert-Beer law by preparing, in advance, specimens each stained with a single one of dye 1, dye 2, . . . , dye n and measuring their spectral transmittances with a spectrometer.
In the fourth embodiment, a specimen on which the HE staining is performed is used as the target specimen S; thus, for example, hematoxylin (dye H) is assigned to dye 1 and eosin (dye E) is assigned to dye 2. When a specimen on which the Pap staining is performed is used as the target specimen S, the dyes used in the Pap staining may be assigned. In addition to the absorbing components of these dyes, the target specimen S may contain a tissue, such as a red blood cell, that has an absorbing component even without staining. Specifically, the red blood cell has a color unique to itself even when it is not stained, and it is observed as its own color after the HE staining is performed, or in a state in which the color of eosin applied in the staining process is superimposed on the color of the red blood cell itself. The absorbing component of the red blood cell (hereinafter, dye R) is assigned to dye 3. In the fourth embodiment, for each structure which can be specified as the extraction target structure, the dye information of the extraction target structure in a specimen on which the HE staining is performed (the colors in which hematoxylin and eosin are superimposed on the extraction target structure) is modeled, and its reference dye spectrum k(λ) is determined in advance. The dyes modeled for the specified extraction target structures are assigned to dye 4 and the following dyes. When one extraction target structure not to be displayed is specified as in this example, the dye information of that extraction target structure is assigned to dye 4.
The dye amounts of the dyes 1 to 4 described above actually correspond to the component amounts of predetermined spectral components of the spectrum (here, the estimated spectrum) at each specimen position on the target specimen S. Specifically, in the above example, the spectrum at each specimen position on the target specimen S includes four spectral components of dye H, dye E, dye R, and the extraction target structure; these spectral components are respectively referred to as the reference dye spectra kn(λ) (n = 1 to 4) of the dyes 1 to 4, and their component amounts are referred to as dye amounts.
When taking the logarithm of both sides of equation (3), the following equation (4) is obtained.

−log t(λ) = k1(λ)·d1 + k2(λ)·d2 + . . . + kn(λ)·dn (4)
When the element corresponding to the wavelength λ of the estimated spectrum estimated for each pixel of the VS image is denoted by t̂(x, λ), where the hat indicates an estimated value, and this is substituted into equation (4), the following equation (5) is obtained.
−log t̂(x, λ) = k1(λ)·d1 + k2(λ)·d2 + . . . + kn(λ)·dn (5)
There are n unknown variables d1, d2, . . . , dn in equation (5). Hence, when equation (5) is written for at least n different wavelengths λ, the resulting simultaneous equations can be solved. To further improve accuracy, equation (5) may be written for more than n different wavelengths λ and a multiple regression analysis may be performed.
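A minimal sketch of this multiple regression, assuming the reference dye spectra are sampled at L wavelengths and stacked as the columns of a matrix K (an illustrative layout): equation (5) written at the L wavelengths becomes a linear system that can be solved for the dye amounts by least squares.

```python
import numpy as np

def estimate_dye_amounts(t_hat, K):
    """Solve -log t̂(x, λ) = Σ kn(λ)·dn for the dye amounts d1..dn.

    t_hat : estimated spectral transmittance, shape (L,), L >= n wavelengths
    K     : reference dye spectra as columns, shape (L, n)
    """
    y = -np.log(np.clip(np.asarray(t_hat, float), 1e-12, None))
    d, *_ = np.linalg.lstsq(np.asarray(K, float), y, rcond=None)   # least squares
    return d   # d[0] = d1, ..., d[n-1] = dn
```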
The above is a brief procedure of the dye amount estimation, and n = 4 in the example described above. That is, on the basis of the estimated spectrum estimated for each pixel of the VS image, the display image generator 456 estimates the dye amounts of dye H, dye E, the absorbing component of the red blood cell fixed at the corresponding specimen position, and the dye of the extraction target structure that is not displayed.
Next, on the basis of the estimated dye amounts, the display image generator 456 generates a display image in which the extraction target structure specified as not to be displayed is eliminated.
Processing procedure will be briefly described. First, the calculated dye amounts d1, d2, . . . , dn of the respective dyes are multiplied by correction coefficients α1, α2, . . . , αn, the obtained values are substituted into equation (3), and equation (6) described below is obtained. In this example, the correction coefficients αn (n = 1 to 3) applied to the dyes 1 to 3, which are assigned to dye H, dye E, and the absorbing component of the red blood cell, are set to “1”, and the correction coefficient αn (n = 4) applied to dye 4, which is assigned to the extraction target structure that is not displayed, is set to “0”; hence a spectral transmittance t*(x, λ) reflecting the dye amounts of the dyes 1 to 3, excluding dye 4 of the extraction target structure that is not displayed, is obtained. When a plurality of extraction target structures not to be displayed are specified, the correction coefficients αn applied to the dyes assigned to each of those extraction target structures are set to “0”. When a plurality of extraction target structures are specified, and extraction target structures to be hidden and extraction target structures to be highlighted are specified in a mixed state, only the correction coefficients αn of the extraction target structures not to be displayed are set to “0”.
t*(x, λ) = e^(−(α1·k1(λ)·d1 + α2·k2(λ)·d2 + . . . + αn·kn(λ)·dn)) (6)
With respect to a given point (pixel) x in a captured multiband image, the relationship of equation (7) below, based on the camera response system, is established between the pixel value g(x, b) in band b and the spectral transmittance t*(x, λ) of the corresponding point on the specimen.

g(x, b) = ∫ f(b, λ)·s(λ)·e(λ)·t*(x, λ) dλ + n(b) (7)

λ represents the wavelength, f(b, λ) represents the spectral transmittance of the b-th filter, s(λ) represents the spectral sensitivity characteristic of the camera, e(λ) represents the spectral radiation characteristic of the illumination, and n(b) represents the observation noise in band b. b is a serial number for identifying the band; here, b is an integer satisfying 1≦b≦6.
Therefore, by substituting equation (6) into equation (7) described above and calculating the pixel values, it is possible to obtain the pixel values of a display image in which the dye amount of dye 4 of the extraction target structure is not displayed (a display image representing the staining state of the dyes 1 to 3, excluding dye 4). In this case, the pixel values can be calculated assuming that the observation noise n(b) is zero.
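Putting equations (6) and (7) together, the per-pixel synthesis can be sketched as follows, with the integral replaced by a sum over sampled wavelengths and the observation noise set to zero; the array shapes and names are illustrative.

```python
import numpy as np

def synthesize_band_values(d, alpha, K, f, s, e):
    """Rebuild band values g(x, b) from corrected dye amounts (equations (6), (7)).

    d, alpha : dye amounts and correction coefficients, shape (n,)
    K        : reference dye spectra kn(λ) as columns, shape (L, n)
    f        : filter spectral transmittances f(b, λ), shape (B, L)
    s, e     : camera sensitivity s(λ) and illumination e(λ), shape (L,)
    """
    absorbance = np.asarray(K, float) @ (np.asarray(alpha, float) * np.asarray(d, float))
    t_star = np.exp(-absorbance)              # equation (6): t*(x, λ)
    return np.asarray(f, float) @ (np.asarray(s, float) * np.asarray(e, float) * t_star)
```

Setting the correction coefficient of the dye assigned to the hidden structure to "0" in alpha reproduces the non-display example described above.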
When the non-display process has been performed as described above, the VS image display processing unit 454 performs process for displaying the generated display image on the display unit 43 in the same way as in step b11 described above.
As described above, according to the fourth embodiment, it is possible to hide the area of the extraction target structure by specifying, as the extraction target structure, a structure that deteriorates the visibility of the structure to be observed/diagnosed and hinders the diagnosis. For example, when a neutrophil contained in the target specimen S hides the structure to be observed/diagnosed and deteriorates its visibility, the neutrophil can be hidden by specifying “neutrophil” as the extraction target structure and specifying “non-display” as its display method. Therefore, it is possible to present to a user an image in which a structure that hinders diagnosis is excluded and the visibility of the target specimen S is improved. The user can avoid overlooking an abnormal finding because the user can exclude a desired structure that is unnecessary for the observation/diagnosis and observe the target specimen S with good visibility. Therefore, the diagnostic accuracy can be improved.
Also in the fourth embodiment, in the same way as in the modified example described in the first embodiment, whether or not a pixel belongs to the extraction target structure may be determined only for pixels in a predetermined area of the VS image to shorten the processing time. For example, before extracting the extraction target structure, an RGB image to be displayed may be synthesized from the VS image and displayed, and an area selection by a user may be received; the determination may then be performed only for the pixels in the area selected by the user with the mouse included in the input unit 41. In this way, the extraction target structure can be extracted and hidden only in the area that the user determines to have poor visibility.
In the fourth embodiment described above, the correction coefficient αn applied to the extraction target structure not to be displayed is set to “0”. On the other hand, the dye amount dn of the dye assigned to the extraction target structure not to be displayed may be set to “0” to generate the display image.
In the fourth embodiment described above, a case is described in which a structure, such as a neutrophil, that hinders observation is not displayed. However, the display method is not limited to “non-display”; for example, it is also possible to change the color of the structure to a pale color or to reduce its color density to improve the visibility of the structure to be observed/diagnosed.
When changing the color, a spectral characteristic of a predetermined pseudo display color is defined in advance. Then, the RGB values are calculated by using the spectrum of the pseudo display color as the reference dye spectrum of the dye assigned to the specified extraction target structure. Specifically, the spectrum estimation is performed by replacing the reference dye spectrum k(λ) of the dye of the extraction target structure substituted into equation (6) described above with the spectrum of the pseudo display color, and the RGB values are calculated from the estimation result.
When reducing the color density, an arbitrary value smaller than or equal to “1” may be set as the correction coefficient αn applied to the specified extraction target structure. At this time, by applying the method described in the second embodiment, a residual difference value may be obtained for each pixel in the extraction target structure, and the value of the correction coefficient αn may be set in accordance with the residual difference value. Specifically, the smaller the residual difference value is, and hence the higher the possibility that the pixel belongs to the extraction target structure is, the closer to “0” the correction coefficient αn may be set so that the color density is reduced more. Conversely, the larger the residual difference value is, and hence the lower the possibility that the pixel belongs to the extraction target structure is, the closer to “1” the correction coefficient αn may be set.
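A minimal sketch of such a residual-dependent coefficient, assuming the residual difference values are normalized over an illustrative range [r_min, r_max]; the linear mapping is one possible choice, not fixed by this description.

```python
def alpha_from_residual(residual, r_min, r_max):
    """Map a pixel's residual difference value to a correction coefficient in [0, 1].

    Small residual (pixel likely belongs to the extraction target structure)
    yields alpha near 0 (strong fading); large residual yields alpha near 1.
    """
    if r_max <= r_min:
        return 1.0                                   # degenerate range: leave unchanged
    t = (residual - r_min) / (r_max - r_min)         # illustrative linear normalization
    return min(max(t, 0.0), 1.0)
```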
As the types of special staining, for example, Elastica van Gieson staining, HE-alcian blue staining, Masson trichrome staining, and the like are known, and the structures to be stained differ depending on the type of staining. Therefore, in a fifth embodiment, the structures actually stained by each special staining are defined in advance in association with that special staining, and the structures defined for a special staining specified in accordance with a user operation are automatically set as the extraction target structures. In the description below, two types of special staining, Elastica van Gieson staining and Masson trichrome staining, are described as examples.
A VS image display processing unit 454c in the processing unit 45c includes a special staining specification processing unit 461c as a staining type specifying unit, a structure extraction unit 455c, and a display image generator 456c. The special staining specification processing unit 461c specifies a type of special staining in accordance with a user operation, and automatically sets the structures defined in association with the specified special staining as the extraction target structures. Meanwhile, in the recording unit 47c, a VS image display processing program 473c for causing the processing unit 45c to function as the VS image display processing unit 454c and the like is recorded. In the fifth embodiment, the recording unit 47c, as a definition information recording unit, records special staining definition information 6c, which is an example of staining type definition information.
In the fifth embodiment, the VS image display processing unit 454c first receives a specification of the type of special staining in accordance with a user operation.
For example, the VS image display processing unit 454c performs process for displaying a special staining specifying screen on the display unit 43 and notifying the user of a specification request related to the extraction target structure and the display thereof, and receives, on the special staining specifying screen, a specification operation of the special staining, the display method of the extraction target structure according to the special staining, the standard staining, and the like.
For example, in the spin box SB91, a user specifies a special staining which stains a structure which the user desires to observe. As a result, as internal process, the process in step h3 is performed: the special staining specification processing unit 461c refers to the special staining definition information 6c, and automatically sets the extraction target structures and their display colors/check colors. For example, when the Elastica van Gieson staining is specified, the structures defined for the Elastica van Gieson staining and their display colors/check colors are automatically set.
For example, the user specifies the display method of each extraction target structure in the spin box SB915, and specifies the standard staining in the spin box SB93. Here, the display method of each extraction target structure is set manually; alternatively, for example, the display method may be automatically set with “highlight” as the initial value. When the five types of automatically set extraction target structures include an extraction target structure the user does not need to observe, “non-display” can be specified manually for it as necessary.
Next, the structure extraction unit 455c extracts the area of each extraction target structure set in step h3 from the VS image, and creates an extraction target map for each extraction target structure (step h9).
Next, the display image generator 456c performs the display image generation process (step h11). The VS image display processing unit 454c performs process for displaying the display image generated in step h11 on the display unit 43 (step h13).
Here, the display image generation process in step h11 will be described.
In the display image generation process, first, the display image generator 456c assigns the dye information of dye H, dye E, the absorbing component of the red blood cell, and each structure defined for the specified special staining to the dyes 1 to n (step i1).
Next, while sequentially targeting each pixel included in the VS image, the display image generator 456c performs processing of loop C on all the pixels included in the VS image (step i3 to step i13).
In the loop C, first, the display image generator 456c estimates, for the processing target pixel, the dye amount of each dye assigned in step i1 (step i5). Specifically, the display image generator 456c estimates the dye amounts by applying the above described equations (1) to (5) in the same manner as in the fourth embodiment. At this time, the display image generator 456c refers to the special staining information 61c related to the specified special staining, reads the spectrum information 68c of each structure, and uses the spectrum information 68c as the reference dye spectra kn(λ). For example, in the same way as in the example described above, when the dye information of dye H, dye E, the absorbing component of the red blood cell, and the five types of structures defined for the Elastica van Gieson staining is assigned to the dyes 1 to 8 in step i1, the display image generator 456c estimates the dye amounts of the dyes 1 to 8.
In the same manner as in the fourth embodiment, the dye amounts of the dyes 1 to 8 described above actually correspond to the component amounts of predetermined spectral components of the spectrum at each specimen position on the target specimen S. In other words, in the fifth embodiment, the spectrum at each specimen position on the target specimen S is constituted by the spectral components of dye H, dye E, dye R, and the structures automatically set as the extraction target structures; each spectral component is referred to as the reference dye spectrum kn(λ) of the corresponding one of the dyes 1 to 8, and its component amount is referred to as the dye amount. Alternatively, the spectrum at each specimen position on the target specimen S may be constituted only by the spectral components of the structures automatically set as the extraction target structures. In this case, for example, in the case of the Elastica van Gieson staining, the dye information of “elastic fiber” is assigned to dye 1, the dye information of “collagen fiber” is assigned to dye 2, the dye information of “muscle fiber” is assigned to dye 3, the dye information of “cell nucleus” is assigned to dye 4, and the dye information of “cytoplasm” is assigned to dye 5.
Next, when there is an extraction target structure not to be displayed, the display image generator 456c sets the correction coefficient αn for the dye assigned to that extraction target structure to “0” on the basis of the display method specified in step h5.
The display image generator 456c refers to the extraction target maps obtained for each extraction target structure in step h9, and determines whether or not the processing target pixel belongs to the area of each extraction target structure.
The display image generator 456c then calculates the RGB value of the processing target pixel (x) by applying the above described equations (6) and (7) in the same manner as in the fourth embodiment (step i11). At this time, the display image generator 456c refers to the special staining information 61c related to the specified special staining, reads the display color/check color 66c of each structure, and uses the display color/check color 66c as the reference dye spectrum kn(λ) to replace the display color in a pseudo manner. Thereafter, the display image generator 456c ends the processing of loop C for the processing target pixel. When the processing of loop C has been completed for all the pixels included in the VS image, the display image generation process in step h11 ends, and the process moves to step h13.
According to the fifth embodiment, it is possible to specify a type of special staining in accordance with a user operation and to automatically set the structures based on the specified special staining as the extraction target structures. It is also possible to estimate the dye amount of each set structure and display the structure with the display color set for it in advance; thus an image appearing as if the specified special staining had been performed on the structures can be presented to the user.
Also, in the fifth embodiment, by using the display color/check color 66c set in the special staining information 61c, a check image in which the area of the extraction target structure is displayed with the check color set in advance may be displayed, in the same manner as in the non-display process of the fourth embodiment.
In the fifth embodiment described above, the structures are defined in advance in accordance with the type of special staining. Alternatively, a combination of structures and their display colors/check colors may be registered in accordance with a user operation, and the extraction target structures may be specified in accordance with the registered combination. In this way, by registering a desired combination of structures and their display colors/check colors, the user can observe these structures with good visibility.
A VS image display processing unit 454d in the processing unit 45d includes a display change portion extraction unit 462d and a display image generator 456d. The display change portion extraction unit 462d specifies a display change position in the VS image in accordance with a user operation, and extracts a portion appearing at the specified display change position as a display change portion. In the recording unit 47d, a VS image display processing program 473d for causing the processing unit 45d to function as the VS image display processing unit 454d and the like are recorded.
In the sixth embodiment, an RGB image to be displayed is first synthesized from the VS image and displayed on the display unit 43 (step j3).
Next, the display change portion extraction unit 462d specifies a display change position in accordance with a user operation (step j5). For example, the display change portion extraction unit 462d receives a selection operation of a pixel position on the RGB image displayed in step j3, and specifies the selected pixel position as the display change position. While looking at the RGB image synthesized from the VS image, the user clicks, for example, a pixel position at which a structure desired to be highlighted appears, or clicks a pixel position at which a structure desired not to be displayed appears to specify the display change position.
Then, the display change portion extraction unit 462d reads the pixel values for each band (each wavelength λ) of the pixel specified as the display change position from the image data 58, and registers the read pixel values as display change portion spectrum information (step j7).
Thereafter, until the operation is fixed (step j9: No), the process returns to step j5. When the operation is fixed (step j9: Yes), the VS image display processing unit 454d specifies the display method of the display change portion appearing at the display change position in accordance with a user operation (step j11). At this time, the VS image display processing unit 454d specifies the display color or the check color along with the display method in accordance with a user operation.
Then, the display change portion extraction unit 462d extracts the area of the display change portion from the VS image by using the display change portion spectrum information registered in step j7 as a reference spectrum (teacher data), and creates an extraction target map in which a determination result indicating whether or not each pixel belongs to the display change portion is set (step j13). Specifically, while sequentially targeting each pixel included in the VS image, the display change portion extraction unit 462d sequentially determines whether or not the pixel is a pixel of the display change portion. As the processing procedure, for example, the method described in the third embodiment can be applied. Specifically, first, the display change portion extraction unit 462d compares the pixel values for each band (for each wavelength λ) of the processing target pixel with the display change portion spectrum information, obtains the difference between them for each wavelength λ, and calculates the sum of squares of the obtained differences. Then, the display change portion extraction unit 462d performs threshold processing on the calculated value by using a predetermined threshold value set in advance, and, for example, determines that the processing target pixel belongs to the display change portion when the calculated value is smaller than the threshold value.
When the extraction target map is created in the manner described above, next, the display image generator 456d generates a display image in which the display change portion in the target specimen S is represented by the specified display method on the basis of the extraction target map (step j15). When the specified display method is the “highlight”, the display image generator 456d applies the method described in the first embodiment and generates the display image by replacing the pixel values of each pixel determined to be the display change portion with the specified display color. When the specified display method is the “non-display”, the display image generator 456d applies the method described in the fourth embodiment to estimate the dye amounts of each dye at each specimen position on the target specimen S, and generates a display image of the VS image in which the display change portion is not displayed on the basis of the estimated dye amounts of the pixels.
As described above, according to the sixth embodiment, a display change position in the VS image can be specified in accordance with a user operation. On the basis of the pixel values at the specified display change position, pixels having a spectrum similar to that at the display change position can be extracted from the VS image as pixels of the display change portion appearing at that position. As a result, a structure appearing at the display change position can be extracted as the display change portion. Therefore, even for a structure whose characteristic information is not defined in the structure characteristic information 475 in advance, it is possible to present to a user an image in which the area of the structure is represented by the specified display method. By specifying, as the display change position, a pixel position at which a structure desired to be highlighted appears, for example, and specifying “highlight” as the display method thereof, a user can easily distinguish the area of the structure (display change portion) from other areas. Alternatively, by specifying, as the display change position, a pixel position at which a structure desired not to be displayed appears, and specifying “non-display” as the display method thereof, a user can observe the target specimen S with good visibility while eliminating the structure (display change portion) unnecessary for the observation/diagnosis.
In the embodiments described above, the type of staining is specified in accordance with a user operation. Alternatively, by applying the method described in the fourth embodiment and using the technique of Japanese Laid-open Patent Publication No. 2008-51654, the dye amounts of the dyes which stain the target specimen S may be estimated, and the type of the standard staining performed on the target specimen S may be automatically determined on the basis of the estimated dye amounts. Specifically, for example, one or a plurality of pixels is selected in accordance with a user operation. Then, whether or not dye H and dye E are included in the dyes which stain the target specimen S is determined on the basis of the estimated dye amounts of dye H and dye E at the specimen positions on the target specimen S corresponding to the selected pixel positions. When dye H and dye E are included, it is automatically determined that the standard staining performed on the target specimen S is the HE staining. Other standard stainings such as the Pap staining can be determined by the same method.
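A minimal sketch of this automatic determination, assuming the dye amounts at the selected pixel positions have already been estimated as described above; the presence threshold and the two-dye decision rule are illustrative assumptions.

```python
def determine_standard_staining(d_H, d_E, presence_threshold=0.0):
    """Judge the standard staining from estimated dye amounts at selected pixels.

    d_H, d_E : estimated amounts of dye H and dye E (e.g., averaged over the
               user-selected pixel positions)
    """
    if d_H > presence_threshold and d_E > presence_threshold:
        return "HE staining"                  # both hematoxylin and eosin present
    return "other standard staining"          # e.g., Pap staining, judged likewise
```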
The present invention is not limited to the embodiments described above, but various inventions can be formed by properly combining a plurality of constituent elements disclosed in the above embodiments. For example, the invention may be formed by removing some of the constituent elements from all the constituent elements shown in the above embodiments. Or, the invention may be formed by properly combining constituent elements shown in different embodiments.
According to the microscope system, the specimen observation method, and the computer program product of the present invention, it is possible to specify a structure in a specimen as an extraction target structure, specify a display method of the extraction target structure, and generate a display image in which the specified extraction target structure in the specimen is represented by the specified display method. Therefore, it is possible to present an image showing a desired structure in the specimen with good visibility to a user, so that diagnostic accuracy can be improved.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.