The present invention relates to an image processing apparatus, an image processing system, a method for processing an image, and a program which process virtual slide images.
Attention is being given to virtual slide systems in which virtual slide images can be obtained by capturing images of a specimen on a slide (preparation) with a digital microscope, and in which these virtual slide images can be displayed on a monitor for observation (see PTL 1).
In addition, it is known that Z-stack image data (depth images) including multiple layer images is used as virtual slide images (see PTL 2).
When a structure in an observation image displayed on a monitor is out of focus, a user can bring the structure into focus by changing the Z position (depth) of the virtual slide images.
However, there is a problem in that a user cannot intuitively find in which direction the depth for the virtual slide images is to be changed.
PTL 1: Japanese Patent Laid-Open No. 2011-118107
PTL 2: Japanese Patent Laid-Open No. 2011-204243
The present invention provides an image processing apparatus which processes virtual slide images in such a manner that a user can intuitively and easily find in which direction the depth for the virtual slide images is to be changed.
An image processing apparatus according to an aspect of the present invention includes an image data acquisition unit, a display control unit, an area-information acquisition unit, and a detection unit. The image data acquisition unit acquires Z-stack image data including multiple layer images obtained by using a microscope apparatus. The display control unit displays at least one of the multiple layer images on a display apparatus as an observation image. The area-information acquisition unit acquires information about a target area in the observation image specified by a user. The detection unit detects in-focus information for a corresponding area in each of the multiple layer images, and the corresponding area corresponds to the target area. The display control unit displays an image indicating a positional relationship between the target area and the corresponding area which is closer to an in-focus state than the target area, along with the target area on the display apparatus on the basis of the detection result from the detection unit.
The image processing apparatus according to the aspect of the present invention can process virtual slide images in such a manner that a user can intuitively and easily find in which direction the depth for the virtual slide images is to be changed.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments of the present invention will be described below with reference to the drawings.
The image pickup apparatus 101 is a microscope apparatus (virtual slide apparatus) that has a function of capturing multiple two-dimensional images at different focal positions in the optical-axis direction and outputting digital images. A solid-state image sensing element, such as a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS), is used to obtain a two-dimensional image. Instead of a virtual slide apparatus, the image pickup apparatus 101 may include a digital microscope apparatus in which a digital camera is attached to an eyepiece portion of a typical optical microscope.
The image processing apparatus 102 generates multiple observation images, each of which has a desired focal position and a desired depth of field, from multiple layer images obtained from the image pickup apparatus 101, and displays them on the display apparatus 103 so as to aid in microscope observation performed by a user. The image processing apparatus 102 has, as main functions, an image data acquisition function of acquiring Z-stack image data, an image generation function of generating observation images from the Z-stack image data, and a display control function of displaying the observation images on the display apparatus 103. The image processing apparatus 102 according to the first embodiment also has an area-information acquisition function of acquiring information about target areas specified by a user, a detection function of detecting in-focus information for the target areas, a priority-assigning function of assigning priorities to image data, and a storage function of storing the image data in a storage device.
The image processing apparatus 102 is constituted by a general-purpose computer or workstation which includes hardware resources, such as a central processing unit (CPU), a random-access memory (RAM), a storage device, an operation unit, and an I/F. The storage device is a mass information storage device such as a hard disk drive, and stores, for example, programs, data, and an operating system (OS) for achieving processes described below. The above-described functions are achieved with the CPU loading necessary programs and data from the storage device onto the RAM and executing the programs. The operation unit is constituted by, for example, a keyboard and a mouse, and is used by an operator to input various instructions.
The display apparatus 103 is a monitor that displays the multiple two-dimensional images which are the results of computation performed by the image processing apparatus 102, and is constituted by, for example, a cathode-ray tube (CRT) or a liquid crystal display.
In the example in
The lighting unit 201 is a unit which uniformly irradiates a slide 206, which is located on the stage 202, with light, and includes a light source, an illumination optical system, and a control system for driving the light source. The stage 202 is driven and controlled by the stage control unit 205, and can be moved along the three X, Y, and Z axes. It is assumed that the optical-axis direction is the Z direction. The slide 206 is a member in which a tissue slice or smeared cells serving as the observation object are placed on slide glass and held under cover glass together with a mounting agent.
The stage control unit 205 includes a drive control system 203 and a stage driving mechanism 204. The drive control system 203 receives an instruction from the main control system 218, and controls driving of the stage 202. The moving direction, the moving amount, and the like of the stage 202 are determined on the basis of the position information and the thickness information (distance information) of a specimen which are measured by the pre-measurement unit 217, and on the basis of an instruction from a user. The stage driving mechanism 204 drives the stage 202 in accordance with an instruction from the drive control system 203.
The imaging optical system 207 is a lens unit for forming an optical image of a specimen on the slide 206 onto an imaging sensor 208.
The image pickup unit 210 includes the imaging sensor 208 and an analog front end (AFE) 209. The imaging sensor 208 is a one-dimensional or two-dimensional image sensor which converts a two-dimensional optical image into an electrical physical quantity through photoelectric conversion; for example, a CCD or a CMOS sensor is used. When a one-dimensional sensor is used, a two-dimensional image is obtained by scanning the sensor in its scanning direction. The imaging sensor 208 outputs an electric signal having a voltage value corresponding to the light intensity. In the case where a color image is desired as a captured image, for example, a single-chip image sensor to which a color filter using a Bayer array is attached may be used.
The AFE 209 is a circuit that converts an analog signal which is output from the imaging sensor 208 into a digital signal. The AFE 209 includes a horizontal/vertical (H/V) driver, a correlated double sampling circuit (CDS), an amplifier, an analog-to-digital (AD) converter, and a timing generator, which are described below. The H/V driver converts a vertical synchronizing signal and a horizontal synchronizing signal for driving the imaging sensor 208 into a potential which is necessary to drive the sensor. The CDS is a correlated double sampling circuit which removes fixed-pattern noise. The amplifier is an analog amplifier which adjusts a gain of an analog signal which has been subjected to noise reduction in the CDS. The AD converter converts an analog signal into a digital signal. In the case where an output from the final stage of the system is 8-bit, the AD converter converts an analog signal into digital data obtained through approximately 10-bit to 16-bit quantization, with consideration of downstream processes, and outputs the digital data. The converted sensor output data is called RAW data. The RAW data is subjected to a development process in the development processing unit 216 which is located downstream. The timing generator generates a signal for adjusting timing for the imaging sensor 208 and timing for the development processing unit 216 which is located downstream.
In the case where a CCD is used as the imaging sensor 208, the above-described AFE 209 is necessary. In contrast, in the case where a CMOS image sensor capable of direct digital output is used, the sensor itself includes the function of the above-described AFE 209. In addition, an image pickup controller (not illustrated) which controls the imaging sensor 208 is present, and controls not only the operations of the imaging sensor 208 but also operation parameters, such as the shutter speed, the frame rate, and the region of interest (ROI).
The development processing unit 216 includes a black correction unit 211, a white balance adjustment unit 212, a demosaicing unit 213, a filtering unit 214, and a gamma correction unit 215. The black correction unit 211 subtracts data for black correction, obtained with light being shielded, from each of the pixels of the RAW data. The white balance adjustment unit 212 adjusts the gain of each of the RGB colors in accordance with the color temperature of light from the lighting unit 201 so as to reproduce desired white. Specifically, data for white balance correction is applied to the RAW data after the black correction. In the case where a monochrome image is handled, the white balance adjustment process is not necessary.
The demosaicing unit 213 generates image data for each of the RGB colors from the RAW data according to the Bayer array. The demosaicing unit 213 calculates RGB-color values of a target pixel through interpolation using values of the surrounding pixels (including pixels of the same color and pixels of other colors) in the RAW data. In addition, the demosaicing unit 213 performs a correction process (complement process) on a defective pixel. In the case where the imaging sensor 208 has no color filters and where a monochrome image is obtained, the demosaicing process is not necessary.
The filtering unit 214 is a digital filter which achieves suppression of high-frequency components included in an image, noise reduction, and emphasis of high resolution. The gamma correction unit 215 applies the inverse of the gradation characteristics of a typical display device to an image, and performs gradation conversion suited to human visual characteristics through gradation compression in high-luminance portions or dark-area processing. According to the first embodiment, to obtain an image for morphological observation, an image is subjected to gradation conversion suitable for the downstream synthesizing and display processes.
A typical development process includes color space conversion for converting an RGB signal into a luminance/chrominance signal, such as YCC, and compression of large-volume image data. According to the first embodiment, RGB data is directly used, and data compression is not performed.
The lens unit included in the imaging optical system 207 reduces the light quantity in the peripheral portion of the image pickup area. To correct this, the development processing unit 216 may include a peripheral-light-falloff correction function. Similarly, the development processing unit 216 may include correction functions for various aberrations which occur in the imaging optical system 207, such as distortion correction for correcting positional shifts of the formed image, and lateral chromatic aberration correction for correcting differences in image size among the colors.
The pre-measurement unit 217 performs pre-measurement for calculating position information of the specimen on the slide 206, distance information to the desired focal position, and parameters for light-quantity adjustment caused by the thickness of the specimen. The pre-measurement unit 217 obtains information before the main measurement, enabling images to be efficiently captured. In addition, the start position, the end position, and intervals at which multiple images are captured are specified on the basis of information generated by the pre-measurement unit 217.
The main control system 218 controls various units described above. The functions of the main control system 218 and the development processing unit 216 are achieved by a control circuit having a CPU, a ROM, and a RAM. That is, the ROM stores programs and data, and the CPU uses the RAM as a work memory so as to execute the programs, achieving the functions of the main control system 218 and the development processing unit 216. A device, such as an electrically erasable programmable ROM (EEPROM) or a flash memory, is used as the ROM, and a dynamic random access memory (DRAM) device using, for example, double data rate 3 (DDR3) is used as the RAM.
The external interface 219 is an interface for transmitting an RGB color image generated by the development processing unit 216 to the image processing apparatus 102. The image pickup apparatus 101 and the image processing apparatus 102 are connected to each other through an optical communications cable. Alternatively, a general-purpose interface, such as Universal Serial Bus (USB) or Gigabit Ethernet (registered trademark), is used.
The process flow for capturing an image in the main measurement will be briefly described. The stage control unit 205 determines a position at which an image of the specimen on the stage 202 is to be captured, on the basis of the information obtained through the pre-measurement. Light emitted from the lighting unit 201 penetrates the specimen, and an image is formed through the imaging optical system 207 onto the image pickup surface of the imaging sensor 208. The AFE 209 converts the output signal from the imaging sensor 208 into a digital image (RAW data), which is then converted into an RGB two-dimensional image by the development processing unit 216. The two-dimensional image thus obtained is transmitted to the image processing apparatus 102.
The above-described configuration and processes enable a two-dimensional image of a specimen to be captured at a certain focal position. While the stage control unit 205 shifts the focal position in the optical-axis direction (Z direction), the above-described image pickup process is repeated, whereby multiple two-dimensional images are captured at different focal positions. Herein, each of the two-dimensional images obtained through the image pickup process in the main measurement is called a layer image, and the multiple two-dimensional images (layer images) are collectively called Z-stack image data.
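As a rough illustration of this capture loop, the following Python sketch assumes hypothetical stage, camera, and develop objects standing in for the stage control unit 205, the image pickup unit 210, and the development processing unit 216; none of these names is a real device API, and the sketch is not part of the embodiment itself.

    def capture_z_stack(stage, camera, develop, z_start, z_end, z_step):
        # Hypothetical sketch: `stage`, `camera`, and `develop` are placeholders
        # for the stage control unit 205, the image pickup unit 210, and the
        # development processing unit 216; their methods are illustrative only.
        layer_images = []
        z = z_start
        while z <= z_end:
            stage.move_to_z(z)                 # shift the focal position along the optical axis
            raw = camera.capture()             # RAW data from the imaging sensor 208 via the AFE 209
            layer_images.append(develop(raw))  # development into an RGB two-dimensional image
            z += z_step
        return layer_images                    # collectively, the Z-stack image data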
In the first embodiment, an example is described in which a color image is obtained using a single-chip image sensor. A three-chip method may instead be employed in which three image sensors corresponding to the respective RGB colors are used to obtain a color image. Alternatively, a three-shot method may be employed in which one image sensor and a three-color light source are used, and an image is captured three times while the color of the light source is switched.
The controller 1701 accesses, for example, the main memory 1702 and the sub-memory 1703 when necessary, and controls all of the blocks in the PC overall while performing various computation processes. The main memory 1702 and the sub-memory 1703 are constituted by RAMs. The main memory 1702 is used as, for example, a work area for the controller 1701, and temporarily stores the OS, various programs that are being executed, and various types of data to be processed for, for example, generation of display data. In addition, the main memory 1702 and the sub-memory 1703 are also used as a storage area for image data. The direct memory access (DMA) function of the controller 1701 achieves fast transfer of image data between the main memory 1702 and the sub-memory 1703, and between the sub-memory 1703 and the graphics board 1704. The graphics board 1704 outputs an image processing result to the display apparatus 103. The display apparatus 103 is a display device using, for example, liquid crystal or electro-luminescence (EL). In this configuration, the display apparatus 103 is assumed to be connected as an external apparatus; alternatively, a display apparatus integrated with the PC, as in a notebook PC, may be used.
To the input/output I/F 1713, a data server 1714 is connected via the LAN I/F 1706; a storage device 1708, via the storage device I/F 1707; the image pickup apparatus 101, via the external apparatus I/F 1709; and a keyboard 1711 and a mouse 1712, via the operation I/F 1710.
The storage device 1708 is an auxiliary storage device which records and reads out the OS executed by the controller 1701, and information permanently stored as firmware, such as programs and various parameters. The storage device 1708 is also used as a storage area for layer image data transmitted from the image pickup apparatus 101. A magnetic disk drive such as a hard disk drive (HDD), or a semiconductor storage device using flash memory such as a solid-state drive (SSD), is used as the storage device 1708.
It is assumed that input devices, such as the keyboard 1711 and a pointing device such as the mouse 1712, are connected to the operation I/F 1710. A configuration may be employed in which the screen of the display apparatus 103 serves as a direct input device, e.g., a touch panel. In this case, the touch panel may be integrated with the display apparatus 103.
The user-input-information acquisition unit 1801 obtains, through the operation I/F 1710, instructions that are input by a user through the keyboard 1711 or the mouse 1712, such as a start and an end of image display, and scrolling, zooming-in, and zooming-out of a displayed image.
The image acquisition controller 1802 controls, on the basis of the user input information, an image data area that is read out from the storage device 1708 and developed onto the main memory 1702. The image acquisition controller 1802 determines an image area which is expected to be required as a display image, with respect to the various types of user input information, such as a start and an end of image display, and scrolling, zooming-in, and zooming-out of a displayed image. When the main memory 1702 does not store the image area, the image acquisition controller 1802 instructs the layer-image acquisition unit 1803 to read out the layer images in the image area from the storage device 1708 and to develop them onto the main memory 1702. Since readout from the storage device 1708 takes time, it is desirable to make the image area to be read out as broad as possible so as to reduce the overhead of the readout process.
The layer-image acquisition unit 1803 reads out the layer images in the image area from the storage device 1708 and stores them in the main memory 1702 in accordance with the control from the image acquisition controller 1802.
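A minimal sketch of this readout control follows, assuming hypothetical main_memory and storage objects; their contains, expanded_by, read_layer_images, store, and get methods do not appear in the embodiment and stand only for the behavior described above.

    def ensure_area_in_main_memory(area, main_memory, storage, margin=2.0):
        # Hypothetical sketch: if the requested image area is not yet in main
        # memory, a broader region (scaled by `margin`) is read from the storage
        # device in a single pass to reduce the readout overhead.
        if not main_memory.contains(area):
            broad_area = area.expanded_by(margin)
            main_memory.store(broad_area, storage.read_layer_images(broad_area))
        return main_memory.get(area)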
The display generation controller 1804 controls, on the basis of the user input information, an image area to be read out from the main memory 1702, a method for processing the image area, and a display-image area to be transferred to the graphics board 1704. The display generation controller 1804 detects a display candidate image area that is expected to be required as a display image, and a display-image area and a target area that are actually displayed on the display apparatus 103, on the basis of the various types of user input information, such as a start and an end of image display, and scrolling, zooming-in, and zooming-out of a displayed image. When the sub-memory 1703 does not store the candidate image area, the display generation controller 1804 instructs the display-candidate-image acquisition unit 1805 to read out the display candidate image area from the main memory 1702. At the same time, the display generation controller 1804 transmits an instruction about how to process a scroll request to the display-candidate-image generation unit 1806. In addition, the display generation controller 1804 instructs the display-image transfer unit 1807 to read out the display-image area from the sub-memory 1703. Readout of image data from the main memory 1702 is faster than readout from the storage device 1708. Accordingly, the above-described display candidate image area is narrower than the broad image area obtained by the image acquisition controller 1802.
The display-candidate-image acquisition unit 1805 reads out the image areas of the layer images, which are display candidates, from the main memory 1702, and transfers them to the display-candidate-image generation unit 1806, in accordance with the control instruction from the display generation controller 1804.
The display-candidate-image generation unit 1806 decompresses the display candidate image data (layer image data), which is compressed image data, detects in-focus information for the target areas to be displayed on the display apparatus 103, assigns priorities to them, and develops the obtained information onto the sub-memory 1703.
The display-image transfer unit 1807 reads out the display images from the sub-memory 1703, and transfers them to the graphics board 1704, in accordance with the control instruction from the display generation controller 1804. The display-image transfer unit 1807 performs fast image data transfer between the sub-memory 1703 and the graphics board 1704 by using the DMA function.
An in-focus information detection unit 1902 detects an image contrast, which is the in-focus information, for each of the target areas in the layer images to be displayed on the display apparatus 103. The process flow of detection of the in-focus information, and the in-focus information will be described with reference to
A priority-assigning unit 1903 assigns priorities to the target areas on the basis of the in-focus information detected by the in-focus information detection unit 1902, and stores the in-focus information, the priority information, and the layer images onto the main memory 1702. The priorities of the target areas and the process flow of assigning priorities will be described with reference to
The operations of the image processing apparatus 102 according to the first embodiment will be described with reference to
In step S402, target areas are extracted in multiple layer images having different focal positions. An example of the extraction of target areas will be described with reference to
In step S403, in-focus information is detected. An image contrast which is in-focus information is detected for each of the target areas extracted in step S402. An in-focus area (in-focus image among the target areas) is specified through the detection of in-focus information. The process flow of the detection of in-focus information, and the in-focus information will be described with reference to
In step S404, the multiple target areas obtained at different focal positions are stored. The multiple target areas extracted in step S402 are used for a user to perform detailed observation in the depth direction (Z direction). Therefore, the target areas are highly likely to be displayed at once. Accordingly, to perform smooth rendering on the display apparatus 103, the multiple target areas are temporarily stored in a display memory.
In step S405, auxiliary areas are displayed. An image selected by a user as a target area (target area in the observation image; observation area), and the target areas (auxiliary areas) in the layer images whose focal positions are before and after that of the observation area are displayed. In addition, information based on the results of the detection of in-focus information performed in step S403 is also displayed. An exemplary auxiliary-area presentation screen including auxiliary areas will be described with reference to
Through the above-described processing steps, the target area and its auxiliary areas are displayed. The display of auxiliary areas enables a user to easily perform detailed observation in the depth direction (Z direction) and selection of the in-focus area.
In step S501, any target area is selected from the multiple target areas extracted in step S402.
In step S502, the target area selected in step S501 is obtained.
In step S503, an image contrast of the target area obtained in step S502 is detected. An image contrast can be calculated using the following expression, where E represents an image contrast and L(m, n) represents a luminance component of a pixel. Here, m represents a pixel position in the Y direction, and n represents a pixel position in the X direction.
E = Σ(L(m, n+1) − L(m, n))² + Σ(L(m+1, n) − L(m, n))²   [Math. 1]
The first term on the right-hand side represents luminance differences between pixels adjacent to each other in the X direction, and the second term represents luminance differences between pixels adjacent to each other in the Y direction. The image contrast E is thus an index given by the sum of squared luminance differences between pixels adjacent to each other in the X direction and in the Y direction.
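As an illustrative sketch of this computation, the following Python/NumPy function evaluates Math. 1 for a two-dimensional luminance array; the function name and the use of NumPy are assumptions of the sketch, not part of the embodiment.

    import numpy as np

    def image_contrast(luminance):
        # Sum of squared luminance differences between pixels adjacent in the
        # X direction and in the Y direction (Math. 1).
        L = np.asarray(luminance, dtype=np.float64)
        dx = np.diff(L, axis=1)  # L(m, n+1) - L(m, n): adjacent pixels in the X direction
        dy = np.diff(L, axis=0)  # L(m+1, n) - L(m, n): adjacent pixels in the Y direction
        return float(np.sum(dx ** 2) + np.sum(dy ** 2))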
In step S504, it is determined whether or not in-focus information (image contrast) has been detected for all of the target areas. If a target area whose image contrast has not been detected is present, the process proceeds to step S505, and a target area that has not been processed is selected as a target area to be processed next. If it is determined, in step S504, that the detection of an image contrast is completed for all of the target areas, the process ends.
In step S505, a target area that has not been processed is selected as a target area to be processed next, and the process proceeds to step S502.
In the above description, an example is described in which a squared-sum of luminance differences is used as an image contrast. However, a method for obtaining an image contrast is not limited to this. In another exemplary method for obtaining an image contrast, a discrete cosine transform is performed to obtain frequency components, and a total sum of high-frequency components among the frequency components is obtained. Alternatively, edge detection is performed using an edge detection filter, and the obtained edge components may be used as the degree of contrast. Instead, the maximum and the minimum of luminance values are detected, and the difference between the maximum and the minimum may be used as the degree of contrast. Other than these, various existing methods may be applied to the contrast detection.
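The following sketches illustrate such alternative measures; the frequency cutoff and the use of NumPy and SciPy are assumptions of the sketch and are not prescribed by the embodiment.

    import numpy as np
    from scipy.fft import dctn

    def contrast_from_max_min(luminance):
        # Difference between the maximum and the minimum luminance value.
        L = np.asarray(luminance, dtype=np.float64)
        return float(L.max() - L.min())

    def contrast_from_edges(luminance):
        # Sum of gradient magnitudes obtained with a simple edge detection filter.
        L = np.asarray(luminance, dtype=np.float64)
        gy, gx = np.gradient(L)
        return float(np.sum(np.hypot(gx, gy)))

    def contrast_from_high_frequencies(luminance, cutoff=8):
        # Total sum of high-frequency components of a discrete cosine transform;
        # the cutoff separating "high" frequencies is an arbitrary assumption.
        coeffs = dctn(np.asarray(luminance, dtype=np.float64), norm="ortho")
        u, v = np.indices(coeffs.shape)
        return float(np.sum(np.abs(coeffs[(u + v) > cutoff])))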
As described above, an image contrast can be detected for all of the target areas.
The above-described multiple target areas and their in-focus information are used to specify the in-focus area and to display auxiliary areas.
By using the target-area specification screen and the auxiliary-area presentation screen described above, a user can easily perform detailed observation in the depth direction (Z direction) and selection of the in-focus area.
Here, two auxiliary areas obtained from two layer images whose focal positions are before and after that of the observation area are displayed at the same time. Alternatively, only one of the areas, e.g., the in-focus area, may be displayed together with the observation area.
A target area (observation area) 801 is an area in the observation image selected by the user. Auxiliary areas 802 and 803 are target areas (corresponding areas) in layer images whose focal positions are different from that of the observation area 801. The auxiliary areas 802 and 803 are located in such a manner that the focal positions of the auxiliary areas 802 and 803 are before and after that of the observation area 801. In the depth direction (Z direction) in
Each of
In step S901, any target area is selected from the multiple target areas extracted in step S402.
In step S902, the target area selected in step S901 is obtained.
In step S903, the structure in the target area obtained in step S902 is inferred. The structure is, for example, a cell nucleus. In the case where the subject (sample) is a hematoxylin-eosin (HE) stained sample, a cell nucleus is stained dark bluish purple by hematoxylin. On the basis of this color information, or of the fact that its shape is approximately circular, the structure of a cell nucleus is inferred. Machine learning, such as a support vector machine (SVM), may also be used to infer the structure efficiently.
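A deliberately rough sketch of such color-based inference for an RGB target area follows; the threshold values are illustrative assumptions, and a trained classifier such as an SVM could replace this heuristic.

    import numpy as np

    def infer_nucleus_mask(rgb_area):
        # Rough heuristic for hematoxylin-stained nuclei in an HE stained image:
        # dark pixels whose blue component dominates red and green.
        # The threshold values are illustrative assumptions only.
        rgb = np.asarray(rgb_area, dtype=np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        dark = rgb.mean(axis=-1) < 160  # nuclei are darker than the surrounding tissue
        bluish = (b > r) & (b > g)      # dark bluish purple of hematoxylin
        return dark & bluish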
In step S904, it is determined whether or not the structure is inferred for all of the target areas. If a target area which has not been subjected to the structure inference is present, the process proceeds to step S905, and a target area which has not been processed is selected as a target area to be processed next. If it is determined, in step S904, that the structure inference is completed for all of the target areas, the process proceeds to step S906.
In step S905, a target area which has not been processed is selected as a target area to be processed next, and the process proceeds to step S902.
In step S906, structures to be detected are set. As illustrated in
In step S907, structure contrasts in the target area obtained in step S902 are detected. Image contrasts described in
As described above, structure contrasts can be detected for all of the target areas. By using structure contrasts, the contrast of a structure which is the focus of interest in a target area can be detected, and the in-focus state of the observation object can be grasped more accurately.
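One way to realize such a structure contrast is sketched below, under the assumption that a boolean mask of the inferred structure (e.g., from the hypothetical infer_nucleus_mask above) is available.

    import numpy as np

    def structure_contrast(luminance, structure_mask):
        # The contrast of Math. 1 evaluated only over pixel pairs that both lie
        # inside the inferred structure, so that the in-focus state of, e.g.,
        # cell nuclei dominates the measure.
        L = np.asarray(luminance, dtype=np.float64)
        m = np.asarray(structure_mask, dtype=bool)
        dx2 = (np.diff(L, axis=1) ** 2) * (m[:, 1:] & m[:, :-1])
        dy2 = (np.diff(L, axis=0) ** 2) * (m[1:, :] & m[:-1, :])
        return float(dx2.sum() + dy2.sum())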
In observation of an HE stained sample, an observer observes the entire area of an image. Accordingly, it is desirable that the entire image be in focus in terms of ease of observation, and it is therefore desirable to use an image contrast as in-focus information. On the other hand, for an immunohistochemistry (IHC) stained sample, target areas are often limited and the object of observation is clear, such as counting of cancerous nuclei. In counting of cancerous nuclei, the in-focus state of the nuclei is important. Accordingly, it is desirable that structures in a target area be in focus in terms of ease of observation, and a structure contrast is therefore desirably used as in-focus information.
As described above, providing multiple types of contrast for an image enables a suitable method for detecting in-focus information to be selected easily.
There are multiple target areas having the priority “1”. Here, the target areas are stored in the following procedure. A target area TA1 which is located at the middle between the area of the priority “3” and the area of the priority “2” is determined, and is stored on the display memory. Then, a target area TA2 which is located at the middle between the area of the priority “3” and the target area TA1 is determined, and is stored on the display memory. Then, a target area TA3 which is located at the middle between the area of the priority “2” and the target area TA1 is determined, and is stored on the display memory. After that, a similar procedure is repeated, and the order of storage of the target areas having the priority “1” onto the display memory is determined. In the case where a target area is not present at the middle, that is, in the case where two target areas are present at the middle, a rule may be defined, such as a rule that a target area located at a deeper position is to be selected. This is a method for determining the priority by repeating a process of equal division. There are multiple target areas having the priority “0”. For such target areas, a method is employed in which a target area closer to the area of the priority “3” and a target area closer to the area of the priority “2” are alternately stored on the display memory.
As described above, a priority is assigned to each target area from the viewpoint of how likely the user is to observe it, and the target areas are stored on the display memory in accordance with their priorities, enabling the target areas to be displayed quickly with ease of operation.
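A small sketch of this equal-division ordering follows, taking as inputs the layer indices of the two highest-priority areas; which end corresponds to the priority "3" area, and the choice of the larger index as the "deeper" layer when two middle layers exist, are assumptions of the sketch.

    def storage_order_by_bisection(index_a, index_b):
        # Order in which equally prioritized target areas lying strictly between
        # two high-priority layers are stored on the display memory: repeatedly
        # take the middle of each remaining interval, preferring the larger index
        # (assumed here to be the deeper layer) when two middle layers exist.
        order = []
        intervals = [tuple(sorted((index_a, index_b)))]
        while intervals:
            lo, hi = intervals.pop(0)
            if hi - lo <= 1:
                continue
            mid = (lo + hi + 1) // 2
            order.append(mid)
            intervals.append((lo, mid))
            intervals.append((mid, hi))
        return order

For example, with one high-priority area at layer index 0 and the other at layer index 8, the resulting storage order of the intermediate layers is [4, 2, 6, 1, 3, 5, 7].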
In step S1201, a priority is assigned to the observation area. Referring to
In step S1202, a priority is assigned to the in-focus area. Referring to
In step S1203, a priority is assigned to target areas that are located between the observation area and the in-focus area. Referring to
In step S1204, a priority is assigned to target areas that are not located between the observation area and the in-focus area. Referring to
By assigning priorities to target areas according to the above-described flow, target areas which are highly likely to be observed by a user are stored on the display memory in descending order of priority, enabling target areas to be quickly displayed with ease of operation.
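The flow above can be summarized as the following sketch; the concrete priority values (3 for the observation area, 2 for the in-focus area, 1 in between, 0 elsewhere) follow the earlier example, and assigning the highest value to the observation area rather than to the in-focus area is an assumption of the sketch.

    def assign_priorities(num_layers, observation_index, in_focus_index):
        # Per-layer priority following steps S1201 to S1204. The numeric values
        # are taken from the earlier bisection example; which of the observation
        # area and the in-focus area receives the highest value is assumed here.
        priorities = [0] * num_layers                 # S1204: all remaining target areas
        lo, hi = sorted((observation_index, in_focus_index))
        for z in range(lo + 1, hi):
            priorities[z] = 1                         # S1203: areas between the two
        priorities[in_focus_index] = 2                # S1202: the in-focus area
        priorities[observation_index] = 3             # S1201: the observation area
        return priorities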
According to the first embodiment described above, the depth position of a subject (sample) in the in-focus state can be easily grasped in observation of the subject (sample) using digital images. This enables detailed observation in the depth direction for the subject (sample) to be easily performed.
Further, even when the memory capacity is limited, display responsivity and display operability are improved. This enables detailed observation in the depth direction for a subject (sample) to be performed without any stress.
The above-described embodiment is described under the assumption that the embodiment is used mainly in histological diagnosis, in which the structure of a tissue is observed in a section. In histological diagnosis, the thickness of a sample is as thin as several micrometers, and Z-stack image data is used to deal with blurring of an image due to the unevenness of the sample surface or optical aberration. Therefore, an observer is basically interested in the in-focus area, and uses the Z-stack image data around the in-focus area in an auxiliary manner. In contrast, in cytological diagnosis, the thickness of a sample is as thick as several tens to several hundreds of micrometers, and the three-dimensional structure of a cell or a cell clump is observed. In cytological diagnosis, the Z-stack image data is used for grasping the three-dimensional structure. Therefore, in cytological diagnosis, a display method in which a three-dimensional structure is easily grasped is important. A display method in which a three-dimensional structure can be easily grasped in cytological diagnosis will be described below.
A display Z-stack number determination unit 2001 determines the number of Z stacks to be displayed on the basis of the in-focus information detected by the in-focus information detection unit 1902. The number of Z stacks to be displayed is the number of areas constituted by a target area (observation area) in the observation image and auxiliary areas. The method for determining the number of Z stacks will be described with reference to
Five screens, constituted by one screen for the target area (observation area) and four screens for the auxiliary areas, are displayed. The target area (observation area) is displayed in the foreground, and the auxiliary areas are displayed at the same time. In addition, to give priority to grasping the entire three-dimensional structure, the Z-stack image data for the target area (observation area) and the auxiliary areas is displayed such that their focal positions are spaced apart at equal intervals. This enables detailed observation of the target area (observation area), and the concurrent use of multiple auxiliary areas achieves a display method in which the entire three-dimensional structure is easily grasped.
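One possible way to pick the layers shown at equal focal-position intervals, given the number of Z stacks to be displayed, is sketched below; the use of numpy.linspace and the rounding to the nearest existing layer are assumptions of the sketch.

    import numpy as np

    def select_display_layers(num_layers, num_display):
        # Indices of the layers shown as the observation area and the auxiliary
        # areas, spaced at (approximately) equal intervals over the Z stack.
        indices = np.linspace(0, num_layers - 1, num_display)
        return sorted(set(int(round(i)) for i in indices))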
By using the auxiliary-area presentation screen described above, a user can easily perform detailed observation in the depth direction (Z direction), and can easily grasp the three-dimensional structure of a cell clump.
As described above, also upon switching of the target area (observation area), the target area in the observation image and target areas (corresponding areas) in Z-stack image data whose focal positions are different from that of the observation area are constantly displayed, achieving easy grasping of the three-dimensional structure of a cell clump.
According to the embodiments described above, the three-dimensional structure of a subject (sample) can be easily grasped in observation of the subject (sample) using digital images.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2011-286779, filed Dec. 27, 2011, and Japanese Patent Application No. 2012-226899, filed Oct. 12, 2012, which are hereby incorporated by reference herein in their entirety.
International filing: PCT/JP2012/008024, filed Dec. 14, 2012 (WO); 371(c) date: Jun. 26, 2014.