1. Field of the Invention
The present invention relates to an imaging apparatus, and more particularly to an imaging apparatus which divides and images an area using a plurality of image sensors which are discretely arranged.
2. Description of the Related Art
In the field of pathology, a virtual slide apparatus is available, in which a sample placed on a slide is imaged and the image is digitized so that a pathological diagnosis can be made on a display. It is used instead of an optical microscope, another tool used for pathological diagnosis. By digitizing an image for pathological diagnosis using a virtual slide apparatus, a conventional optical microscope image of the sample can be handled as digital data. The expected merits are quick remote diagnosis, explanation of a diagnosis to a patient using digital images, sharing of rare cases, and more efficient education and practical training.
In order to digitize the operation of an optical microscope using the virtual slide apparatus, the entire sample on the slide must be digitized. By digitizing the entire sample, the digital data created by the virtual slide apparatus can be observed with viewer software running on a PC or WS. If the entire sample is digitized, however, an enormous number of pixels is required, normally several hundred million to several billion. Therefore in a virtual slide apparatus, the area of a sample is divided into a plurality of areas, which are imaged using a two-dimensional image sensor having several hundred thousand to several million pixels, or a one-dimensional image sensor having several thousand pixels. To implement divided imaging, it is necessary to tile (merge) the plurality of divided images so as to generate an entire image of the test sample.
The tiling method using one two-dimensional image sensor captures images of a test sample a plurality of times while moving the two-dimensional image sensor relative to the test sample, and acquires the entire image of the test sample by pasting the plurality of captured images together without gaps. A problem of the tiling method using a single two-dimensional image sensor is that imaging takes longer as the number of divided areas in the sample increases.
As a technology to solve this problem, the following has been proposed (see Japanese Patent Application Laid-Open No. 2009-003016). Japanese Patent Application Laid-Open No. 2009-003016 discloses a microscope having an image sensor group formed of a plurality of two-dimensional image sensors disposed within the field of view of an objective lens, which images the entire area by capturing images a plurality of times while changing the relative positions of the image sensor group and the sample.
In the microscope disclosed in Japanese Patent Application Laid-Open No. 2009-003016, the plurality of two-dimensional image sensors are equally spaced. If the imaging area on the object plane were projected onto the imaging plane of the image sensor group without distortion, image data could be generated efficiently by equally spacing the two-dimensional image sensors. In reality, however, the imaging area on the imaging plane is distorted, as shown in
With the foregoing in view, it is an object of the present invention to provide a configuration to divide an area and to image the divided areas using a plurality of image sensors which are discretely disposed, so as to efficiently obtain the image data of each of the divided areas.
The present invention in its first aspect provides an imaging apparatus which, with an imaging target area of an object being divided into a plurality of areas, images each of the divided areas using a two-dimensional image sensor, the apparatus including: a plurality of two-dimensional image sensors which are discretely disposed; an imaging optical system which enlarges an image of the object and forms an image thereof on an imaging plane of the plurality of two-dimensional image sensors; and a moving unit which moves the object in order to execute a plurality of times of imaging while changing the divided area to be imaged by each of the two-dimensional image sensors, wherein at least a part of the plurality of divided areas is deformed or displaced on the imaging plane due to aberration of the imaging optical system, and each position of the plurality of two-dimensional image sensors is adjusted according to a shape and position of the corresponding divided area on the imaging plane.
The present invention in its second aspect provides an imaging apparatus which, with an imaging target area of an object being divided into a plurality of areas, images each of the divided areas using a two-dimensional image sensor, including: a plurality of two-dimensional image sensors which are discretely disposed; an imaging optical system which enlarges an image of the object and forms an image thereof on an imaging plane of the plurality of two-dimensional image sensors; a moving unit which moves the object in order to execute a plurality of times of imaging while changing the divided area to be imaged by each of the two-dimensional image sensors; and a position adjustment unit which adjusts each position of the plurality of two-dimensional image sensors, wherein at least a part of the plurality of divided areas is deformed or displaced on the imaging plane due to aberration of the imaging optical system, and when aberration of the imaging optical system changes, the position adjustment unit changes a position of each of the two-dimensional image sensors according to the deformation or displacement of each divided area due to the aberration after change.
According to the present invention, a configuration, to divide an area and to image the divided areas using a plurality of image sensors which are discretely disposed, can be provided so as to efficiently obtain the image data of each of the divided areas.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The light source 101 is a unit for generating illumination light for imaging. For the light source 101, a light source having emission wavelengths of the three RGB colors is used, such as a configuration that electrically switches among monochromatic lights using LEDs, LDs or the like, or a configuration that mechanically switches among monochromatic lights using a white LED and a color wheel. In this case, monochrome image sensors, which have no color filters, are used for the image sensor group of the imaging unit 105. The light source 101 and the imaging unit 105 operate synchronously. The light source 101 sequentially emits R, G and B light, and the imaging unit 105 exposes and acquires the corresponding R, G and B images in synchronization with the emission timings of the light source 101. One captured image is generated from the R, G and B images by the development/correction unit 106 in a subsequent step. The illumination optical system 102 guides the light of the light source 101 efficiently to an imaging reference area 110a on the slide 103.
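The following is a minimal sketch of this color-sequential capture, assuming hypothetical driver objects light_source and imaging_unit; emit(), off() and expose() are placeholder methods, not interfaces defined by this apparatus.

```python
import numpy as np

def capture_color_sequential(light_source, imaging_unit):
    """Acquire three synchronized monochrome exposures and stack them."""
    planes = {}
    for color in ("R", "G", "B"):
        light_source.emit(color)               # start monochromatic emission
        planes[color] = imaging_unit.expose()  # exposure synced to the emission
        light_source.off()
    # The development/correction unit combines the three monochrome planes
    # into one color image; a simple stack stands in for that step here.
    return np.stack([planes["R"], planes["G"], planes["B"]], axis=-1)
```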
The slide (preparation) 103 is a supporting plate for a sample that is the target of pathological diagnosis. The slide 103 has a slide glass on which the sample is placed, and a cover glass under which the sample is sealed using a mounting solution.
The imaging optical system 104 enlarges (magnifies) and guides the light transmitted through the imaging reference area 110a on the slide 103, and forms an imaging reference area image 110b, which is a real image of the imaging reference area 110a, on the imaging plane of the imaging unit 105. Due to aberrations of the imaging optical system 104, the imaging reference area image 110b is deformed or displaced. Here it is assumed that the imaging reference area image is deformed into a barrel shape by distortion. The effective field of view 112 of the imaging optical system 104 has a size that includes both the image sensor group 111a to 111l and the imaging reference area image 110b.
The imaging unit 105 is constituted by a plurality of two-dimensional image sensors which are discretely arrayed two-dimensionally in the X direction and the Y direction, with spacing therebetween. In this embodiment, twelve two-dimensional image sensors 111a to 111l, arranged in four columns and three rows, are provided. These image sensors may be mounted on the same board or on separate boards. To distinguish individual image sensors, an alphabetic character is attached to the reference number: a to d sequentially from the left in the first row, e to h in the second row, and i to l in the third row; for simplification, the image sensors are denoted as “111a to 111l” in the drawing. The same applies to the other drawings.
Since the positional relationship between the image sensor group 111a to 111l and the effective field of view 112 of the imaging optical system is fixed, the positional relationship of the deformed imaging reference area image 110b on the imaging plane with respect to the image sensor group 111a to 111l is also fixed. The positional relationship between the imaging reference area 110a and the imaging target area 501, in the case of imaging the entire imaging target area 501 while moving it using the moving mechanism 113 (XY stage) disposed on the slide side, will be described later with reference to
The development/correction unit 106 performs development processing and correction processing on the digital data acquired by the imaging unit 105. Its functions include black level correction, DNR (Digital Noise Reduction), pixel defect correction, brightness correction for individual variation among image sensors and for shading, development processing, white balance processing, enhancement processing, and correction of distortion and chromatic aberration of magnification. The merging unit 107 performs processing to merge the plurality of captured images (divided images). The images to be connected are those produced after the development/correction unit 106 has corrected the distortion and the chromatic aberration of magnification.
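As an illustration, the correction chain could look like the following sketch for one monochrome frame; the ordering of the steps and the contents of the calib table are assumptions for illustration, and noise reduction, enhancement and the aberration corrections are elided.

```python
import numpy as np

def develop_and_correct(raw, calib):
    """Pass one monochrome frame through a simplified correction chain."""
    img = raw.astype(np.float32) - calib["black_level"]   # black level correction
    img = np.clip(img, 0.0, None)
    img = img * calib["shading_gain"]                     # shading / per-sensor brightness
    img = np.where(calib["defect_mask"], calib["defect_fill"], img)  # pixel defects
    img = img * calib["wb_gain"]                          # white balance (per color)
    # DNR, enhancement and distortion / lateral-color correction would follow.
    return img
```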
The compression unit 108 performs sequential compression processing on each block image output from the merging unit 107. The transmission unit 109 outputs the signals of the compressed block images to a PC (Personal Computer) or WS (Workstation). For the signal transmission to the PC or WS, it is preferable to use a communication standard which allows large-capacity transmission, such as Gigabit Ethernet.
In the PC or WS, each received compressed block image is sequentially stored in a storage. To read a captured image of a sample, viewer software is used. The viewer software reads the compressed block images in the read area, decompresses them, and displays the image on a display. With this configuration, a high-resolution, large-screen image of a sample of about 15 mm × 10 mm can be captured, and the acquired image can be displayed.
A configuration in which the light source 101 sequentially emits monochromatic lights and the object is imaged by the monochrome image sensor group 111a to 111l has been described here, but the light source may instead be a white LED, and the image sensors may be image sensors with color filters.
(Configuration of Image Sensor)
(Aberration of Imaging Optical System)
In the imaging optical system 104, various aberrations, such as distortion and chromatic aberration of magnification, can be generated due to the shapes and optical characteristics of the lenses. The phenomena of an image being deformed or displaced due to the aberrations of the imaging optical system will be described with reference to
(Arrangement of Image Sensors)
The object plane wire frame 301 on the object plane (on the slide) is observed on the imaging plane (on the effective image areas of the two-dimensional image sensors) as the imaging plane wire frame 302, which is deformed into a barrel shape by the influence of distortion. The oblique-line (hatched) areas on the object plane indicate the divided areas imaged by the respective two-dimensional image sensors. The divided areas on the object plane are equally spaced rectangles of the same size, but on the imaging plane where the image sensor group is disposed, the divided areas have deformed shapes and are unequally spaced. Normally the test sample on the object plane is formed on the imaging plane as an inverted image, but in order to clearly show the correspondence of the divided areas, the object plane and the imaging plane are illustrated as if they were in an erect relationship.
Now the position of each effective image area 201a to 201l of the two-dimensional image sensors is adjusted according to the shape and position of the corresponding divided area (the divided area to be imaged) on the imaging plane. Specifically, the position of each two-dimensional image sensor is determined so that each projection center 401a to 401l, that is, the center of each effective image area 201a to 201l projected onto the object plane, matches the center of the corresponding divided area on the object plane. In other words, as shown in
The size of each two-dimensional image sensor (the size of each effective image area 201a to 201l) is determined such that the effective image area at least includes the corresponding divided area. The sizes of the two-dimensional image sensors may be the same as or different from one another. In the present embodiment, the latter configuration is used; that is, the size of the effective image area of each individual two-dimensional image sensor differs according to the size of the corresponding divided area on the imaging plane. Since the shapes of the divided areas are distorted on the imaging plane, the size of the circumscribed rectangle of a divided area is defined as the size of that divided area. In concrete terms, the size of the effective image area of each two-dimensional image sensor is set to the size of the circumscribed rectangle of the divided area on the imaging plane, or to that size plus a margin of predetermined width required for the merging processing.
The arrangement of each two-dimensional image sensor should be performed during adjustment of the product at the factory, for example, after calculating the arrangement center and the size of each two-dimensional image sensor in advance based on design values or measured values of the distortion.
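This advance calculation could be sketched as follows, assuming a simple radial polynomial r' = r(1 + Kr²) as the distortion model (K < 0 gives the barrel shape in the figures); the real apparatus would substitute the design values or measured values of the lens, and all coordinates and constants here are normalized placeholders.

```python
import numpy as np

K = -0.05        # assumed radial distortion coefficient; K < 0 -> barrel distortion
MARGIN = 0.005   # assumed merging margin, in the same normalized units

def distort(p):
    """Map an ideal imaging-plane point to its aberrated position."""
    p = np.asarray(p, dtype=float)
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1.0 + K * r2)

def sensor_center_and_size(area_min, area_max, samples=9):
    """Placement center and circumscribed-rectangle size for one divided area."""
    xs = np.linspace(area_min[0], area_max[0], samples)
    ys = np.linspace(area_min[1], area_max[1], samples)
    # Sample the whole boundary: the extremes of a distorted rectangle
    # need not lie at its corners.
    border = ([(x, area_min[1]) for x in xs] + [(x, area_max[1]) for x in xs]
              + [(area_min[0], y) for y in ys] + [(area_max[0], y) for y in ys])
    pts = np.array([distort(p) for p in border])
    center = distort((np.asarray(area_min) + np.asarray(area_max)) / 2.0)
    size = (pts.max(axis=0) - pts.min(axis=0)) + 2.0 * MARGIN
    return center, size
```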
By adjusting the arrangement and the size of each two-dimensional image sensor in consideration of the aberrations of the imaging optical system, as described above, the effective image area of each two-dimensional image sensor can be used efficiently. As a result, the image data required for merging images can be obtained using smaller two-dimensional image sensors than in the prior art (FIG. 11). Since the size and the spacing of the divided areas on the object plane are uniform, and the arrangement of the two-dimensional image sensors on the imaging plane side is adjusted on this basis, feed control of the object during divided imaging can be simple equidistant movement. The divided imaging procedure will now be described.
(Procedure of Divided Imaging)
(1) to (4) of
(1) of
In order to perform the merging processing in a post-stage by a simple sequence, it is assumed that the number of read pixels in the Y direction is approximately the same for all divided areas which lie side by side in the X direction on the object plane. For the merging unit 107 to perform merging processing, an overlapped area (margin) is required between adjacent image sensors, but the overlapped area is omitted here to simplify the description.
As described above, the entire imaging target area can be imaged without gaps by performing the imaging processing four times (the moving mechanism moves the slide three times) using the image sensor group.
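Because the divided areas are uniform on the object plane, the stage path reduces to equidistant moves. The sketch below assumes, for illustration, that the sensors are pitched two divided areas apart in each direction so that a 2 × 2 pattern of exposures fills the gaps; the pitch values and the ordering of the positions are placeholders.

```python
PITCH_X, PITCH_Y = 1.25, 1.25  # assumed divided-area pitch on the slide, in mm

def stage_positions(x0, y0):
    """Slide positions for the four exposures (three moves in between)."""
    return [(x0, y0),                      # N = 1: initial position
            (x0 + PITCH_X, y0),            # N = 2
            (x0 + PITCH_X, y0 + PITCH_Y),  # N = 3
            (x0, y0 + PITCH_Y)]            # N = 4
```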
(Imaging Processing)
In step S601, the imaging area is set. In the present embodiment, an imaging target area of 15 mm × 10 mm is set according to the location of the test sample on the slide. The location of the test sample may be specified by the user, or may be determined automatically based on the result of measuring or imaging the slide in advance.
In step S602, the slide is moved to the initial position where the first imaging (N=1) is executed. In the case of
In step S603, the Nth imaging is executed within the angle of view of the lens. The image data obtained by each image sensor is sent to the development/correction unit 106, where the necessary processing is performed, and is then used for the merging processing in the merging unit 107. As
In step S604, it is determined whether imaging of the entire imaging target area is completed. If the imaging of the entire imaging target area is not completed, processing advances to S605. If the imaging of the entire imaging target area is completed, that is, if N=4 in the case of this embodiment, the processing ends.
In step S605, the moving mechanism moves the slide to the position for executing the Nth imaging (N≧2). In the case of
In step S606, emission of a monochromatic light source (R light source, G light source or B light source) and the exposure of the image sensor group are started. The lighting timing of the monochromatic light source and the exposure timing of the image sensor group are controlled to synchronize during operation.
In step S607, a single monochromatic image signal (R image signal, G image signal or B image signal) is read from each image sensor.
In step S608, it is determined whether imaging of all of the RGB images is completed. If imaging of all the RGB images is not completed, processing returns to S606; if completed, processing ends.
According to these processing steps, the entire imaging target area is imaged by capturing each of the R, G and B images four times.
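Put together, the flow S601 to S608 could be sketched as the following control loop, assuming hypothetical stage, light_source and sensors driver objects (move_to, emit, off and read are placeholder methods); error handling is omitted.

```python
def image_target_area(stage, light_source, sensors, positions):
    """Image the whole target area: one color-sequential exposure per position."""
    frames = []
    for pos in positions:                  # S602 / S605: move the slide
        stage.move_to(pos)
        colors = {}
        for color in ("R", "G", "B"):      # S606: synchronized emission and exposure
            light_source.emit(color)
            colors[color] = [s.read() for s in sensors]   # S607: read each sensor
            light_source.off()
        frames.append(colors)              # S603 / S608: Nth imaging complete
    return frames                          # S604: all positions imaged
```

Here, positions would be the four stage positions from the earlier sketch.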
According to the configuration of the present embodiment described above, the arrangement and size of each two-dimensional image sensor are adjusted in consideration of the aberrations of the imaging optical system, hence the image data required for image merging can be obtained using smaller two-dimensional image sensors than in the prior art. As a result, the acquisition of unnecessary data (data on areas unnecessary for image merging) is minimized, the data volume decreases, and data transmission and image processing become more efficient.
As a method for obtaining image data efficiently, a method of changing the pixel structure (the shapes and arrangement of pixels) on the two-dimensional image sensor itself according to the distorted shape of the divided area is conceivable, besides the method of the present embodiment. This method, however, is impractical to implement, since its design and manufacturing costs are high and its flexibility is poor. An advantage of the method of the present embodiment, on the other hand, is that a general-purpose two-dimensional image sensor, in which identically shaped pixels are equally spaced, can be used without alteration, as shown in
A second embodiment of the present invention will now be described. In the first embodiment it was described that it is preferable to change the size of the effective image area of each two-dimensional image sensor according to the shape of the individual divided area, in terms of efficient use of the effective image area. In the present embodiment, on the other hand, a configuration using two-dimensional image sensors of the same specifications will be described, in order to simplify the configuration, reduce cost and improve maintainability.
In the description of the present embodiment, detailed description of the portions that are the same as in the above-mentioned first embodiment is omitted. The general configuration of the imaging apparatus shown in
(Arrangement of Image Sensors)
Just like the first embodiment, the position of each two-dimensional image sensor is determined so that each projection center 401a to 401l, that is, the center of each effective image area 201a to 201l projected onto the object plane, matches the center of the corresponding divided area on the object plane. A difference from the first embodiment (
(Data Read Method)
In the case of the method of
The random read addresses in
In practice, an overlapped area (margin) is required between the images of adjacent divided areas for the merging unit 107 to perform the merging (connecting) processing. Therefore each two-dimensional image sensor reads (extracts) data on an area large enough to include this overlapped area. The overlapped area, however, is omitted here to simplify the description.
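A sketch of such a read, assuming rectangular ROI control; whether the device supports per-pixel random reading or only a rectangular ROI, the region to transfer can be computed the same way. The margin value and the frame and area coordinates are placeholders.

```python
MARGIN = 16  # assumed overlap in pixels required by the merging unit

def read_roi(sensor_frame, area_min, area_max):
    """Extract only the divided area (plus the merging margin) from a frame.

    area_min / area_max: pixel corners of the circumscribed rectangle of
    the distorted divided area, in this sensor's coordinates.
    """
    h, w = sensor_frame.shape
    x0, y0 = max(area_min[0] - MARGIN, 0), max(area_min[1] - MARGIN, 0)
    x1, y1 = min(area_max[0] + MARGIN, w), min(area_max[1] + MARGIN, h)
    return sensor_frame[y0:y1, x0:x1]
```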
(Handling of Chromatic Aberration of Magnification)
Distortion has been described thus far, and now chromatic aberration of magnification will be described with reference to
As described in
In step S801, a random read address or ROI is set again for each color of each image sensor. Here a read area of each image sensor is determined. The control unit holds a random read address or ROI for each color of RGB in advance, so as to correspond to the chromatic aberration of magnification described in
In step S802, emission of a monochromatic light source (R light source, G light source or B light source) and exposure of the image sensor group are started. The lighting timing of the monochromatic light source and the exposure timing of the image sensor group are controlled to synchronize during operation.
In step S803, a monochromatic image signal (R image signal, G image signal or B image signal) is read from each image sensor. At this time, only the image data on a necessary area is read according to the random read address or ROI, which was set in step S801.
In step S804, it is determined whether imaging of all the RGB images is completed. If imaging of all the RGB images is not completed, processing returns to S801; if completed, processing ends.
According to these processing steps, image data in which the per-color shifts of position and size due to chromatic aberration of magnification have been corrected can be obtained efficiently.
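The flow S801 to S804 could be sketched as follows, with per-color read areas that differ slightly to track the per-color magnification; the ROI values and the set_roi/emit/read methods are illustrative assumptions, not interfaces defined by this apparatus.

```python
ROI_BY_COLOR = {   # (x0, y0, x1, y1) for one sensor; placeholder values
    "R": (30, 22, 1950, 1462),
    "G": (32, 24, 1948, 1460),
    "B": (34, 26, 1946, 1458),
}

def capture_with_color_roi(light_source, sensor):
    """One color-sequential exposure with a per-color read area."""
    images = {}
    for color in ("R", "G", "B"):
        sensor.set_roi(ROI_BY_COLOR[color])   # S801: set the read area for this color
        light_source.emit(color)              # S802: synchronized emission and exposure
        images[color] = sensor.read()         # S803: read only the set area
        light_source.off()
    return images                             # S804: all three colors captured
```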
(Configuration for Data Read Control)
For random reading and ROI control of the two-dimensional image sensors, the distortion data of the objective lens is stored in the aberration data storage unit 903 in advance. The distortion data need not indicate the distorted forms themselves; it can be position data for performing random reading or ROI control, or data that can be converted into such position data. The imaging signal control unit 902 receives objective lens information from the CPU 904, and reads the corresponding distortion data of the objective lens from the aberration data storage unit 903. Then the imaging signal control unit 902 drives the imaging control units 901a to 901l based on the distortion data thus read.
The chromatic aberration of magnification data is stored in the aberration data storage unit 903 in order to handle the chromatic aberration of magnification described in
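One plausible organization of such a storage unit is a lookup table keyed by objective lens and color, as sketched below; the keys, ROI values and set_roi method are illustrative placeholders, and per-sensor differences in the read areas are omitted for brevity.

```python
ABERRATION_STORE = {   # (lens, color) -> read area (x0, y0, x1, y1); illustrative only
    ("objective_10x", "R"): (30, 22, 1950, 1462),
    ("objective_10x", "G"): (32, 24, 1948, 1460),
    ("objective_10x", "B"): (34, 26, 1946, 1458),
    # ... entries for other objective lenses and magnifications
}

def configure_read_areas(imaging_controls, lens_id, color):
    """Drive every imaging control unit from the stored read-position data."""
    roi = ABERRATION_STORE[(lens_id, color)]
    for ctrl in imaging_controls:
        ctrl.set_roi(roi)
```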
With the above-mentioned configuration, image data can be generated efficiently by performing random reading and ROI control of the two-dimensional image sensors, even when two-dimensional image sensors having effective image areas of the same size are used. According to the configuration of the present embodiment, two-dimensional image sensors and imaging control units of the same specifications can be used, hence the configuration can be simplified, cost can be reduced, and maintainability can be improved. In the configuration of the present embodiment, only the necessary data is read from the image sensors, but all the data may instead be read from the image sensors, just like in the first embodiment, with the necessary data extracted in a post-stage (the development/correction unit 106).
In the above embodiments, distortion was treated as static and fixed, but in the third embodiment, distortion which changes dynamically will be considered.
If the magnification of the objective lens of the imaging optical system 104 is changed, or if the objective lens itself is replaced with another lens, for example, the aberration changes due to the change of the lens shape or optical characteristics, and the shape and position of each divided area on the imaging plane change accordingly. It is also possible that the aberration of the imaging optical system 104 changes during use of the imaging apparatus, due to changes in environmental temperature and heat from the illumination light. Therefore it is preferable to install a sensor that detects a change of magnification of the imaging optical system 104 or replacement of the lens, or a sensor that measures the temperature of the imaging optical system 104, so that the change of aberration can be handled adaptively based on the detection result.
In concrete terms, in the configuration shown in
An example of the configuration to mechanically rearrange each image sensor according to the change of magnification or replacement of the objective lens will be described with reference to
Distortion data for each magnification and each type of the objective lens is stored in the aberration data storage unit 1003. The distortion data need not indicate the distorted forms themselves; it can be position data for driving the XYθ stages, or data that can be converted into such position data. The lens detection unit 1005 detects the change of the objective lens and notifies the CPU 1004 of the change. Upon receiving the signal notifying the change of the objective lens from the CPU 1004, the XYθ stage control unit 1002 reads the corresponding distortion data of the objective lens from the aberration data storage unit 1003. Then the XYθ stage control unit 1002 drives the XYθ stages 1001a to 1001l based on the distortion data thus read.
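A minimal sketch of this rearrangement, assuming the stored data has already been converted into one (x, y, θ) target per sensor stage; the lens identifiers, position values and move_to method are placeholders.

```python
POSITIONS_BY_LENS = {   # lens -> one (x, y, theta) target per sensor stage; placeholders
    "objective_20x": [(0.0, 0.0, 0.0), (5.2, 0.1, 0.3)],   # ... 12 entries in practice
    "objective_40x": [(0.0, 0.0, 0.0), (5.4, 0.2, 0.5)],   # ... 12 entries in practice
}

def on_lens_change(lens_id, xy_theta_stages):
    """Rearrange the sensor group to suit the distortion of the new objective."""
    for stage, (x, y, theta) in zip(xy_theta_stages, POSITIONS_BY_LENS[lens_id]):
        stage.move_to(x=x, y=y, theta=theta)
```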
According to the configuration of the present embodiment described above, the image data required for image merging can be generated efficiently, just like in the first and second embodiments. In addition, when the objective lens is changed, the change of distortion caused by changing the magnification or replacing the lens can be handled by adaptively changing the arrangement of the two-dimensional image sensors. Since two-dimensional image sensors having effective image areas of approximately the same size are used as the image sensor group, the same mechanism can be used for the movement control of each two-dimensional image sensor, hence the configuration can be simplified and cost can be reduced.
In order to handle the change of aberrations depending on temperature, a temperature sensor for measuring the temperature of the lens barrel of the imaging optical system 104 may be disposed in the configuration in
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2010-273386, filed on Dec. 8, 2010 and Japanese Patent Application No. 2011-183091, filed on Aug. 24, 2011, which are hereby incorporated by reference herein in their entirety.