The application relates to an autofocus method, in which light from a light source is focused at a measurement light focus in a sample and is reflected from there and the reflected light is guided through an optical system and at least one aperture in two light paths onto at least two detector elements.
Two methods are known for automatically focusing microscopes on a sample:
In microscopy, a sample is normally made up of a sample material to be analyzed, which is applied to a transparent specimen slide and is covered with a thin transparent cover slip. A position measurement of the sample material frequently leads to measuring the position of one of the reflection planes on the layer interface of the sample. Since a reflection at the air/cover slip interface layer is much stronger than a reflection at an interface layer on the sample material, the air/cover slip reflection typically outshines the reflection at an interface layer on the sample material, which is more suitable for an autofocus.
It is known from U.S. Pat. No. 6,130,745 to measure the position of a strongly reflecting layer above or below the sample and to draw a conclusion from the layer thickness about the position of the sample material, which is arranged at a known distance from the reflecting layer. Typically, however, when using high-resolution systems in the case of the described sample, the tolerances in the layer thickness (e.g., of the cover slip or of the specimen slide) are greater than the depth of field of the imaging system, and focusing cannot always be guaranteed with such a method.
The object of the invention is to disclose an autofocus method with which an optical system, e.g., a microscope, can be focused quickly and precisely on a reflecting layer of a sample.
This object is attained by a method of the type cited at the outset, in which, according to the invention, the measurement light focus is moved in layers of the sample which reflect light to different extents and the detector elements are arranged in such a way that, in this case, profiles of a radiation property registered by the detector elements are different from each other and a focus position is set in a manner dependent on the profiles. Due to the different profiles of the registered radiation property, the position of an especially distinguished layer in the sample, e.g., a reflecting interface, may be found, and the focus may be set thereon or on a target focus plane arranged at a known distance therefrom.
Surfaces, e.g., interfaces, are also to be considered layers in the following. One of the layers is advantageously an interface. The light paths are expediently separated from one another at least partially, in particular they are separated from one another in the optical system. The separation is accomplished expediently by a shaded area between the light paths. The shading can be produced by a diaphragm.
The profile may be detected by punctiform measurements at several positions of the measurement light focus, expediently separately for each light path. The radiation property of the reflected light may be the radiation intensity. The adjusted focus position is a desired focus position in which the optical system is arranged expediently in relation to the sample in such a way that an image acquisition of the sample produces the desired images.
In addition, the invention allows the optical path length of the light paths to be specified. The optical path length in this case may be measured from the sample to a diaphragm in front of a detector. The optical path lengths of the light paths are expediently selected to be different. A deviation signal relative to a selected reflecting/scattering sample structure can be generated hereby through the separate analysis of the light paths with their different optical path lengths, and the focus can be set thereon or on a target focus plane arranged at a known distance therefrom. A layer reflecting light may be a reflecting and/or scattering sample structure and may be in particular an interface layer, especially an interface layer or interface adjacent to the sample material.
The autofocus method is a method for automatically focusing the optical system on a desired focus position or target focus plane, e.g., within the sample. If focusing is on the target focus plane, the optical system can form a sharp image of an object arranged there, in an image plane in which a detector or a camera is expediently arranged. After the autofocusing, the sample may be depicted with the aid of a camera.
The sample may include sample material prepared for examination, a slide to which it is applied, and a cover slip which covers it. A layer structure that is transparent for the autofocus light, at whose layer interfaces a reflection or scattering of the autofocus light occurs, is likewise suitable. The sample does not have to be transparent for the autofocus light beyond the layer intended for focusing. The reflection/scattering at an interface layer, which is described here, may also be caused by a reflecting/scattering particle layer or defect layer in the material. The layer interfaces may be pretreated (e.g., mirrored) in order to increase the signals for the autofocus system.
The target focus plane is that plane within the sample on which the optical system is supposed to focus or from which the desired focus position is supposed to be offset by a predetermined distance. The target focus plane is expediently a reflection plane on which incident light is reflected. It may be an interface layer within the sample, e.g., the plane of a glass/sample material interface. Similarly, the scattering on the sample itself could be utilized.
The light guided to the light paths expediently originates from a common light source, wherein not just the originally radiating material but also a reflective layer, an aperture or the like is designated as the light source. A one-dimensional or two-dimensional light source is typically used. The two light paths are expediently formed symmetrically to each other and are arranged in particular symmetrically to the optical axis of the optical system.
The light source, which in particular is punctiform or linear or strip-shaped or is made up of several points of light, is focused at the measurement light focus in the sample by the optics. It may be depicted in this way in the sample. The measurement light focus is normally punctiform, but depending on the shape of the light source may alternatively be one-dimensional or two-dimensional and e.g., include several points. The measurement light focus is situated advantageously in the focus or near to the focus of the optical system being focused. The focus of the optical system may be a focus plane. The optical system forms a sharp image in an image plane of an object lying in the focus of the optical system. It is also possible for the measurement light focus to be located at a preset distance from the focus of the optical system. Through this, the measurement light focus may be adjusted on a reflection plane, e.g., an interface layer of the cover slip/sample material, wherein the focus of the optical system e.g., is located 20 μm away from the interface layer in the sample material.
A portion of the light incident on the sample is reflected. In the following, a reflection and/or a scattering may be understood as “reflected.” The layer reflecting the light may be a reflecting and/or scattering layer. When we speak of reflection in the following, a scattering is meant to be included therein.
The two light paths are expediently guided symmetrically around the optical axis of the optical system. They advantageously strike the sample in different directions so that their reflections are radiated in different directions and thus may be easily analyzed separate from one another. It facilitates the detection of the individual layers in a layer structure, if the angle of the incident light paths is selected in such a way that reflections of adjacent layers do not overlap one another. If a scattering layer is used to determine the focus position, then the separation of the light paths should first occur in the detection path.
The light of the autofocus system advantageously has a different frequency than the light which is used to examine or form an image of the sample. The light property is expediently the light intensity.
The optical system may be that of a microscope. It has an optical axis, which is normally aligned perpendicularly to a sample plane in which the sample extends.
The light paths between the light source and the reflection layer or between the reflection layer and the detector may be designed as illumination paths or detection paths. An autofocus light path therefore is made up of an illumination path and a detection path. The difference in the optical path length may be generated in the illumination path, in the detection path, or in both paths. A realization of the detection path is described in the following.
The measurement of the optical path length of the paths takes place by means of at least one diaphragm, in particular one diaphragm respectively in front of each detector. Because of a path-length-dependent position of the light paths at the diaphragm, it is possible to draw conclusions about the optical path length of the system. A possible realization is described in the following:
The detector elements are arranged in such a way, e.g., relative to an element of the optical system such as a diaphragm, that profiles of a radiation property registered by the detector elements are different from one another. The element of the optical system may be a diaphragm, e.g., directly in front of the detector elements, a beam splitter, a mirror or another suitable element.
If light from a light path is reflected on two stacked layers of the sample, then the light path or the optical path length of the light from the one layer, e.g., to the detector or a diaphragm in front of the detector, is longer than the light path or the optical path length from the other layer. Because of this, the two light paths from the two layers, e.g., to the detector elements, may be different. The light paths expediently run so that they are blocked in different ways at a diaphragm in front of the detector elements, e.g., such that the one light path is blocked completely or partially and the other is blocked partially or not at all. In this way, it is possible for the light paths to be detected individually and without interference from a light path from another layer.
The main reflection from a glass/air interface layer above the sample is expediently masked by a diaphragm in front of the detector elements while the measurement light focus is moved through layers underneath which reflect light to different extents. Through this, light from these layers may pass through the diaphragm. In this way, layers that reflect considerably more weakly than the glass/air interface layer may be detected.
The diaphragm or its aperture is advantageously arranged in an image plane of the optical system, i.e., in a plane in which an object focused by the optical system is projected. The aperture may be an image of the light source.
The light which is reflected from the measurement light focus is expediently projected in the plane of the diaphragm in accordance with the shape of the measurement light focus. The diaphragm is advantageously arranged such that it allows light from both light paths reflected in the measurement light focus to pass through, in particular to an equal extent. In this way, the diaphragm expediently blocks light which was reflected above or below the measurement light focus, either completely or asymmetrically with respect to the two light paths.
The aperture is expediently arranged not, as usual, around the optical axis of the optical system, but rather asymmetrically to the optical axis, in particular asymmetrically to the optical axis of the two light paths at the location of the aperture. In particular, it is arranged completely outside of the optical axis. Through this it is possible to achieve in a simple manner a selection of the one or the other light path for a separate analysis at different positions of the measurement light focus.
An especially precise focusing may be achieved if the profiles are detected continuously.
In an advantageous embodiment of the invention, a focus of the optical system is adjusted such that the signals of the detector elements are in a fixed ratio to one another. In the case of an incidence of light on the detector elements in a fixed ratio, a position of symmetry between the light paths and thus the target focus plane may be detected in a simple way. This may be accomplished even more simply if the signals are equally strong. The difference in the path length of the paths is selected in such a way that, when the signals are superimposed, the signals overlap on a flank and therefore have a point of intersection. The signals are equally strong at this point of intersection. With the aid of a zero crossing of the differential signal, the same strength of the signals may be detected in a simple manner.
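By way of illustration only, the following sketch (not part of the disclosure; the array names and the toy signal shapes are assumed) shows how such a target position could be determined from recorded signal profiles by locating the zero crossing of the differential signal:

```python
import numpy as np

def find_target_position(z, s1, s2):
    """Locate the target position from two recorded detector signal profiles.

    z  : focus positions (e.g., actuator positions) at which the signals were sampled
    s1 : signal profile of the first detector element
    s2 : signal profile of the second detector element

    Returns the position at which the differential signal s1 - s2 crosses zero,
    i.e., where both signals are equally strong.
    """
    d = np.asarray(s1, dtype=float) - np.asarray(s2, dtype=float)
    crossings = np.where(np.diff(np.sign(d)) != 0)[0]   # sign changes of s1 - s2
    if crossings.size == 0:
        raise ValueError("signals never intersect in the scanned range")
    i = crossings[0]
    z0, z1, d0, d1 = z[i], z[i + 1], d[i], d[i + 1]
    return z0 - d0 * (z1 - z0) / (d1 - d0)              # linear interpolation

# toy example with two overlapping flanks intersecting near z = 0.5
z = np.linspace(0.0, 1.0, 101)
s1 = np.exp(-((z - 0.45) / 0.1) ** 2)
s2 = np.exp(-((z - 0.55) / 0.1) ** 2)
print(find_target_position(z, s1, s2))                  # approximately 0.5
```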
Another embodiment of the invention provides for a target position of a focus of the optical system to be detected with the aid of the signals of the detector elements and for the focus to be adjusted with the aid of the detected target position by an actuator. The target position may be a position output by the actuator, e.g., the position at which the signals of the detector elements are equal. It is also possible to use this adjustment only as a pre-adjustment. Alternatively or e.g., as a precision adjustment, it is conceivable to reach the target position by a regulating process, wherein the detector signals are used as regulating input signals and a signal for controlling the actuator is used as a regulating output signal.
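A minimal regulating-loop sketch, again purely illustrative and with assumed names (read_detectors, move_actuator) and an assumed proportional gain; it uses the differential detector signal as the regulating input and an actuator correction as the regulating output:

```python
def regulate_focus(read_detectors, move_actuator, gain=0.5,
                   tolerance=1e-3, max_steps=100):
    """Drive the actuator until both detector signals are equally strong.

    read_detectors() -> (s1, s2) : current signals of the two detector elements
    move_actuator(dz)            : relative movement of the focusing element
    """
    for _ in range(max_steps):
        s1, s2 = read_detectors()
        error = s1 - s2                # regulating input: differential signal
        if abs(error) < tolerance:
            return True                # target position reached
        move_actuator(-gain * error)   # regulating output: actuator correction
    return False                       # not converged within max_steps
```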
A simple and reliable automatic focusing may be achieved if the detector elements are calibrated so that the strength of their signal, which is caused by light reflected by an interface layer, is the same. Through this the focus position is expediently adjusted in the reflecting layer or the layer reflecting light. Alternatively, the detector elements may be adjusted in such a way that their signal strength is different in a targeted manner, e.g., in order to achieve a targeted focus offset.
A good orientation when searching for the target position of the focus or the target focus plane may be achieved if the measurement light focus is moved through the target focus plane toward a sample/air interface and the reflection of the sample/air interface is used for a rough orientation.
To examine a sample it may be necessary that it be examined at different locations, e.g., if it is larger than the field of view of a microscope. To do so, after a first examination, it is moved perpendicularly to the optical axis of the optical system and then reexamined. A quick automatic focusing after such a movement can be achieved in that the signals of the detector elements, after the movement of the sample perpendicular to the optical axis of the optical system, are checked for plausibility with respect to the rough adjustment still in effect on the target focus plane. If there is plausibility, it is possible to dispense with a time-consuming complete re-focusing. The plausibility criterion may be a limit value for the difference of the signals which must not be exceeded. In addition, the plausibility test may also be used for making a rough adjustment so that, if there is plausibility, only a fine adjustment still needs to be made.
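A possible plausibility test, sketched here with an assumed signal interface and an assumed limit value (neither is prescribed in this form), simply compares the difference of the two detector signals against the limit:

```python
def focus_still_plausible(signal_1, signal_2, limit):
    """Plausibility check after a movement of the sample perpendicular to the
    optical axis: the rough adjustment on the target focus plane is considered
    still valid if the difference of the detector signals does not exceed the
    limit value; only then can a complete re-focusing be skipped and a fine
    adjustment suffices."""
    return abs(signal_1 - signal_2) <= limit
```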
Another advantageous embodiment of the invention provides for the light source to have a light pattern, which is projected in the sample. The light pattern may be one-dimensional, two-dimensional or three-dimensional and is expediently projected in a plane perpendicular to the optical axis of the optical system in the sample. In this case, the reflected light is detected from several pattern points of the light pattern respectively separated by light paths. As a result, a tilting of the target focus plane, e.g., to the optical axis, can be detected from several target positions of the several pattern points. The signals generated in this way may be utilized for regulating the autofocusing.
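One conceivable evaluation, shown here only as an assumed sketch, is a least-squares plane fit through the target positions found for the individual pattern points; the fitted slopes then describe the tilting of the target focus plane:

```python
import numpy as np

def estimate_tilt(points):
    """Estimate the tilt of the target focus plane from several pattern points.

    points : iterable of (x, y, z) tuples, where z is the target position found
             for the pattern point at lateral position (x, y).

    Returns (a, b, c) of the best-fit plane z = a*x + b*y + c; the slopes a and b
    describe the tilting of the target focus plane relative to the optical axis.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return tuple(coeffs)

# three or more pattern points are sufficient to detect a tilt
print(estimate_tilt([(0, 0, 10.0), (1, 0, 10.2), (0, 1, 10.1), (1, 1, 10.3)]))
```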
Furthermore, the invention is directed at an autofocus device with an optical system for focusing light at a measurement light focus in a sample and for guiding light reflected from there through an aperture onto at least two detector elements.
It is proposed that the autofocus device include an actuator and control means for moving an element of the optical system or the sample via the actuator in such a way that the measurement light focus is moved in layers of the sample which reflect light to different extents, wherein the detector elements are arranged in such a way that, in this case, profiles of a radiation property registered by the detector elements are different and the control means is provided for evaluating the profiles at several positions of the measurement light focus.
In the case of the movement of the element of the optical system relative to the sample, the actuator may move the element or the sample relative to a fixed reference point, e.g., the ground.
The control means is advantageously designed to control one, several or all of the above-mentioned process steps.
The autofocus device advantageously includes a measuring system which is provided to detect the distance of the element of the optical system from the sample or a distance dependent thereon, in particular in a non-optical way. As soon as the focus position is found optically, the distance may be measured with the further measuring system and maintained during illumination of the sample.
Using a color camera with a color-sensitive detector is known for making color images. A color-sensitive detector is normally limited to three colors. When using a Bayer pattern, a pixel is respectively made up of one blue, one red and two green-sensitive detector cells from whose signals all intermediate colors are composed. Because of the required four detector cells per pixel, the resolution of this type of a color detector is low.
A line spectrograph is known for achieving a high image resolution in conjunction with a high color resolution. An object is scanned line-by-line, wherein the image of a line is spread out spectrally, for example by a prism, so that a two-dimensional image of the line develops. In this way, the image is acquired and saved line-by-line and the individual line images are assembled to form a three-dimensional color image.
Also known is moving several color filters in succession in front of the detector and thus successively making several images of an object in different frequency ranges. The shots may be combined into a hyperspectral image.
An object of the invention is to disclose a method for taking an image of an object with which high-resolution color images can be made.
This object is attained by a method for taking an image of an object in which the object is projected by an optical system onto several detector elements of a detector and the image is separated, by a light filter with several filter areas that filter differently, into several differently filtered image areas. According to the invention, it is proposed that the differently filtered image areas of the image are projected simultaneously onto the detector, in particular side by side. It is possible to dispense with changing filters in front of the detector and it is also possible to take shots of the object in rapid succession.
The light filter may be a spectral filter or a polarizing filter. It may be arranged immediately in front of the detector or directly on the detector. The light filter is expediently arranged in an image plane of the optical system, wherein a distance of the light filter from the mathematical image plane of up to 1/10 of the focal length of the optical system is tolerable and can still be viewed as lying in the image plane. The light filter may be a cut-off filter, interference filter or absorber filter. The filter areas may be different in terms of spectral filtering so that the image areas are filtered differently spectrally. They may take concrete forms, for example strips, a checkerboard pattern and/or a micropattern, in which the filter areas have a length and/or width of less than 100 μm. In terms of their extent, the filter areas are expediently greater than two, in particular greater than ten, detector elements or pixels. A spatially continuously varying filter characteristic is likewise possible.
The detector may be designed as a chip and is expediently sensitive in all spectral ranges of the filter areas. It may be a CCD (charge-coupled device) or a CMOS sensor (complementary metal oxide semiconductor). The detector elements are advantageously arranged as a two-dimensional lattice in the form of a matrix. The detector is designed expediently as a monochrome detector and is sensitive in the spectral range of the structured filter.
The dimensions of the filter areas are advantageously adapted to the dimensions of the detector elements, for example in that a width and/or length of a filter area is respectively an integer multiple of a dimension of one of the detector elements of the detector, for example m×n detector elements. The light filter may be fastened directly on the detector, for example directly on a detector chip, or be deposited directly on the sensitive areas of the chip.
The filter areas correspond expediently to a structure and/or organization of the sample image or the sample or the sample receptacles. They may be as large as regular structural sections of the sample image and/or have their shape.
In an advantageous embodiment of the invention, the filter areas are moved over an image pattern of the image from one shot of the object to the next, so that every point of the image pattern is recorded in several light characteristics, in particular spectra. These values are expediently allocated to the point and can be depicted and/or stored. The image pattern in this case may be the entire image or a section of the image. The movement is a relative movement, wherein the filter areas may rest, e.g., relative to a detector housing, and the image pattern is moved, or vice versa.
In the case of multiple shots of each point in several light characteristics, for example colors, a color image may be composed from the several shots. Due to the separation of the light filter into the filter areas, when moving the light filter, only one small movement in the size of a dimension of a single filter area is sufficient so that it is not necessary for the entire light filter to be moved away from the detector and a new light filter to be moved to the detector. Because of the short path of movement, the movement may be executed very quickly.
To achieve a high light yield, it is advantageous if signal contributions in the individual detector elements within a filter area, which may be allocated to a sample area during a movement of the sample image relative to the detector, are accumulated in one value. Such accumulated values of a sample area from different filter areas may be combined into overall information about the light characteristics of the sample area.
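The accumulation could, for example, be organized as follows; the one-dimensional model, the pixel-wise shift per shot and the strip geometry are assumptions made only for this sketch:

```python
import numpy as np

def accumulate_per_sample_area(shots, shift_per_shot, strip_width):
    """Accumulate signal contributions per sample area and filter area.

    shots          : list of 1-D detector read-outs, one per shot
    shift_per_shot : assumed movement of the sample image between shots, in pixels
    strip_width    : assumed width of one strip-shaped filter area, in pixels

    While the sample image moves over the detector, a given sample area is seen by
    several detector elements behind the same filter area; their contributions are
    summed into a single value per (sample area, filter area).
    """
    accumulated = {}
    for shot_index, frame in enumerate(shots):
        for pixel, value in enumerate(np.asarray(frame, dtype=float)):
            sample_area = pixel - shot_index * shift_per_shot  # same sample point over time
            filter_area = pixel // strip_width                 # strip covering this pixel
            key = (sample_area, filter_area)
            accumulated[key] = accumulated.get(key, 0.0) + value
    return accumulated
```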
The image pattern may remain at rest at least during the movement of the filter areas to the detector, so that the filter areas are moved relative to the detector. It is also possible to move the image over the detector so that the filter areas are at rest relative to the detector. Moving the image over the detector may be accomplished with a movement of the optical system or a portion of the optical system relative to the detector. A further possibility is moving the filter and the detector relative to the optical system, which, for example, is at rest relative to a camera housing. Generally speaking, one or two of the three elements of detector, light filter and optical system may be kept at rest relative to a camera housing for example, whereas the remaining two elements or the remaining one element are/is moveable relative to the other elements.
The optical system is advantageously part of a microscope, whose housing is solidly connected to a specimen stage on which the object can be moved in the form of a sample, e.g., on a moveable tray, in particular with the aid of a motorized drive and a position control via a control means.
The object is advantageously moved relative to the optical system and the light filter and the object is acquired in several shots respectively image-section-by-image-section, wherein the position of the filter areas in the image sections is respectively unchanged. The optical system in this case may be at rest in a housing, for example of the microscope, and the object and with it the light pattern of the object are guided past the optical system, wherein the image sections and with them the filter areas move over the entire image.
If, for example, the object is elongated, e.g., in the form of a row of samples, then it is possible to record the entire object through a plurality of successively recorded image areas, wherein each image point of the object was recorded in many colors or in each color or through each filter area. In this way, it is possible to make a color image of the entire object very rapidly. It is possible to dispense with moving the color filter relative to the optical system or to the detector. Because a device for taking images of samples frequently has an actuator for the controlled movement of samples along the recording device, for example the microscope, the recording device may hereby remain at rest in an especially simple manner.
The filter areas are expediently designed as strips, which extend from one side of the image to the opposite side of the image and are aligned in terms of their longitudinal direction perpendicular to the movement direction. Also an extension from one side of the image section to the opposite side of the image section is sufficient. Every image point of the object may hereby be guided over all filter areas of the light filter in an especially simple manner.
The movement is advantageously such that an image point is moved by the width of a filter area from one shot to the next shot. The width is expediently respectively several pixels. A small overlapping area, e.g., corresponding to double the precision of the moved actuator, is meaningful in this case.
In order to achieve an especially high resolution, in particular in the case of especially interesting image sections, it is advantageous if the movement from one image acquisition to the next image acquisition is less than one image pixel. Because of the movement in the subpixel range, a subpixel resolution can be computed.
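A simple shift-and-interleave sketch (the geometry, the step size and the absence of any deconvolution are assumptions of this example) illustrates how shots displaced by sub-pixel steps could be combined on a finer grid:

```python
import numpy as np

def subpixel_combine(shots, step_fraction):
    """Combine shots taken with sub-pixel displacements into a finer sampling grid.

    shots         : list of 1-D detector read-outs; each shot is assumed to be
                    displaced by an additional fraction of a pixel
    step_fraction : displacement between shots as a fraction of one pixel,
                    e.g., 0.25 for quarter-pixel steps

    Returns a 1-D array sampled on a grid finer than the detector pixel pitch.
    """
    n_sub = int(round(1.0 / step_fraction))   # sub-steps per pixel
    n_pixels = len(shots[0])
    fine = np.zeros(n_pixels * n_sub)
    counts = np.zeros_like(fine)
    for k, frame in enumerate(shots[:n_sub]):
        # shot k samples the object at positions pixel + k * step_fraction
        fine[k::n_sub] += np.asarray(frame, dtype=float)
        counts[k::n_sub] += 1
    return fine / np.maximum(counts, 1)
```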
The specification for the movement is advantageously provided by a control means, which in particular independently detects especially interesting image areas and triggers a subpixel movement. The movement may be carried out in different modes, e.g., a subpixel mode, a filter-area-wide mode in which the movement from one shot to the next shot amounts to the width of a filter area, or a multi-pixel mode in which the movement amounts to several pixels of the detector. A control of only two of the three described modes is also possible.
In an advantageous embodiment of the invention, the light filter is a cut-off filter whose cut-off frequency varies in the spatial profile of the light filter perpendicular to the cut-off edge and in particular perpendicular to the movement. In this way, by controlling the movement increment from one shot to the next shot, the color resolution of an entire image composed from the shots can be controlled. The light filter is connected expediently to an actuator and a control means, which is used to control a movement of the light filter.
In addition, it is proposed that the light filter include two cut-off filters arranged in succession in the optical path of the image, whose cut-off frequencies in the spatial profile of the cut-off filters, respectively perpendicular to the cut-off edge (and in particular perpendicular to the movement), vary with opposite frequency responses. With a corresponding arrangement of the cut-off filters with respect to one another, a spatial transmission window can be generated hereby, which may be both spatially as well as spectrally enlarged and reduced by moving the cut-off filters relative to one another. As a result, a high variability can be achieved in the frequency and spatial range of the recorded images.
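For illustration, assuming linear edge profiles and parameter names chosen only for this sketch, the combined transmission of two such opposing cut-off filters could be modeled as follows; shifting one filter relative to the other widens or narrows the resulting window:

```python
import numpy as np

def transmission_window(x, wavelength, edge1_at_zero, edge2_at_zero, slope, shift=0.0):
    """Combined transmission of two opposing linear-edge cut-off filters.

    x             : position across the filters, along the movement direction
    wavelength    : wavelength of the incident light
    edge1_at_zero : edge wavelength of the first (long-pass) filter at x = 0
    edge2_at_zero : edge wavelength of the second (short-pass) filter at x = 0
    slope         : change of the edge wavelength per unit of position
    shift         : displacement of the second filter relative to the first;
                    increasing it enlarges the spatial and spectral window

    Returns 1.0 where the light passes both filters, otherwise 0.0.
    """
    x = np.asarray(x, dtype=float)
    edge1 = edge1_at_zero + slope * x            # edge wavelength rising with x
    edge2 = edge2_at_zero - slope * (x - shift)  # edge wavelength falling with x
    passes = (wavelength >= edge1) & (wavelength <= edge2)
    return passes.astype(float)

# only positions where both edges admit 550 nm transmit the light
x = np.linspace(0.0, 1.0, 5)
print(transmission_window(x, wavelength=550.0, edge1_at_zero=500.0,
                          edge2_at_zero=650.0, slope=60.0))
```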
An especially good spectral adaptation of the detector and light filter may be achieved if the detector has several detector regions that are different in terms of color sensitivity and respectively sensitive in one color range, and at least one respective filter area is arranged in front of every detector region. This filter area is advantageously adapted in terms of its color range to the color range of its detector region, so that the color ranges of the filter areas of the light filter are different. The adaptation is accomplished advantageously in that the transmission of the filter area is in the sensitivity range of the corresponding detector region and is not in a color range of one of the other detector regions.
The different detector regions may be arranged spatially directly side by side, for example in a cohesive matrix of detector elements, or set up spatially separated from one another so that the optical system includes one or more elements for guiding the image of the object to several detector regions, for example a dichroic mirror or the like. The detector regions are advantageously operated synchronously so that an image of the object at the detector regions is taken simultaneously in several color channels.
Another advantageous embodiment of the invention provides that the filter areas have different transmission values and the transmission values are respectively adapted to a recording characteristic of the detector, in particular to achieve a uniform image exposure with an inconstant spectral sensitivity of the detector elements. In this way, it is possible to achieve an especially good image result. The adaptation may be effected by a different size of the filter areas. Another possibility is adapting a different frequency transmission width of the filter areas to the detector. Thus, a frequency transmission width may be greater in a frequency range in which the detector is less sensitive, and less in a frequency range in which the detector is more sensitive.
In addition, it is also possible to adapt the transmission strength, i.e., a damping of the filter area, to the detector so that a higher damping is selected in a frequency range in which the detector is more sensitive than in other frequency ranges.
A high image quality may likewise be achieved if the filter areas have different transmission values and a triggering of the detector elements is adapted to the transmission value of the filter area covering them. Thus, an amplification can be boosted, an integration time can be lengthened or pixels can be consolidated if a filter area has a high damping as compared with another filter area. This makes it possible to achieve a uniform exposure of the image in all frequency ranges. If the transmission value of a filter area is especially high, it is also possible to read out only every second pixel.
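A sketch of such an adaptation (the setting names, thresholds and scaling are assumptions, not values from the application) could derive gain, integration time, binning and subsampling from the transmission value of the covering filter area:

```python
def exposure_settings(transmission, base_gain=1.0, base_integration_ms=10.0):
    """Choose detector-element settings compensating the damping of a filter area.

    transmission : transmission value of the filter area covering the element
                   (1.0 = fully transparent, small values = strong damping)

    Returns a dict with an adapted gain, integration time, pixel binning and
    read-out subsampling so that all image areas are exposed roughly uniformly.
    """
    t = max(min(transmission, 1.0), 1e-3)          # guard against division by zero
    return {
        "gain": base_gain / t,                      # boost amplification under dark filter areas
        "integration_ms": base_integration_ms / t,  # or lengthen the integration time
        "binning": 2 if t < 0.25 else 1,            # consolidate pixels for strong damping
        "read_every_nth": 2 if t > 0.9 else 1,      # read only every second pixel if bright
    }

print(exposure_settings(0.2))
```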
The different triggering of the detector elements advantageously follows a movement of the filter areas over the detector. If, for example, the light filter is moved over the detector, this movement may be detected so that each filter area may be allocated the detector elements that are covered by the area. Control of the detector elements can be adapted hereby pixel-by-pixel to the respectively allocated filter area.
Furthermore, the invention is directed at a device for taking an image of an object with a detector, which has several detector elements, an optical system for projecting the object onto the detector and a light filter with several filter areas that filter differently.
Especially high-resolution color images can be made, if the light filter is arranged so that several image areas of the image of the object are projected simultaneously onto the detector; these image areas are filtered through the filter areas differently.
The device includes a control means, which is advantageously provided to control one, several or all of the above-mentioned process steps.
The invention will be explained in more detail on the basis of exemplary embodiments, which are depicted in the drawings.
The autofocus device 2 includes a light source 12, which makes light available for the autofocus method. It may also provide the light for the fluorescence analysis, wherein, as a rule, it is more expedient for the optical imaging system 4 to have another light source (not shown) for this. The light source 12 has a light generator 14, e.g., an LED (light emitting diode), and optics 16 for shaping the radiated light, which may include a light diffuser. A diaphragm 18 with an opening pattern generates a one-dimensional or two-dimensional light source pattern, which is expediently symmetrical to an optical axis 20 of an optical system 22, which may include additional optical elements 24 and an objective 26 of the optical imaging system 4 besides the optics 16. A spatially defined light source may also replace the elements 16 and 18. A means 28, e.g., an aperture, separates the illumination of the sample 6 from the light source 12 into several light paths, which run separated from one another from the means 28 to the sample 6 and are brought to a common measurement light focus in the sample 6 (illumination paths). The means 28 may alternatively be attached in the detection path (see below) between elements 30 and 46, in particular when focusing on scattering objects.
Light from the light source 12 is directed to the objective 26 of the optical imaging system 4 via two beam splitters 30, 32 in the form of dichroic or semi-transparent mirrors; the optical imaging system is mounted in a microscope housing 34 and focuses the light on the sample 6. To do so, the objective 26 has an optical element 36, e.g., a lens, which is movable in a controlled manner along the optical axis 20 of the objective 26 by means of an actuator 38. Controlling the position of the optical element 36 and therefore of the focus in the sample 6 is accomplished by the control means 10. The actuator itself may include an independent distance meter.
Light reflected from the sample 6 passes through the objective 26 in the opposite direction, as indicated by a dashed arrow, and is guided via the beam splitter 32, on the one hand, to optics 40 and to the image detector 8 and, on the other hand, via the beam splitter 30 and additional optics 42 to a detector 44, which includes several detector elements (detection path). The detector elements may be individual sensors, e.g., photodiodes, or a lattice of sensors. Arranged in front of the detector 44 is a diaphragm of the optical system 22 with an aperture 46, which is shaped in accordance with the aperture of the diaphragm 18 and is arranged in the image plane of the optical system 22, in which an image of the sample 6, and therefore of the light source pattern projected on the sample 6, is generated. The diaphragm opening 46 may include one or several openings and is designated in the following only as aperture 46. The detector 44 supplies its signals to the control means 10, which evaluates them and uses them as a control or regulation input for controlling the actuator 38. In addition, the control means may process the independent distance signal of the actuator 38 and optionally use it for regulation.
Light from both light paths 48, 50 is focused at a punctiform measurement light focus 52 in the sample 6, which may have the shape of the light source and, e.g., is punctiform, is elongated corresponding to a slit-shaped light source or has any other shape. Since both the measuring light from the light source 12 and the light for examining the sample are guided through the objective 26, the measurement light focus 52 may be in the focus of the camera or the optical imaging system 4, which may be a focus plane. However, it is also possible for the measurement light focus 52 to be removed from a focus 56 of the camera by a pre-known distance 54.
The typical sample 6 includes a specimen slide 58 on which biological sample material 60 is applied, which is covered with a thin transparent cover slip 62. This sample 6 reflects incident light at three interfaces 64, 66, 68, namely the strongly reflecting air/glass interface 64, the considerably less strongly reflecting glass/sample material interface 66 and the sample material/glass interface 68 (which is not considered further in the following), wherein, in the case of very thin sample materials, the signals form a combination of the reflections from the interfaces 66 and 68. In this case, the glass/sample material interface 66 forms the target focus plane 70 described in this first exemplary embodiment, into which the measurement light focus 52 is supposed to be guided by the autofocus method.
The autofocus method carried out for this purpose is described in the following with reference to the drawings.
The portions of the two light paths 48, 50 that are incident on the sample 6 are depicted by thin dots and are directed at the measurement light focus 52, which is in the specimen slide 58, i.e., beneath the target focus plane 70, which is identical to interface 66. The different light paths from the different interfaces 64, 66 to the aperture 46 or to the detector elements 72, 74 are depicted in different ways. The light path of the main reflection that is reflected from the strongly reflecting interface 64 is represented by solid lines and the light path of the light that is reflected by the less strongly reflecting interface 66 is represented by dashed lines. It is evident that, firstly, no light or negligibly little light is reflected in the measurement light focus 52 and, secondly, the light reflected by the interfaces 64, 66 misses the aperture 46 so that no light from it reaches the detector elements 72, 74.
With a further movement of the sample 6 downward or of the measurement light focus 52 in the sample 6 upwards, the measurement light focus 52 reaches the interface layer 66 and the target focus plane 70, as depicted in the drawings.
The aperture 46 is arranged in the image plane of the objective 26. Light reflected from the measurement light focus 52 passes through the aperture 46, expediently to an equal extent from both light paths 48, 50. The aperture 46 in this case is arranged so that light which is reflected from above or below the measurement light focus 52 passes through the aperture 46 from the two light paths 48, 50 to an unequal extent. An equally strong illumination of the detector elements 72, 74 therefore means that one of the interface layers 64, 66 lies in the measurement light focus. The aperture in this case is advantageously only so large that light from an interface layer 64, 66 which is further than 100 μm away from the measurement light focus 52 cannot pass through the aperture 46 from any of the light paths 48, 50.
The aperture 46 makes it possible to select the light from different light paths according to the optical path length. Similarly, a selection of the light from different light paths is made possible according to their different directions toward the detector elements 72, 74.
To automatically focus the sample 6, first of all the light generator 14 of the autofocus light source 12 is switched on and the objective 26 or its optical element 36, which is moveable via the actuator 38, is moved into its initial position (in the figures completely downward in the direction of the sample 6), so that the measurement light focus 52 is located within the sample 6, expediently within the specimen slide 58.
Now the actuator 38 is moved in such a way that the measurement light focus 52 is moved completely through the sample material 60 and through the target focus plane 70. At the same time, the signals 76, 78 of the detector elements 72, 74 are continuously recorded and a position signal of the actuator 38 is expediently recorded as well. To begin with, the signal 76 of the detector element 72 increases and then quickly drops again. Then the signal 78 of the detector element 74 increases and drops again, both according to the incidence of light through the aperture 46 as described above.
In particular, the position of the intersection of the flanks of the signals 76, 78, called the target position in the following, is recorded; at this position the measurement light focus 52 is located in the target focus plane 70. This target position is detected by the control means 10, which is connected to the actuator 38; the actuator 38 transmits its position or that of the optical element 36 to the control means 10 continuously or at the request of the control means 10.
The renewed sharp increase, first of the signal 76 and then of the signal 78, above a limit value g is taken as a sign and orientation that the measurement light focus 52 is approaching the strongly reflecting interface 64 and therefore is located above the target focus plane 70. The movement of the measurement light focus 52 upwards is stopped.
Now the actuator 38 may be adjusted in a simple process step in accordance with the detected target position and the sample 6 is focused very swiftly. The measurement light focus 52 is adjusted to the target focus plane 70, and thus also the focus of the microscope 4 is adjusted to it, if the measurement light focus 52 lies in this focus. Otherwise, the focus is adjusted to a desired plane, which is removed by a known distance from the target focus plane 70.
A more precise focusing is achieved if the movement of the measurement light focus 52 is reversed and this time the measurement light focus 52 is guided into the sample material 60 more slowly, as shown in the drawings.
An alternative method may begin with the measurement light focus 52 located above the sample 6, from where it is run into the sample 6. The first incident main reflection from the glass/air interface layer 64 is clearly identified. Because the thickness of the cover slip 62 is known, e.g., 170 μm, the measurement light focus 52 may be moved downwards swiftly by this or a somewhat shorter distance. Then the movement speed can be reduced and the measurement light focus 52 moved further downwards until the signals 76, 78 are equally strong.
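Sketched as an assumed sequence (the function names, step sizes and the 0.9 safety factor are illustrative only; the 170 μm cover slip thickness is taken from the example above), this alternative run could look as follows:

```python
def autofocus_from_above(move_focus_by, read_signals, main_reflection_limit,
                         cover_slip_thickness_um=170.0, fine_step_um=0.2,
                         tolerance=1e-3, max_fine_steps=2000):
    """Alternative focusing run starting above the sample.

    move_focus_by(dz_um) : move the measurement light focus by dz_um (downwards > 0)
    read_signals()       : -> (s1, s2), current signals of the two detector elements
    """
    # 1) coarse approach: step downwards until the strong main reflection of the
    #    glass/air interface is clearly identified
    while max(read_signals()) < main_reflection_limit:
        move_focus_by(1.0)
    # 2) skip most of the known cover slip thickness in one fast movement
    move_focus_by(0.9 * cover_slip_thickness_um)
    # 3) reduced speed: move further downwards until both signals are equally strong
    for _ in range(max_fine_steps):
        s1, s2 = read_signals()
        if abs(s1 - s2) < tolerance:
            return True
        move_focus_by(fine_step_um)
    return False
```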
A regulation to the target position based on the signals 76, 78 is explained in the following with reference to the drawings.
After the adjustment or setting of the focus position, the light generator 14 is switched off and the focus position is regulated or maintained by means of the position signal of the actuator 38. The advantage of this is that the autofocus light pattern is not recorded by the camera during the exposure. Optionally, the light generator 14 may remain switched on continuously and regulation is carried out according to the differential signal 82.
Now, images of the sample 6 or of the sample material 60 may be recorded, if need be at several z-positions. Said positions may be approached by a corresponding control of the actuator 38. It is also possible to reach these via a signal shift of one or both signals 76, 78.
To record several images of a large sample 6, said sample is moved in the x-y direction 88, i.e., perpendicular to the z-axis or the optical axis 20, as indicated in the drawings.
A ray reflected or scattered by the sample 106 is directed in an optical path (indicated with a dashed arrow) through the dichroic mirror 114 and optical elements 124 (indicated only generally) of the optical system 112 into a camera 126, which features a detector 128 with a light filter 130. The detector 128 includes a plurality of detector elements 132 arranged in a two-dimensional matrix, which are designed as CCD elements and attached on a chip. The light filter 130 is a spectral filter with several filter areas 134 that are different in terms of spectral filtering, which are also arranged on the chip and in the optical path directly in front of the detector elements 132.
The specimen stage 104 and along with it the sample 106 are movable with the aid of an actuator 136 perpendicularly to the optical axis 122 of the objective 116, as indicated by arrows 138, so that several shots of the sample 106 may be taken in different positions of the sample 106 in relation to the microscope 102. The actuator 136 may be triggered by a control means 140 of the microscope 102 in such a way that a travel distance of the sample 106 from shot to shot may be adjusted to a predetermined value or a value calculated by the control means. The control means 140 may also be a control means of the camera 126 or an additional control means of the microscope 102 outside of the camera 126.
By means of the control means 140, an actuator 142 of the light filter 130 and/or an actuator 144 of the detector 128 may be triggered as an alternative or in addition to the actuator 136, so that the filter areas 134 and/or the detector elements 132 are movable relative to the optical system 112 perpendicular to the optical axis 122 of the optical path incident in the camera 126. An image of an object of the sample 106 may hereby migrate in one or more ways over the light filter 130 and/or the detector 128.
An alternative embodiment of a detector 146 with several detector regions 148, 150, 152 is shown in the drawings.
Two dichroic beam splitters 154, 156 direct a ray reflected by the sample 106, divided into three spectral ranges, to the detector regions 148, 150, 152. The detector regions 148, 150, 152 are respectively sensitive in only one of the spectral ranges or are more sensitive there than in the other spectral ranges. Arranged respectively in front of every detector region 148, 150, 152 is a filter area 158, 160, 162, wherein the filter areas 158, 160, 162 are only transparent in one of the spectral ranges or are more transparent there than in the other spectral ranges. Their transparency is adjusted spectrally to the respective detector region 148, 150, 152 allocated to them. One or all of the filter areas 158, 160, 162 may be divided in turn into sub-areas that are different in terms of spectral filtering, as shown in the drawings.
The detector 128 and its 11×15 rectangular detector elements 132 are depicted with dashed lines for the sake of clarity, whereas the light filter 130 with its 5 strip-shaped filter areas 134 is depicted with solid lines. The strips of the filter areas 134 are arranged perpendicularly to the movement direction of the specimen stage, which is depicted by an arrow 138. To better differentiate the lines, the image of the sample is depicted by dashed-and-dotted lines.
For a next image, the image of the sample 106 is moved further by the distance of the width of the filter areas 134, wherein the width is viewed in the direction of the movement of the sample 106. Now another image section of the sample 106 is taken, wherein this image section covers another sample section and other objects 166. The position of the filter areas 134 in the image sections remains the same, but not relative to the sample sections and objects 166. In the second image, the sample receptacles 164 that appear again are depicted in another spectrum, i.e., in another color.
The sample 106 is depicted in its entirety, in that the sample 106 is depicted image-section-by-image-section on the detector 128 and several partially overlapping images of the sample 106 and the objects 166 are made. In this case, at least as many images as there are different filter areas 134 are taken. A multi-color image of the sample 106 or of an object 166 is respectively generated, e.g., by the control means 140, from as many overlapping images as there are different filter areas 134.
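The assembly of the multi-color image could be sketched as follows; the data layout, the assumed movement of exactly one strip width per shot and the function name are illustrative only:

```python
import numpy as np

def assemble_color_image(shots, strip_width, shift_per_shot, n_filter_areas):
    """Assemble a multi-color image from successive strip-filtered shots.

    shots          : list of 2-D detector read-outs (rows x columns)
    strip_width    : width of one filter strip in detector columns
    shift_per_shot : assumed movement of the sample image, in columns, between shots
    n_filter_areas : number of differently filtering strips

    Returns an array indexed by (row, sample column, filter area): every sample
    column that has passed under all strips carries one value per filter area.
    """
    rows, cols = np.asarray(shots[0]).shape
    total_cols = cols + shift_per_shot * (len(shots) - 1)
    cube = np.zeros((rows, total_cols, n_filter_areas))
    for k, frame in enumerate(shots):
        frame = np.asarray(frame, dtype=float)
        for col in range(cols):
            filter_area = (col // strip_width) % n_filter_areas  # strip over this column
            cube[:, col + k * shift_per_shot, filter_area] = frame[:, col]
    return cube
```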
The shots are evaluated in this case by an evaluation means, which may be the control means 140, which is connected to the detector 128 by signals. This process identifies when an object 168 is of special importance and should be depicted in high resolution. If this requirement is detected, then the sample 106 is moved from one shot to the next by only less than one pixel length, i.e., the length of a detector element 132.
As an alternative to a movement of the sample 106 to the microscope 102, the light filter 130 and/or the detector 128 may be moved relative to the sample 106 and for example relative to the microscope housing 120.
In a further embodiment, the charge of the individual detector elements within a filter area may be displaced with the image of the sample detector-element-by-detector-element and read out only after one or more displacements. Or the charges that are allocated to a sample position during the displacement of the sample image within a filter area may be assigned to a pixel spectral value. In this way, the charge generated by the light from the sample may be accumulated over a longer time.
The z-direction is the direction of the optical axis 122 at the entrance to the camera 126. In addition, the absorption A of the light filter 170 is depicted. The higher the transmission of the light filter 170, the smaller the absorption A. In the hatched area, the absorption is ideally close to 100% and the light filter 170 is not transparent. The light filter 170 is a cut-off filter with an edge 172 at a specific wavelength λ. The wavelength λ is a function of the position of the edge 172 in the x-direction of the filter 170. The wavelength λ of the edge 172 is higher further to the right in the filter than further to the left. In the depicted example, the change of the wavelength of the edge per distance of the light filter is constant in the x-direction. Other relations with linear or non-linear changes are also conceivable. In the case of the light filter 170, a great many, in the limit infinitely many, filter areas that are different in terms of spectral filtering are located very close or infinitely close side by side.
When using the light filter 170 instead of the light filter 130, the color resolution of the overall image composed from the shots may be controlled by the choice of the movement increment from one shot to the next.
If two cut-off filters 174, 176 with an opposing edge profile are arranged in succession, a spatial transmission window is generated, which may be enlarged and reduced both spatially and spectrally by moving the cut-off filters 174, 176 relative to one another.
Adapting the light filter 130 to the detector 128 is shown in the drawings.
Another possibility for achieving the most uniform possible exposure of the shots of the sample 106 over the entire relevant spectral range is undertaking an electronic adaptation of the detector elements 132 to the filter area 134 located in front of them. In the case of a less transmissive filter area 134, a detector element 132 allocated to this filter area 134 may be triggered in a different manner than a detector element 132 which is allocated to a more transmissive filter area 134. The different triggering may be achieved by a different adjustment of the gain and/or the integration time of the detector elements 132. A pixel binning, i.e., combining two or more pixels or detector elements 132, is conceivable, just like a subsampling, i.e., a reading out of only every nth detector element 132, with n=2, 3, etc. The corresponding control may be undertaken by the control means 140.
In an especially advantageous exemplary embodiment, in the case of the electronic adjustment of the detector elements 132, a displacement of the filter areas 134 in front of the detector elements 132 is taken into consideration. To this end, the position of the light filter 130 in relation to the detector 128 must be known, e.g., through position signals from one of the actuators 142, 144.
Number | Date | Country | Kind |
---|---|---|---|
10 2009 012 292 | Mar 2009 | DE | national |
10 2009 012 293 | Mar 2009 | DE | national |
The application is a divisional of co-pending U.S. patent application Ser. No. 13/255,827, filed Sep. 9, 2011, which claims benefit from U.S. National Phase application under 35 U.S.C. § 371 of International Application No. PCT/IB2010/000518, filed Mar. 11, 2010, entitled AUTOFOCUS METHOD AND AUTOFOCUS DEVICE and incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
2051051 | Lilienfeld | Aug 1936 | A |
3309262 | Copeland et al. | Mar 1967 | A |
3525803 | Smart | Aug 1970 | A |
3762798 | Grubb et al. | Oct 1973 | A |
3765851 | White | Oct 1973 | A |
3862909 | Copeland | Jan 1975 | A |
4000417 | Adkisson et al. | Dec 1976 | A |
4079248 | Lehureau et al. | Mar 1978 | A |
4089989 | White et al. | May 1978 | A |
4148752 | Burger et al. | Apr 1979 | A |
4404683 | Kobayashi et al. | Sep 1983 | A |
4477185 | Berger et al. | Oct 1984 | A |
4595829 | Neumann et al. | Jun 1986 | A |
4673988 | Jansson et al. | Jun 1987 | A |
4684799 | Emoto et al. | Aug 1987 | A |
4737022 | Faltermeier et al. | Apr 1988 | A |
4760385 | Jansson et al. | Jul 1988 | A |
4761075 | Matsushita et al. | Aug 1988 | A |
4836667 | Ozeki | Jun 1989 | A |
4849177 | Jordan | Jul 1989 | A |
4958920 | Jorgens et al. | Sep 1990 | A |
4962264 | Forester | Oct 1990 | A |
4984229 | Nedvidek | Jan 1991 | A |
5180606 | Stokes et al. | Jan 1993 | A |
5287272 | Rutenberg et al. | Feb 1994 | A |
5297034 | Weinstien | Mar 1994 | A |
5297215 | Yamagishi | Mar 1994 | A |
5311426 | Donohue et al. | May 1994 | A |
5367401 | Saulietis | Nov 1994 | A |
5428690 | Bacus et al. | Jun 1995 | A |
5473706 | Bacus et al. | Dec 1995 | A |
5530237 | Sato et al. | Jun 1996 | A |
5532874 | Stein | Jul 1996 | A |
5546323 | Bacus et al. | Aug 1996 | A |
5561556 | Weissman et al. | Oct 1996 | A |
5581637 | Cass et al. | Dec 1996 | A |
5655028 | Soli et al. | Aug 1997 | A |
5659174 | Kaneoka et al. | Aug 1997 | A |
5675141 | Kukihara | Oct 1997 | A |
5686960 | Sussman et al. | Nov 1997 | A |
5696589 | Bernacki | Dec 1997 | A |
5737084 | Ishihara | Apr 1998 | A |
5768033 | Brock | Jun 1998 | A |
5793969 | Kamentsky et al. | Aug 1998 | A |
5836877 | Zavislan | Nov 1998 | A |
5864138 | Miyata et al. | Jan 1999 | A |
5891619 | Zakim et al. | Apr 1999 | A |
5924074 | Evans | Jun 1999 | A |
5947167 | Bogen et al. | Sep 1999 | A |
6008892 | Kain et al. | Dec 1999 | A |
6031930 | Bacus et al. | Feb 2000 | A |
6043475 | Shimada et al. | Mar 2000 | A |
6061176 | Shih | May 2000 | A |
6078681 | Silver | Jun 2000 | A |
6091075 | Shibata et al. | Jul 2000 | A |
6091842 | Domanik et al. | Jul 2000 | A |
6101265 | Bacus et al. | Aug 2000 | A |
6130745 | Manian et al. | Oct 2000 | A |
6147797 | Lee | Nov 2000 | A |
6205235 | Roberts | Mar 2001 | B1 |
6208374 | Clinch | Mar 2001 | B1 |
6215892 | Douglass et al. | Apr 2001 | B1 |
6226352 | Salb | May 2001 | B1 |
6226392 | Bacus et al. | May 2001 | B1 |
6248995 | Tanaami et al. | Jun 2001 | B1 |
6272235 | Bacus et al. | Aug 2001 | B1 |
6309607 | Johnston et al. | Oct 2001 | B1 |
6396941 | Bacus et al. | May 2002 | B1 |
6404906 | Bacus et al. | Jun 2002 | B2 |
6466690 | Bacus et al. | Oct 2002 | B2 |
6522774 | Bacus et al. | Feb 2003 | B1 |
6529271 | Engelhardt | Mar 2003 | B1 |
6606413 | Zeineh | Aug 2003 | B1 |
6671393 | Hays et al. | Dec 2003 | B2 |
6674881 | Bacus et al. | Jan 2004 | B2 |
6674884 | Bacus et al. | Jan 2004 | B2 |
6678398 | Wolters et al. | Jan 2004 | B2 |
6684092 | Zavislan | Jan 2004 | B2 |
6711283 | Soenksen | Mar 2004 | B1 |
6735531 | Rhett et al. | May 2004 | B2 |
6775402 | Bacus et al. | Aug 2004 | B2 |
6800249 | de la Torre-Bueno | Oct 2004 | B2 |
6800853 | Ohkura | Oct 2004 | B2 |
6812446 | Kreh | Nov 2004 | B2 |
6834237 | Noergaard et al. | Dec 2004 | B2 |
6838650 | Toh | Jan 2005 | B1 |
6847481 | Ludl et al. | Jan 2005 | B1 |
6847729 | Clinch et al. | Jan 2005 | B1 |
6947583 | Ellis et al. | Sep 2005 | B2 |
6959720 | Kurihara et al. | Nov 2005 | B2 |
6982741 | Fiedler | Jan 2006 | B2 |
6993169 | Wetzel et al. | Jan 2006 | B2 |
7009638 | Gruber et al. | Mar 2006 | B2 |
7016109 | Nakagawa | Mar 2006 | B2 |
7027627 | Levin et al. | Apr 2006 | B2 |
7031507 | Bacus et al. | Apr 2006 | B2 |
7071969 | Stimson | Jul 2006 | B1 |
7098634 | Yu | Aug 2006 | B1 |
7110586 | Bacus et al. | Sep 2006 | B2 |
7110645 | Birk et al. | Sep 2006 | B2 |
7133545 | Douglass et al. | Nov 2006 | B2 |
7136518 | Griffin et al. | Nov 2006 | B2 |
7141802 | Takeyama et al. | Nov 2006 | B2 |
7146372 | Bacus et al. | Dec 2006 | B2 |
7149332 | Bacus et al. | Dec 2006 | B2 |
7171030 | Foran et al. | Jan 2007 | B2 |
7194118 | Harris et al. | Mar 2007 | B1 |
7196300 | Watkins et al. | Mar 2007 | B2 |
7209287 | Lauer | Apr 2007 | B2 |
7212660 | Wetzel et al. | May 2007 | B2 |
7224839 | Zeineh | May 2007 | B2 |
7233340 | Hughes et al. | Jun 2007 | B2 |
7248403 | Nakagawa | Jul 2007 | B2 |
7250963 | Yuri et al. | Jul 2007 | B2 |
7292251 | Gu | Nov 2007 | B1 |
7297910 | Fomitchov | Nov 2007 | B2 |
7301133 | Weiss | Nov 2007 | B2 |
7349482 | Kim | Mar 2008 | B2 |
7359548 | Douglass et al. | Apr 2008 | B2 |
7391894 | Zeineh | Jun 2008 | B2 |
7394482 | Olschewski | Jul 2008 | B2 |
7394979 | Luther et al. | Jul 2008 | B2 |
7396508 | Richards et al. | Jul 2008 | B1 |
7400342 | Gaida et al. | Jul 2008 | B2 |
7400983 | Feingold et al. | Jul 2008 | B2 |
7406215 | Clune et al. | Jul 2008 | B2 |
7421102 | Wetzel et al. | Sep 2008 | B2 |
7426345 | Takamatsu et al. | Sep 2008 | B2 |
7428325 | Douglass et al. | Sep 2008 | B2 |
7433026 | Wolpert et al. | Oct 2008 | B2 |
7456377 | Zeineh et al. | Nov 2008 | B2 |
7463761 | Eichhorn et al. | Dec 2008 | B2 |
7470541 | Copeland et al. | Dec 2008 | B2 |
7482600 | Seyfried | Jan 2009 | B2 |
7483554 | Kotsianti et al. | Jan 2009 | B2 |
7486329 | Endo | Feb 2009 | B2 |
7502519 | Eichhorn et al. | Mar 2009 | B2 |
7542596 | Bacus et al. | Jun 2009 | B2 |
7550699 | Marshall | Jun 2009 | B1 |
7584019 | Feingold et al. | Sep 2009 | B2 |
7596249 | Bacus et al. | Sep 2009 | B2 |
7602524 | Eichhorn et al. | Oct 2009 | B2 |
7623697 | Hughes et al. | Nov 2009 | B1 |
7630113 | Sase et al. | Dec 2009 | B2 |
7633616 | Hing | Dec 2009 | B2 |
7642093 | Tseung et al. | Jan 2010 | B2 |
7653300 | Fujiyoshi et al. | Jan 2010 | B2 |
7657070 | Lefebvre | Feb 2010 | B2 |
7663078 | Virag et al. | Feb 2010 | B2 |
7677289 | Hayworth et al. | Mar 2010 | B2 |
7689024 | Eichhorn et al. | Mar 2010 | B2 |
7738688 | Eichhorn et al. | Jun 2010 | B2 |
7756309 | Gholap et al. | Jul 2010 | B2 |
7756357 | Yoneyama | Jul 2010 | B2 |
7778485 | Zeineh et al. | Aug 2010 | B2 |
7822257 | Endo et al. | Oct 2010 | B2 |
7840300 | Harker | Nov 2010 | B2 |
7856131 | Bacus et al. | Dec 2010 | B2 |
7860292 | Eichhorn et al. | Dec 2010 | B2 |
7864414 | Sase et al. | Jan 2011 | B2 |
7869641 | Wetzel et al. | Jan 2011 | B2 |
7873193 | De La Torre-Bueno et al. | Jan 2011 | B2 |
7876948 | Wetzel et al. | Jan 2011 | B2 |
RE42220 | Clinch et al. | Mar 2011 | E |
7901941 | Tseung et al. | Mar 2011 | B2 |
7912267 | Kawano et al. | Mar 2011 | B2 |
7916916 | Zeineh | Mar 2011 | B2 |
7920163 | Kossin | Apr 2011 | B1 |
7925067 | Bacus et al. | Apr 2011 | B2 |
7944608 | Hayashi et al. | May 2011 | B2 |
7949161 | Kawanabe et al. | May 2011 | B2 |
7957057 | Sase et al. | Jun 2011 | B2 |
7967057 | Kunii et al. | Jun 2011 | B2 |
7978894 | Soenksen et al. | Jul 2011 | B2 |
8000560 | Shirota | Aug 2011 | B2 |
8000562 | Morales et al. | Aug 2011 | B2 |
8036868 | Zeineh et al. | Oct 2011 | B2 |
8074547 | Ito et al. | Dec 2011 | B2 |
8077959 | Dekel et al. | Dec 2011 | B2 |
8085296 | Yuguchi et al. | Dec 2011 | B2 |
8094902 | Crandall et al. | Jan 2012 | B2 |
8094914 | Iki et al. | Jan 2012 | B2 |
8098279 | Sase et al. | Jan 2012 | B2 |
8098956 | Tatke et al. | Jan 2012 | B2 |
8103082 | Olson et al. | Jan 2012 | B2 |
8125534 | Shimonaka | Feb 2012 | B2 |
8159547 | Kawashima | Apr 2012 | B2 |
8174763 | Guiney et al. | May 2012 | B2 |
8187536 | Graupner et al. | May 2012 | B2 |
8199358 | Eichhorn et al. | Jun 2012 | B2 |
8203575 | Molnar et al. | Jun 2012 | B2 |
8283176 | Bland et al. | Oct 2012 | B2 |
8304704 | Hing et al. | Nov 2012 | B2 |
8305434 | Nakatsuka et al. | Nov 2012 | B2 |
8306298 | Bacus et al. | Nov 2012 | B2 |
8306300 | Bacus et al. | Nov 2012 | B2 |
8339703 | Knebel | Dec 2012 | B2 |
8350904 | Fujimoto et al. | Jan 2013 | B2 |
8366857 | Hayworth et al. | Feb 2013 | B2 |
8385619 | Soenksen | Feb 2013 | B2 |
8385686 | Sano | Feb 2013 | B2 |
8388891 | Lefebvre | Mar 2013 | B2 |
8394635 | Key et al. | Mar 2013 | B2 |
8396669 | Cocks | Mar 2013 | B2 |
8463741 | Ehlke et al. | Jun 2013 | B2 |
8473035 | Frangioni | Jun 2013 | B2 |
8476585 | Galloway | Jul 2013 | B2 |
8501435 | Gustafsson et al. | Aug 2013 | B2 |
8565480 | Eichhorn et al. | Oct 2013 | B2 |
8565503 | Eichhorn et al. | Oct 2013 | B2 |
8582489 | Eichhorn et al. | Nov 2013 | B2 |
8582849 | Eichhorn et al. | Nov 2013 | B2 |
8673642 | Key et al. | Mar 2014 | B2 |
8687858 | Walter et al. | Apr 2014 | B2 |
8725237 | Bryant-Greenwood et al. | May 2014 | B2 |
8730315 | Yoneyama | May 2014 | B2 |
8744213 | Tatke et al. | Jun 2014 | B2 |
8747746 | Lefebvre | Jun 2014 | B2 |
8771978 | Ragan | Jul 2014 | B2 |
8788217 | Feingold et al. | Jul 2014 | B2 |
8796038 | Williamson, IV et al. | Aug 2014 | B2 |
8827760 | Ushibo et al. | Sep 2014 | B2 |
8923597 | Eichhorn et al. | Dec 2014 | B2 |
9310598 | Hing et al. | Apr 2016 | B2 |
20010035752 | Kormos et al. | Nov 2001 | A1 |
20020169512 | Stewart | Nov 2002 | A1 |
20020176160 | Suzuki et al. | Nov 2002 | A1 |
20020176161 | Yoneyama et al. | Nov 2002 | A1 |
20030048931 | Johnson et al. | Mar 2003 | A1 |
20030098921 | Endo | May 2003 | A1 |
20030112330 | Yuri et al. | Jun 2003 | A1 |
20030112504 | Czarnetzki et al. | Jun 2003 | A1 |
20030124729 | Christensen et al. | Jul 2003 | A1 |
20030133009 | Brown | Jul 2003 | A1 |
20030142882 | Beged-Dov et al. | Jul 2003 | A1 |
20030156276 | Bowes | Aug 2003 | A1 |
20040021936 | Czarnetzki et al. | Feb 2004 | A1 |
20040027462 | Hing | Feb 2004 | A1 |
20040080758 | Ban et al. | Apr 2004 | A1 |
20040090667 | Gartner et al. | May 2004 | A1 |
20040113043 | Ishikawa et al. | Jun 2004 | A1 |
20040129858 | Czarnetzki et al. | Jul 2004 | A1 |
20040135061 | Kreh | Jul 2004 | A1 |
20040141660 | Barth et al. | Jul 2004 | A1 |
20050057812 | Raber | Mar 2005 | A1 |
20050073649 | Spector | Apr 2005 | A1 |
20050090017 | Morales | Apr 2005 | A1 |
20050092893 | Neuvonen | May 2005 | A1 |
20050094262 | Spediacci et al. | May 2005 | A1 |
20050112537 | Wu | May 2005 | A1 |
20050211874 | Takeyama et al. | Sep 2005 | A1 |
20050219688 | Kawano et al. | Oct 2005 | A1 |
20050221351 | Jekwam | Oct 2005 | A1 |
20050239113 | Ryu et al. | Oct 2005 | A1 |
20050248837 | Sase | Nov 2005 | A1 |
20050258335 | Oshiro et al. | Nov 2005 | A1 |
20060039583 | Bickert et al. | Feb 2006 | A1 |
20060045388 | Zeineh | Mar 2006 | A1 |
20060077536 | Bromage et al. | Apr 2006 | A1 |
20060088940 | Feingold et al. | Apr 2006 | A1 |
20060098861 | See et al. | May 2006 | A1 |
20060146283 | Baumann et al. | Jul 2006 | A1 |
20060164623 | Wagner et al. | Jul 2006 | A1 |
20060171560 | Manus | Aug 2006 | A1 |
20060179992 | Kermani | Aug 2006 | A1 |
20070025606 | Gholap et al. | Feb 2007 | A1 |
20070091324 | Paul et al. | Apr 2007 | A1 |
20070098237 | Yoo et al. | May 2007 | A1 |
20070102620 | Bublitz et al. | May 2007 | A1 |
20070164194 | Kurata et al. | Jul 2007 | A1 |
20070198001 | Bauch et al. | Aug 2007 | A1 |
20070207061 | Yang et al. | Sep 2007 | A1 |
20070224699 | Gates | Sep 2007 | A1 |
20070285768 | Kawanabe et al. | Dec 2007 | A1 |
20080002252 | Weiss et al. | Jan 2008 | A1 |
20080020128 | van Ryper et al. | Jan 2008 | A1 |
20080054156 | Fomitchov | Mar 2008 | A1 |
20080095424 | Iki et al. | Apr 2008 | A1 |
20080095467 | Olszak et al. | Apr 2008 | A1 |
20080142708 | Workman et al. | Jun 2008 | A1 |
20080180794 | Tafas et al. | Jul 2008 | A1 |
20080240613 | Dietz et al. | Oct 2008 | A1 |
20080283722 | Uchiyama et al. | Nov 2008 | A1 |
20090040322 | Leberl et al. | Feb 2009 | A1 |
20090046298 | Betzig | Feb 2009 | A1 |
20090116101 | Tafas et al. | May 2009 | A1 |
20090140169 | Niehren | Jun 2009 | A1 |
20090195688 | Henderson | Aug 2009 | A1 |
20100000383 | Koos et al. | Jan 2010 | A1 |
20100020157 | Jelinek et al. | Jan 2010 | A1 |
20100039507 | Imade | Feb 2010 | A1 |
20100074489 | Bacus et al. | Mar 2010 | A1 |
20100093022 | Hayworth et al. | Apr 2010 | A1 |
20100102571 | Yang | Apr 2010 | A1 |
20100109725 | Yun et al. | May 2010 | A1 |
20100118133 | Walter et al. | May 2010 | A1 |
20100118393 | Lin | May 2010 | A1 |
20100134655 | Kuroiwa | Jun 2010 | A1 |
20100141751 | Uchida | Jun 2010 | A1 |
20100141752 | Yamada | Jun 2010 | A1 |
20100141753 | Olson et al. | Jun 2010 | A1 |
20100171809 | Fujiyoshi | Jul 2010 | A1 |
20100177166 | Eichhorn et al. | Jul 2010 | A1 |
20100188738 | Epple et al. | Jul 2010 | A1 |
20100194873 | Viereck et al. | Aug 2010 | A1 |
20100201800 | Yamamoto et al. | Aug 2010 | A1 |
20100225668 | Tatke et al. | Sep 2010 | A1 |
20100260407 | Eichhorn et al. | Oct 2010 | A1 |
20100279342 | Kijima et al. | Nov 2010 | A1 |
20100295932 | Yakomachi et al. | Nov 2010 | A1 |
20100310139 | Kimura | Dec 2010 | A1 |
20110017902 | Hing et al. | Jan 2011 | A1 |
20110037847 | Soenksen | Feb 2011 | A1 |
20110038523 | Boardman | Feb 2011 | A1 |
20110043663 | Tsuchiya | Feb 2011 | A1 |
20110064296 | Dixon | Mar 2011 | A1 |
20110074817 | Shinichi et al. | Mar 2011 | A1 |
20110102571 | Yoneyama | May 2011 | A1 |
20110109735 | Otsuka | May 2011 | A1 |
20110145755 | Bacus et al. | Jun 2011 | A1 |
20110181622 | Bacus et al. | Jul 2011 | A1 |
20110221881 | Shirota et al. | Sep 2011 | A1 |
20110316993 | Chen et al. | Dec 2011 | A1 |
20110316999 | Yoneyama et al. | Dec 2011 | A1 |
20120002043 | Nitta | Jan 2012 | A1 |
20120002892 | Eichhorn et al. | Jan 2012 | A1 |
20120038979 | Hing et al. | Feb 2012 | A1 |
20120044342 | Hing et al. | Feb 2012 | A1 |
20120069171 | Kodaira et al. | Mar 2012 | A1 |
20120069344 | Liu | Mar 2012 | A1 |
20120076391 | Dietz et al. | Mar 2012 | A1 |
20120076411 | Dietz et al. | Mar 2012 | A1 |
20120076436 | Dietz et al. | Mar 2012 | A1 |
20120081536 | Kuppig et al. | Apr 2012 | A1 |
20120114204 | Olson et al. | May 2012 | A1 |
20120120225 | Maddison | May 2012 | A1 |
20120127297 | Baxi et al. | May 2012 | A1 |
20120163680 | Lefebvre | Jun 2012 | A1 |
20120208184 | Ragan | Aug 2012 | A1 |
20120281931 | Eichhorn et al. | Nov 2012 | A1 |
20130003172 | Widzgowski et al. | Jan 2013 | A1 |
20130076886 | Ikeno et al. | Mar 2013 | A1 |
20130140459 | Galloway | Jun 2013 | A1 |
20130162802 | Soenksen | Jun 2013 | A1 |
20130164781 | Lefebvre | Jun 2013 | A1 |
20130182922 | Kil | Jul 2013 | A1 |
20130216451 | Hayworth et al. | Aug 2013 | A1 |
20130250090 | Morimoto | Sep 2013 | A1 |
20140030757 | Schiffenbauer | Jan 2014 | A1 |
20140049632 | Hemmer | Feb 2014 | A1 |
20140051158 | Nakajima et al. | Feb 2014 | A1 |
20140085453 | Yamane | Mar 2014 | A1 |
20140086463 | Meetz et al. | Mar 2014 | A1 |
20140087411 | Chow et al. | Mar 2014 | A1 |
20140098376 | Hashimshony et al. | Apr 2014 | A1 |
20140112560 | Soenksen | Apr 2014 | A1 |
20140118528 | Wolff et al. | May 2014 | A1 |
20140130613 | Adiga et al. | May 2014 | A1 |
20140137715 | Sneyders et al. | May 2014 | A1 |
20140273086 | Lefebvre | Sep 2014 | A1 |
20150015578 | Eichhorn et al. | Jan 2015 | A1 |
20150153552 | Loney et al. | Jun 2015 | A1 |
20150153555 | Loney et al. | Jun 2015 | A1 |
20150153556 | Loney et al. | Jun 2015 | A1 |
20150177504 | Bickert et al. | Jun 2015 | A1 |
20160216504 | Hing | Jul 2016 | A1 |
Number | Date | Country |
---|---|---|
2504245 | Nov 2006 | CA |
102782557 | Nov 2012 | CN |
102841079 | Dec 2012 | CN |
102009012293 | Mar 2009 | DE |
1447699 | Aug 2004 | EP |
2051051 | Apr 2009 | EP |
2110696 | Oct 2009 | EP |
2169379 | Mar 2010 | EP |
2620537 | Mar 1989 | FR |
03092 | Nov 1906 | GB |
59071018 | Apr 1984 | JP |
61248168 | Nov 1986 | JP |
S63206793 | Aug 1988 | JP |
09080138 | Mar 1997 | JP |
09133856 | May 1997 | JP |
9161068 | Jun 1997 | JP |
09218354 | Aug 1997 | JP |
2001281553 | Oct 2001 | JP |
2002031513 | Jan 2002 | JP |
200284554 | Mar 2002 | JP |
2006003543 | Jan 2006 | JP |
2006343595 | Dec 2006 | JP |
2009192824 | Feb 2008 | JP |
2008262100 | Oct 2008 | JP |
2009-036969 | Feb 2009 | JP |
201201392 | Jan 2012 | TW |
WO-0154052 | Jul 2001 | WO |
WO-2005015120 | Feb 2005 | WO |
WO-2008118886 | Oct 2008 | WO |
WO-2008141009 | Nov 2008 | WO |
WO-2010105015 | Sep 2010 | WO |
WO-2012024627 | Feb 2012 | WO |
Entry |
---|
Sakura Finetek U.S.A., Inc., “Non final office action”, U.S. Appl. No. 14/138,740, (dated Jul. 1, 2016). |
Haruhisa, S., et al., “Application of telepathology for improvement of therapeutic effects and economic efficiency, and development of new equipment for it”, Science Links Japan; http://sciencelinks.jp/j-east/article/200516/000020051605A0431066.php, Journal Code: N20051113, (2005), 166-125. |
Sakura Finetek U.S.A., “Extended Search Report”, European Application No. 14198636, (dated Sep. 30, 2015). |
Sakura Finetek U.S.A. Inc., Eurasian office action dated Jan. 21, 2014 for EA201001786. |
Sakura Finetek U.S.A., Inc., Final office action for Japanese App No. 2011-553548, (dated Sep. 2, 2014). |
Sakura Finetek U.S.A., Inc., Examination Report for Australian App No. 2011291517, (dated Jun. 19, 2014). |
Sakura Finetek U.S.A., Inc., PCT Search Report and Written Opinion dated Sep. 22, 2014 for International Application No. PCT/US2014/034477, 12 pages. |
Sakura Finetek U.S.A., Inc., Chinese Final Office Action for CN Application No. 201080017649.4, (dated Jul. 3, 2014). |
Sakura Finetek U.S.A., Inc., Non-Final Office Action for U.S. Appl. No. 13/212,955, (dated Oct. 31, 2014). |
Sakura Finetek U.S.A., Inc., European second office action for EP Appln. No. 10719379.9, (dated Nov. 6, 2014). |
Sakura Finetek U.S.A., Inc., et al., Chinese Office Action for CN 201180047558.X, (dated Nov. 15, 2014). |
Sakura Finetek U.S.A., Inc., et al., Australian Examination Report for App No. 2011291517, (dated Feb. 2, 2015). |
Sakura Finetek U.S.A., Inc., et al., European Office Action for EP App. No. 11749675.2, (dated Jan. 30, 2015). |
Sakura Finetek U.S.A., Inc., Final office action for U.S. Appl. No. 13/212,955, (dated Apr. 15, 2015). |
Sakura Finetek U.S.A., Inc., Non final office action for U.S. Appl. No. 13/255,827, (dated Apr. 8, 2015). |
Sakura Finetek U.S.A., Inc., Partial European search report for Application No. 14198636.4, (dated Apr. 28, 2015). |
Sakura Finetek U.S.A., Inc., Extended Search Report for EP15154503, (dated Jun. 19, 2015). |
Sakura Finetek U.S.A., Inc., Second office action dated Jul. 6, 2015 for Chinese Appln. No. 201180047558.X. |
Sakura Finetek U.S.A., Inc., International search report and written opinion for PCT/US2014/034477, (dated Sep. 22, 2014). |
Sakura Finetek U.S.A., Inc., International preliminary Report on Patentability for PCT/US2014/034477, (dated Oct. 29, 2015). |
Sakura Finetek U.S.A., Inc., Final office action, U.S. Appl. No. 13/255,827, (dated Oct. 20, 2015). |
Sakura Finetek U.S.A., Inc., Notice of rejection for Japanese Application No. 2013-525005, (dated Feb. 9, 2016). |
Sakura Finetek U.S.A., Inc., et al., Canadian Examiner's Report for CA 2,755,164, (dated Dec. 7, 2012). |
Sakura Finetek U.S.A., Inc., et al., International Preliminary Report on Patentability for PCT/US2011/048488, (dated Mar. 7, 2013). |
Sakura Finetek U.S.A., Inc., Office action for EPO Application No. 10719379.9, (dated Jul. 30, 2013). |
Sakura Finetek U.S.A., Inc., Australian Office Action for Australian App No. 2010222633, (dated Nov. 26, 2013). |
Sakura Finetek U.S.A., Inc., Japanese Office Action for JP App No. P2011-553548, (dated Dec. 10, 2013). |
Sakura Finetek U.S.A., Inc., Australian Examination Report for AU 2011291517, (dated Dec. 24, 2013). |
Sakura Finetek U.S.A., Inc., Chinese second office action for CN201080017649.4, (dated Dec. 27, 2013). |
Sakura Finetek, U.S.A., Inc., PCT International Search Report and Written Opinion for PCT/US2011/048488, (dated Oct. 13, 2011). |
Sensovation AG, PCT International Preliminary Report on Patentability for Int'l Application No. PCT/IB2010/000518, (dated Sep. 20, 2011), 7 pages. |
Sakura Finetek USA, Inc., “Office Action”, JP Application No. 2016-507909, (dated Sep. 15, 2016). |
Sakura Finetek U.S.A., Inc., “EP Supplementary Search Report”, EP Application No. 14784707.3, (dated Oct. 4, 2016). |
Sakura Finetek U.S.A., Inc., “Examination Report”, CA Application No. 2908058, (dated Nov. 16, 2016). |
Sakura Finetek U.S.A., Inc., “Extended European Search Report”, EP Application No. 15194968.2, (dated Mar. 18, 2016). |
Sakura Finetek U.S.A., Inc. “Final Rejection”, JP Application No. P2013-525005, (dated Dec. 27, 2016). |
Sakura Finetek U.S.A., Inc. “First Office Action with search report”, CN Application No. 2014800218372, (dated Nov. 30, 2016). |
Sakura Finetek U.S.A., Inc. “Fourth Office Action”, CN Application No. 201180047558X, (dated Oct. 24, 2016). |
Sakura Finetek U.S.A., Inc. “Non-Final Office Action”, U.S. Appl. No. 14/779,550, (dated Jan. 19, 2017). |
Sakura Finetek U.S.A., Inc. “Patent Examination Report No. 1”, AU Application No. 201453889, (dated May 18, 2016). |
Sakura Finetek U.S.A., Inc., “Third Office Action”, CN Application No. 201180047558X, (dated Apr. 1, 2016). |
Sakura Finetek U.S.A., Inc., “Final Office Action”, U.S. Appl. No. 14/138,740, (dated Jan. 26, 2017). |
Sakura Finetek U.S.A., Inc. “First Office Action”, EP Application No. 15194968.2, (dated Mar. 10, 2017). |
Sakura Finetek USA Inc., “Office Action”, EP Application No. 15154503.5, (dated Feb. 28, 2017). |
Requirement for Restriction/Election dated Nov. 3, 2014 for U.S. Appl. No. 13/255,827. |
Office Action received for European Patent Application No. 14784707.3, dated Mar. 5, 2018. |
Office Action received for Chinese Patent Application No. 201480021837.2, dated Apr. 8, 2018. |
Office Action received for Chinese Patent Application No. 201410415253.5, dated Feb. 9, 2018. |
Notice of Allowance received for U.S. Appl. No. 14/138,740, dated Feb. 13, 2018. |
Notice of Allowance and Fees Due (PTOL-85) dated Dec 16, 2015 for U.S. Appl. No. 13/255,827. |
Non-Final Office Action received for U.S. Appl. No. 14/779,550, dated Dec. 22, 2017. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/IB2010/000518, dated Jul. 16, 2010, 16 pages (8 pages of English Translation and 8 pages of Original Document). |
Extended European Search Report and Written Opinion received for EP Patent Application No. 17202516, dated Apr. 24, 2018. |
Sakura Finetek U.S.A., Inc., “Second Office Action”, CN Application No. 2014800218372 (dated Jan. 26, 2017). |
Forrest, K. , et al., “Tunneling calculations for GaAs—AlxGa(1-x)As graded band-gap sawtooth superlattices”, IEEE Journal of Quantum Electronics, vol. 26, No. 6, (Jun. 1990), 1067-1074. |
Sakura Finetek U.S.A., “Non-Final Office Action”, for U.S. Appl. No. 13/212,955, (dated May 3, 2016). |
Sakura Finetek U.S.A. Inc. “Examiner's Report”, CA Application No. 2808105, (dated Jun. 12, 2017). |
Sakura Finetek U.S.A. Inc. “Examiner's Report”, CA Application No. 2908058, (dated Jul. 24, 2017). |
Sakura Finetek U.S.A. Inc. “Final Office Action”, JP Application No. 2016-507909, (dated Apr. 28, 2017). |
Sakura Finetek U.S.A. Inc. “Final Office Action”, U.S. Appl. No. 14/779,550, (dated May 24, 2017). |
Sakura Finetek U.S.A. Inc. “Non final office action”, U.S. Appl. No. 14/138,740, (dated Jun. 20, 2017). |
Sakura Finetek U.S.A., Inc. “Non final office action”, U.S. Appl. No. 13/212,955, (dated Aug. 9, 2017). |
Number | Date | Country | |
---|---|---|
20160216504 A1 | Jul 2016 | US |
Relation | Number | Country |
---|---|---|
Parent | 13255827 | US | |
Child | 15092285 | US |