1. Field of the Invention
The present invention relates to the field of ocular imaging, and, more particularly, to devices for imaging the ocular fundus.
2. Description of Related Art
The term ocular fundus refers to the inside back surface of the eye containing the retina, blood vessels, nerve fibers, and other structures. The appearance of the fundus is affected by a wide variety of pathologies, both ocular and systemic, such as glaucoma, macular degeneration, diabetes, and many others. For these reasons, most routine physical examinations and virtually all ophthalmic examinations include careful examination of the ocular fundus.
Routine examination of the ocular fundus (hereinafter referred to as fundus) is performed using an ophthalmoscope, which is a small, hand-held device that shines light through the patient's pupil to illuminate the fundus. The light reflected from the patient's fundus enters the examiner's eye, properly focused, so that the examiner can see the fundus structures.
If a hard copy of the fundus view is desired, a device called a fundus camera can be used. However, using existing fundus cameras successfully is a difficult undertaking. The operator must (1) position the fundus camera at the correct distance from the eye, (2) position it precisely in the vertical and horizontal directions so that the light properly enters the pupil of the patient's eye, (3) refine the horizontal and vertical adjustments so that the light reflected from the front surface of the eye, the cornea, does not enter the camera, (4) position a visual target for the patient to look at so that the desired region of the fundus will be imaged, and (5) focus the fundus image. All these operations must be performed on an eye that is often moving. Therefore, the use of existing fundus cameras requires a significant amount of training and skill; even the most skilled operators often collect a large number of images of a single eye in order to select one of good quality.
In existing fundus cameras, alignment and focusing are performed under visual control by the operator. This usually requires that the patient's eye be brightly illuminated. Such illumination would normally cause the pupils to constrict to a size too small to obtain good images. Therefore, most existing fundus cameras require that the patient's pupil be dilated by drugs.
U.S. Pat. No. 4,715,703 describes an invention made by one of the present inventors and discloses apparatus for analyzing the ocular fundus. The disclosure in this patent is incorporated herein by reference.
The present invention is in the nature of a fundus camera which automatically and quickly performs all the aligning and focusing functions. As a result, any unskilled person can learn to obtain high quality images after only a few minutes of training and the entire imaging procedure requires far less time than existing fundus cameras. Moreover, all of the automatic aligning and focusing procedures are performed using barely visible infrared illumination. With such illumination, the patient's pupils do not constrict and for all but patients with unusually small natural pupils, no artificial dilation is required. The fundus images can be obtained under infrared illumination and are acceptable for many purposes so that the patient need not be subjected to the extremely bright flashes required for existing fundus cameras. To obtain standard color images using the present invention, it is sometimes necessary to illuminate the eye with flashes of visible light. However, such images can be obtained in a time appreciably shorter than the reaction time of the pupil, so that the pupil constriction that results from the visible flash does not interfere with image collection. Unlike existing fundus cameras, the present invention provides for automatic selection of arbitrary wavelengths of the illuminating light. This facility has two significant advantages. First, it is possible to select illuminating wavelengths that enhance the visibility of certain fundus features. For example, certain near-infrared wavelengths render the early stages of macular degeneration more visible than under white illumination. Second, by careful selection of two or more wavelengths in the near infrared, it is possible to obtain a set of images which, when properly processed, generate a full color fundus image that reveals sub-retinal fundus features. Thus, it is possible to obtain acceptable color fundus images without subjecting the patient to bright flashes.
It is therefore a primary object of the present invention to provide a fundus imager which automatically positions fundus illuminating radiation to enter the pupil while preventing reflection from the cornea from obscuring the fundus image, irrespective of movement of the eye or the patient's head within the head restraint.
Another object of the present invention is to provide automatic focusing of the fundus image based upon the image itself.
Yet another object of the present invention is to provide automatic positioning of one or a sequence of fixation targets to select the section(s) of the fundus to be imaged.
Still another object of the present invention is to provide a fundus imager for collecting a set of images that can be arranged in a montage to provide a very wide angle fundus image, facilitated by the capability of the fundus imager to automatically align and focus the images.
A further object of the present invention is to provide automatic setting of video levels in a fundus imager to use the full range of levels available.
Yet another object of the present invention is to permit aligning and focusing a fundus imager under infrared illumination to permit imaging without drug induced dilation of the pupil.
A yet further object of the present invention is to provide for automatic selection of illumination wavelength.
A yet further object of the present invention is to provide a colored image from a fundus imager by sequential imaging and registration of images.
A yet further object of the present invention is to provide for automatic acquisition by a fundus imager of a stereo image pair having a known stereo base.
A yet further object of the present invention is to provide a head positioning frame for use with a fundus imager.
A yet further object of the present invention is to accommodate for astigmatism and/or extreme near and far sightedness by placing a lens of the patient's glasses in the path of illumination of the fundus imager.
A yet further object of the present invention is to provide a method for automatically positioning the illuminating radiation of a fundus imager to prevent corneal reflections from obscuring the fundus image obtained.
A yet further object of the present invention is to provide a method for automatic focusing in a fundus imager.
These and other objects of the present invention will become apparent to those skilled in the art as the description thereof proceeds.
The present invention will be described with greater specificity and clarity with reference to the following drawings, in which:
Referring to
Light diffusely reflected from fundus 14 emerges from pupil P and half of it is transmitted through beam splitter BS2 toward collimating lens L4, which lens is at its focal distance from the pupil. If the patient's eye is focused at infinity, the light reflected from each point on fundus 14 will be collimated as it is incident on lens L4. Therefore, the 50% of the light that passes through beam splitter BS2 will form an aerial image of the fundus in the focal plane of lens L4, which focal plane is represented by a dashed line identified as FI (Fundus Image). The light passes through lens L6, which lens is at its focal distance from fundus image FI. Thus, lens L6 will collimate light from each point on the fundus. Further, because the light considered as originating in the plane of pupil P is collimated by lens L4, lens L6 will form an image of the pupil in its back focal plane, which is coincident with the location of second aperture A2. Light passing through second aperture A2 is incident on lens L7, which lens will then form an image of the fundus in its back focal plane, which is coincident with an image sensor or video sensor C1. The video image produced by video sensor C1 represents an image of the fundus.
An infrared light emitting diode (LED), representatively shown and identified by reference numeral 21, diffusely illuminates the region of the front of the eye.
If the eye is not focused at infinity, the aerial fundus image FI will be moved away from the back focal plane of lens L4. For example, if the eye is nearsighted, the aerial fundus image will move toward lens L4. Such movement would cause the fundus image to be defocused on video sensor C1. Focusing the image under these conditions is accomplished as follows. Lens L6, aperture A2, lens L7, and video sensor C1 are mechanically connected to one another by a focusing assembly labeled FA; that is, these elements are fixedly positioned relative to one another and move as a unit upon movement of the focusing assembly. A unit identified by reference numeral 23 provides rectilinear movement of the focusing assembly on demand.
The entire optical system (10) discussed above and illustrated in
To operate optical system 10, a computer control system 30 is required, which is representatively illustrated in
In operation, an operator enters patient information data into the computer control system using the keyboard and also enters the location or set of locations on the fundus that is/are to be imaged. It may be noted that the field of view of the optical system is preferably 30° in diameter while the ocular fundus is about 200° in diameter. To image various regions of the 200° fundus, the eye can be rotated with respect to the optical system; such rotation is achieved by having the patient look from one reference point to another. After entry of the raw data, the patient's head is juxtaposed with a head positioning apparatus to locate the eye in approximate alignment with respect to the optical axis. An image of the front of the eye produced by a video sensor or camera CAM, (
Achieving proper alignment of the optical system with the eye requires that the light from light source S enter the pupil. Initially, the angular position of beam splitter BS1 is set so that the image of aperture A1 lies on the optical axis of the system. It is noted that the image of aperture A1 contains the light used to illuminate the fundus. If the operator has initially centered the pupil image even crudely, light from light source S will enter the pupil. About two percent (2%) of the light incident on the eye will be reflected from the corneal surface, and if this light reaches video sensor C1, it would seriously obscure the image of the fundus. Therefore, the optical system includes the following elements for preventing corneal reflection from reaching video sensor C1.
If the light rays forming the image of aperture A1 were aligned so that the central ray were perpendicular to the corneal surface, then many of the rays in the corneal reflection would pass backward along the incident light paths. As shown in
However, the corneal surface is steeply curved and if the central ray of the incident light is moved far enough away from the perpendicular to the cornea, as shown in
Initially, the angle of beam splitter BS1 (
If a fundus image were to be collected under these conditions, the reflection from the cornea would severely spoil the fundus image. To prevent this, after alignment is achieved, the angular position of beam splitter BS1 is changed by motor 24 and linkage 26 to move the image of A1 to the bottom of the pupil. If the pupil is about 4 mm or larger in diameter, this will deflect the corneal reflection sufficiently that it will not enter lens L4. To do this, the diameter of the pupil must be known. This diameter is determined by performing the method described below for automatic alignment in the vertical and horizontal directions.
If the pupil is relatively small, a further technique is employed to allow greater displacement of the illumination away from the center of the pupil. This is accomplished by automatically changing the aiming point of the vertical alignment servo so that the image of the pupil moves downward with respect to the optical axis by an amount that is a fixed proportion of the pupil diameter. Thus, the region of the pupil through which the images are collected moves toward the top of the pupil and the image of A1 has more room to move downward. This description refers to movement of the image of A1 to the bottom of the pupil. The same effect can be achieved by moving the image of A1 to the top of the pupil and moving the servo aiming point so that the pupil image moves upward. In general, if the patient is looking downward, moving the image of A1 downward is more effective, and if the patient is looking upward, it is more effective to move the image of A1 to the top of the pupil.
A method for tracking the pupil and positioning the image of aperture A1 on the pupil of the eye will be described hereafter with reference to
About half of the light reflected from fundus 14 is reflected from beam splitter BS2 through lens L3, and about 10% of that light passes through beam splitter BS1. Some of that light passes through a lens L8 and falls on a small camera CAM1, on which an image of the pupil is formed. Other rays pass through another lens L9 (shown in dashed lines) to camera CAM2 (shown in dashed lines) and form another image of the pupil. These lenses and cameras are placed one above and the other below the plane of the paper in
The output of one of these cameras is used to position optical system 10 along the x and y axes, as described in further detail below. To position the optics at the correct distance from the eye (the z direction), the images from cameras CAM1 and CAM2 are compared in software. When the pupil is at the correct distance from the optics, that is, when the pupil is in the focal plane of lens L3 (and therefore, because of the mechanical arrangement, in the focal plane of lens L4), the two pupil images will lie in a particular relationship to each other. If the optical system were perfectly aligned and centered, the two images would each be perfectly centered in the fields of view of their respective cameras CAM1 and CAM2. Then, considering the fields of view of the two cameras as superimposed, if the pupil image from the left camera is to the left of the image from the right camera, the optics need to be moved closer to the patient, and vice versa.
If the optical system is not perfectly aligned, there will be a particular relative positioning between the two images that occurs when the pupil is in the correct position, and the software drives optical system 10 in the z direction until that relative position is attained. (That relative position is determined during the procedure for optically aligning the entire system.)
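In code, the resulting distance-error signal reduces to a difference of pupil-center positions. The following is a minimal sketch, assuming the pupil-center x coordinates have already been extracted from the two camera images; the function name, sign convention, and calibrated offset parameter are illustrative assumptions, not details from the patent:

```python
def z_error(left_center_x, right_center_x, aligned_offset=0.0):
    """Signed error signal for the z-axis (distance) servo.

    aligned_offset is the relative horizontal positioning of the two
    pupil images measured when the entire system was optically aligned.
    Interpreting a negative result as "move the optics closer to the
    patient" is an assumed convention, not specified in the text.
    """
    return (left_center_x - right_center_x) - aligned_offset
```

The servo would then drive optical system 10 in the z direction until this value settles near zero.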
A method for finding the center and the edges of the pupil image will now be described. It involves finding the edges of the pupil image on each video line that intersects those edges and then computing the most likely position of the center and of the edges of the actual pupil. The image from camera CAM1 is read out, as is standard video practice, by reading the values of the various points along a horizontal line and then the values along the next horizontal line, etc. (neglecting the detail of interlacing). If a given video horizontal line intercepts the image of the pupil, the video level will abruptly rise from the dark background level to the brighter level of the pupil. To locate this transition and find the position of each edge, it is necessary to define the values of the background and of the pupil. To do this, a histogram of pixel values is formed during the first few video frames. It will contain a large peak with values near zero, representing dark background pixels, and additional peaks at higher values that represent the pupil and various reflections to be discussed below. A typical histogram is illustrated in
The “background level” is defined as the level just below the first minimum. Specifically, the histogram is first smoothed using a running block filter; that is, for each position on the horizontal axis, the vertical value on the curve is replaced by the average of that value and its adjoining values. This computation is performed in steps along the horizontal axis (video level) until there are ten consecutive values for which the vertical value increases. The “background value” is then defined as the lowest of these ten values. An “edge point” on each horizontal line is defined as the horizontal location at which the video level changes from at or below the “background value” to above it, or from above it to at or below it. As the video scan proceeds, the location of each point is saved. Thus, at the end of each video frame, a set of point locations is stored in the computer memory (see
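A minimal Python sketch of this background-level and edge-point computation follows, assuming an 8-bit grayscale frame held in a NumPy array; the function names and the three-bin smoothing width are illustrative assumptions:

```python
import numpy as np

def background_level(frame, levels=256):
    """Estimate the dark-background video level from the pixel histogram."""
    hist, _ = np.histogram(frame, bins=levels, range=(0, levels))
    # Running block filter: average each value with its adjoining values.
    smoothed = np.convolve(hist, np.ones(3) / 3.0, mode="same")
    rising = 0
    for level in range(1, levels):
        rising = rising + 1 if smoothed[level] > smoothed[level - 1] else 0
        if rising >= 10:
            return level - 9  # lowest of the ten consecutive rising values
    return 0  # no first minimum found; assume an all-dark frame

def edge_points(frame, bg):
    """Collect (x, y) locations where each video line crosses the pupil edge."""
    points = []
    for y, line in enumerate(frame):
        above = line > bg
        # Indices where the level crosses from at-or-below bg to above, or back.
        for x in np.flatnonzero(above[1:] != above[:-1]) + 1:
            points.append((int(x), int(y)))
    return points
```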
If the pupil image consisted solely of a bright disk on a dark background, the above described procedure would essentially always succeed in finding a close approximation to the actual pupil edges. However, for real pupil images the procedure is confounded by two sources of reflections. The first is light reflected from the cornea; if this light reaches cameras CAM1 and CAM2, it will form a bright spot superimposed on the pupil image. If that spot were entirely within the margins of the pupil, it would not interfere with the process described above. However, if it falls on the edge of the pupil image, as it may when a patient is looking at an angle to the optical axis of the optical system, then it will appear as a bulge on the edge of the pupil, as illustrated in
One such special procedure will be described below. The edge points are collected as described above. There will typically be several hundred such points. An ellipse is then determined that best fits the set of edge points. The pupil of the human eye is usually circular, but if it is viewed from an angle, as it will be if the patient is looking at a point other than one on the optical axis, then the image of the pupil will approximate an ellipse. So long as the reflections from the cornea and iris do not overlap a major part of the pupil edge (and so long as the pupil is not of grossly abnormal shape), such a procedure yields a good estimate of the locations of the actual pupil center and edge.
One method for finding the best fitting ellipse will be described. Assume that two hundred points have been labeled edge points by the above procedure; each such point has a horizontal (x) and a vertical (y) location. Assume that these two hundred points, that is, pairs of values (x, y), are in a consecutive list. Five points are selected at random from the list, requiring only that each selected point be separated from the next selected point by at least ten positions on the list. This process yields the locations of five putative edge points that are some distance apart on the pupil. These five pairs of values are substituted into the equation for an ellipse, which is then solved for the five ellipse parameters. One form of the equation for an ellipse is:
c1*x^2 + c2*x*y + c3*y^2 + c4*x + c5*y = 1
Substitute the five putative edge points as the pairs (x, y) of values in that equation and solve the resulting five linear equations (by inverting the 5×5 matrix) to find the values of c1 through c5. Then the angle that the ellipse makes with the x axis is:
θ = ½*arccot((c1 − c3)/c2)
Then, if u = x*cos θ + y*sin θ and v = −x*sin θ + y*cos θ, the equation becomes d1*u^2 + d3*v^2 + d4*u + d5*v = 1,
where d1 = c1*cos^2 θ + c2*cos θ*sin θ + c3*sin^2 θ
d3 = c1*sin^2 θ − c2*cos θ*sin θ + c3*cos^2 θ
d4 = c4*cos θ + c5*sin θ
d5 = −c4*sin θ + c5*cos θ
The center of the ellipse has u coordinate u = −d4/(2*d1) and v coordinate v = −d5/(2*d3), so the center of the ellipse has the x coordinate
x = u*cos θ − v*sin θ
and the y coordinate
y = u*sin θ + v*cos θ
If R = 1 + d4^2/(4*d1) + d5^2/(4*d3), then the semiaxes of the ellipse have lengths
√(R/d1) and √(R/d3)
This entire procedure is repeated, say, 100 times for 100 different sets of putative points yielding 100 different estimates of the x,y location of the center. The best fitting ellipse is the one for which the center is closest to the median x and y values of the set of 100.
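A hedged NumPy sketch of this procedure follows, assuming the edge points are held as a list of (x, y) pairs; the bounded retry loop and the spacing-by-rejection check are illustrative simplifications, not details from the patent:

```python
import numpy as np

def fit_ellipse(pts5):
    """Solve c1*x^2 + c2*x*y + c3*y^2 + c4*x + c5*y = 1 for five points."""
    A = np.array([[x * x, x * y, y * y, x, y] for x, y in pts5], dtype=float)
    return np.linalg.solve(A, np.ones(5))  # c1..c5

def ellipse_center(c):
    """Center (x, y) of the ellipse, via rotation to principal axes."""
    c1, c2, c3, c4, c5 = c
    theta = 0.5 * np.arctan2(c2, c1 - c3)  # equivalent to the arccot form
    ct, st = np.cos(theta), np.sin(theta)
    d1 = c1 * ct * ct + c2 * ct * st + c3 * st * st
    d3 = c1 * st * st - c2 * ct * st + c3 * ct * ct
    d4 = c4 * ct + c5 * st
    d5 = -c4 * st + c5 * ct
    u, v = -d4 / (2 * d1), -d5 / (2 * d3)
    return u * ct - v * st, u * st + v * ct  # rotate back to (x, y)

def robust_center(edge_pts, trials=100, min_gap=10, rng=None):
    """Repeat the five-point fit; keep the estimate nearest the median."""
    rng = rng or np.random.default_rng()
    centers = []
    for _ in range(trials * 10):  # bounded number of random draws
        if len(centers) == trials:
            break
        idx = sorted(rng.choice(len(edge_pts), 5, replace=False))
        if min(b - a for a, b in zip(idx, idx[1:])) < min_gap:
            continue  # enforce spacing along the list, as described above
        try:
            centers.append(ellipse_center(fit_ellipse([edge_pts[i] for i in idx])))
        except np.linalg.LinAlgError:
            continue  # degenerate five-point set; draw again
    centers = np.array(centers)
    median = np.median(centers, axis=0)
    return centers[np.argmin(np.linalg.norm(centers - median, axis=1))]
```

The median test is what confers robustness: trials whose five points happened to include a corneal or iris reflection produce outlying center estimates that are simply outvoted.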
The resulting deviations between the horizontal and the vertical locations of the center of the chosen ellipse and the optical axis of the optical system can be used directly as error signals to drive the positioning servos associated with assembly 20 (
An automatic method for focusing the fundus image will be described with reference to
To explain more clearly the direction of displacement of the focusing assembly (FA) to achieve correct focus, joint reference will be made to
Thereby, automatic focusing is achieved by finding the displacement of one image of a pair that is required to bring the two images into registry, and then moving the focusing assembly in accordance with that result. The required displacement can be found by computing a cross-correlation function between the two images. This is a mathematical computation that, in effect, lays one image on top of the other and measures how well the two images correspond, then shifts one image horizontally a little with respect to the other and measures the correspondence again, repeating these steps for a large number of relative positions of the two images. Finally, the shift that produces the best correspondence is computed.
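A brute-force version of this shift search can be sketched as follows, assuming two grayscale NumPy images and a bounded search range; the names, the range, and the normalized-correlation scoring are assumptions, and a production implementation would more likely use an FFT-based correlation:

```python
import numpy as np

def best_shift(img_a, img_b, max_shift=40):
    """Horizontal shift of img_b that best registers it with img_a."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping columns for this candidate shift.
        if s >= 0:
            a, b = img_a[:, s:], img_b[:, : img_b.shape[1] - s]
        else:
            a, b = img_a[:, :s], img_b[:, -s:]
        a = a - a.mean()
        b = b - b.mean()
        score = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
        if score > best_score:
            best, best_score = s, score
    return best  # displacement giving the best correspondence
```

The sign of the returned shift indicates the direction in which the focusing assembly must be moved, and its magnitude is proportional to the distance, as explained above.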
Even when a patient is trying to hold his/her eye steady, the eye is always moving and as a result the fundus image is continually shifting across the sensing surface of video sensor C1. Exposure durations for individual images are chosen to be short enough (about 15 milliseconds) so that this motion does not cause significant blur. Nevertheless, the time interval between members of pairs of images taken during the automatic focusing procedure may be long enough to allow movement between the images that would confound the focusing algorithm. Therefore, the actual procedure requires that a number of pairs of images be collected and, only when two members of a pair agree will they be used as the measure of focus error.
The focusing method described above requires that a number of image pairs must be collected in order to find a set that is relatively unspoiled by eye movements. It would be preferable to obtain the two images (one through the left and the other through the right side of the pupil) simultaneously, so that eye movements would not affect the result. A method for simultaneous image collection is described below.
If subassembly 74 is moved farther upward by linear actuator 72, the image of the patient's pupil will fall on double mirrors 78, 80. In a preferred embodiment, the double mirrors are formed by a right angle prism 82 with the two faces (78, 80) that form a right angle being silvered. When the mirrors are in place, light from the fundus that exits through the left side of the pupil is deflected through a lens L8 and onto a video sensor or camera, CAM3, and light from the right side of the pupil is deflected through another lens L9 and to another video sensor or camera, CAM4. When the fundus image is in proper focus, light from the fundus will be collimated when it arrives at each of lenses L8, L9 and, because cameras CAM3 and CAM4 lie in the focal planes of those lenses, an image of the fundus will be formed on each camera, one image being formed with light passing through the left side of the pupil and the other with light passing through the right side of the pupil. The two cameras are synchronized so that the two images are captured simultaneously.
If the fundus is in correct focus, and if the two lenses (L8, L9) and two cameras (CAM3, CAM4) are perfectly positioned on the optical axes, then the two images will occupy identical positions on the two cameras. If the image is out of focus, the two images will move in opposite directions with respect to their respective cameras. Thus, computing a cross-correlation function on the two images provides the information necessary to move the focus assembly FA to achieve correct focus, (by the same principle as explained with reference to
If the prisms were removed and the fundus were in good focus, the images through the top and bottom holes would be precisely superimposed, but if the image is out of focus, one image would move up and the other down, in proportion to the degree of defocus. If that displacement could be measured, it would serve as the error signal to perform automatic focusing of the fundus image. However, because the two images would strongly overlap, there is no simple way to distinguish one image from the other. The prisms serve the function of moving the two images so that they do not overlap, as follows.
Upper wedge prism 96 deflects all of the rays 100 passing through it upwards, 15 degrees in the preferred embodiment. Therefore, the fundus image formed through the top of the pupil will move upwards on the camera. This will cause the top half of the image to fall above the sensor surface and be lost. However, the bottom half of the image will fall on the top half of the sensor and can be captured since the field of view is 30 degrees.
Right angle prism 98, acting as a dove prism, performs two different functions. First, because hypotenuse side 102 of the prism is not horizontal but is tilted downward, the image (ray 104) passing through it will be deflected downward by 15 degrees. If this were its only action, it would cause the upper half of the fundus image to fall on the lower half of the sensor; the two images, being different parts of the fundus, could not then be compared. However, its dove prism action causes the image passing through it to be rotated through 180 degrees (as depicted by ray 104), so that the bottom half of the fundus image falls on the bottom half of the sensor. That allows the relative positions of the two images (and thus the focus error) to be computed. In this way, the focus error can be determined from (half) images collected simultaneously.
Of course, if the two lower holes (A7, A8) were side by side instead of one above the other, and the prisms were rotated accordingly, the two half images would be positioned one on the left and the other on the right half of the sensor, and the computation for focus error could again be accomplished.
Selection of the fundus region to be imaged will now be described. Adjacent beam splitter BS1 illustrated in
In addition to the LED's in the plane labeled FIX, other visible LED's, such as LED 28 shown in
When the operator sets up the instrument prior to collecting images, he/she selects the region or set of regions of the fundus to be imaged. If just one region is to be imaged, the appropriate LED will be lighted. If a series of locations is to be imaged, the computer (see
After the image of aperture A1 has been located to exclude the corneal reflection and focusing has been achieved, another pair of images is collected with aperture A2 in each of two positions. This pair of images constitutes a stereo pair of images with a known stereo base, which base is the distance through which aperture A2 has moved.
During the alignment and focusing procedures previously described, filter F (see
During the interval between images collected in different wavelengths, it is possible that the eye, and thus the fundus image, will move significantly. If such movement occurs, then the variously colored images would not be in registry when displayed. To prevent this occurrence the images are automatically registered before being displayed by performing a two-dimensional cross-correlation and then shifting the images in accordance with the result.
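This registration step can be sketched with an FFT-based two-dimensional cross-correlation, assuming same-sized grayscale NumPy frames; this handles integer-pixel registration only, and the windowing and sub-pixel refinement a real instrument would likely need are omitted:

```python
import numpy as np

def register(reference, moving):
    """Shift `moving` into registry with `reference`.

    The peak of the circular cross-correlation gives the integer
    (dy, dx) displacement; np.roll then applies it.
    """
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret shifts past the midpoint as negative displacements.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return np.roll(moving, (dy, dx), axis=(0, 1))
```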
Essentially all standard ophthalmic instruments position a patient's head using a combination of a chin rest and a forehead rest. Other devices, such as a combination of a chin rest and a support for the bridge of the nose, would be suitable. Typically, only a bridge-of-the-nose rest is used in the present device. These types of devices are representatively shown in
The motion of focusing assembly FA (see
Color images are always composed of what can be considered as separate images taken in each of a number of wavelength bands. In the present invention, the bands are chosen by selecting filters (such as filters F, F1 shown in
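Once the single-band frames are collected and registered, composing a color image amounts to a weighted combination of the bands. A hedged sketch follows; the band-to-RGB weighting interface and all names are illustrative assumptions, since the patent does not specify this step in code:

```python
import numpy as np

def compose_color(band_images, band_weights):
    """Combine registered single-band fundus frames into one RGB image.

    band_images maps a band name to a registered 2-D float array;
    band_weights maps the same names to (r, g, b) contributions.
    """
    shape = next(iter(band_images.values())).shape
    rgb = np.zeros(shape + (3,))
    for band, frame in band_images.items():
        rgb += frame[..., None] * np.asarray(band_weights[band], dtype=float)
    return np.clip(rgb / rgb.max(), 0.0, 1.0)  # normalize for display
```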
It is standard procedure in fundus imaging to display black and white images taken in green and in red light to reveal these different features, and also to combine those images to form a single color image. Through software and manipulation of a mouse, a technique has been implemented that presents these images in an interesting and useful way, as set forth below.
The computer screen 60 (
When a fundus image is displayed, a small image of the pupil that was taken at the same time as the fundus image is also displayed. The pupil image has drawn upon it indications of which parts of the pupil were used to collect the images (that is, which parts of the pupil were imaged on the two positions of holes A3, A4 or A5, A6 in
If the eye being imaged has a cataract that lies in the relevant optical path, the fundus image can be spoiled. The way in which the pupil image is formed in the cameras, (
While the invention has been described with reference to several particular embodiments thereof, those skilled in the art will be able to make various modifications to the described embodiments of the invention without departing from the true spirit and scope of the invention. It is intended that all combinations of elements and steps which perform substantially the same function in substantially the same way to achieve the same result are within the scope of the invention.
The present application is a continuation-in-part of an application entitled "Ocular Fundus Auto Imager", filed Dec. 16, 2002, assigned Ser. No. 10/311,492, now U.S. Pat. No. 7,025,459, which is a national phase application based on a Patent Cooperation Treaty application entitled "Ocular Fundus Auto Imager", filed Jul. 6, 2001, assigned Ser. No. PCT/US01/21410, which is a continuation of and claims priority to a United States application entitled "Ocular Fundus Auto Imager", filed Aug. 25, 2000, assigned Ser. No. 09/649,462, now U.S. Pat. No. 6,296,358, and which application claims priority to the subject matter disclosed in a provisional application entitled "FUNDUS AUTO IMAGER", filed Jul. 17, 2000 and assigned Ser. No. 60/218,757, all of which applications are directed to an invention made by the present inventors and assigned to the present assignee.
Number | Name | Date | Kind |
---|---|---|---|
3915564 | Urban | Oct 1975 | A |
4019813 | Cornsweet et al. | Apr 1977 | A |
4187014 | Kato et al. | Feb 1980 | A |
4281926 | Cornsweet | Aug 1981 | A |
4283124 | Matsumura | Aug 1981 | A |
4329049 | Rigg et al. | May 1982 | A |
4405215 | Sano et al. | Sep 1983 | A |
4436388 | Takahashi et al. | Mar 1984 | A |
4469416 | Isono | Sep 1984 | A |
4526450 | Suzuki et al. | Jul 1985 | A |
4544248 | Nunokawa | Oct 1985 | A |
4579430 | Bille | Apr 1986 | A |
4580885 | Takahashi | Apr 1986 | A |
4591249 | Takahashi et al. | May 1986 | A |
4666268 | Ito | May 1987 | A |
4673264 | Takahashi | Jun 1987 | A |
4690525 | Kobayashi et al. | Sep 1987 | A |
4715703 | Cornsweet et al. | Dec 1987 | A |
4732466 | Humphrey | Mar 1988 | A |
4773749 | Ohtomo et al. | Sep 1988 | A |
4834526 | Nunokawa | May 1989 | A |
4902121 | Shinn | Feb 1990 | A |
4989023 | Sakurai et al. | Jan 1991 | A |
4993827 | Benedek et al. | Feb 1991 | A |
5064286 | Ai et al. | Nov 1991 | A |
5072731 | Taratuta et al. | Dec 1991 | A |
5090799 | Makino et al. | Feb 1992 | A |
5114222 | Cornsweet | May 1992 | A |
5125730 | Taylor et al. | Jun 1992 | A |
5129400 | Makino et al. | Jul 1992 | A |
5196872 | Beesmer et al. | Mar 1993 | A |
5202708 | Sasaki et al. | Apr 1993 | A |
5210554 | Cornsweet et al. | May 1993 | A |
5233517 | Jindra | Aug 1993 | A |
5349419 | Taguchi et al. | Sep 1994 | A |
5371557 | Nanjho et al. | Dec 1994 | A |
5382988 | Nanjo | Jan 1995 | A |
5410376 | Cornsweet et al. | Apr 1995 | A |
5422690 | Rothberg et al. | Jun 1995 | A |
5504543 | Ueno | Apr 1996 | A |
5508760 | Kobayashi et al. | Apr 1996 | A |
5539487 | Taguchi et al. | Jul 1996 | A |
5540226 | Thurston et al. | Jul 1996 | A |
5542422 | Hayden | Aug 1996 | A |
5572266 | Ohtsuka | Nov 1996 | A |
5579063 | Magnante et al. | Nov 1996 | A |
5697006 | Taguchi et al. | Dec 1997 | A |
5710630 | Essenpreis et al. | Jan 1998 | A |
5713047 | Kohayakawa | Jan 1998 | A |
5764341 | Fujieda et al. | Jun 1998 | A |
5844658 | Kishida et al. | Dec 1998 | A |
5896198 | Chou et al. | Apr 1999 | A |
5908394 | Kandel et al. | Jun 1999 | A |
5912720 | Berger et al. | Jun 1999 | A |
5914771 | Biber | Jun 1999 | A |
5943116 | Zeimer | Aug 1999 | A |
5993001 | Bursell et al. | Nov 1999 | A |
6086205 | Svetliza | Jul 2000 | A |
6112114 | Dreher | Aug 2000 | A |
6179421 | Pang | Jan 2001 | B1 |
6257722 | Toh | Jul 2001 | B1 |
6276799 | Van Saarloos | Aug 2001 | B1 |
6296358 | Cornsweet et al. | Oct 2001 | B1 |
6304723 | Kohayakawa | Oct 2001 | B1 |
6309068 | Kohayakawa | Oct 2001 | B1 |
6361167 | Su et al. | Mar 2002 | B1 |
6659613 | Applegate et al. | Dec 2003 | B2 |
Number | Date | Country |
---|---|---|
WO9617545 | Dec 1995 | WO |
Number | Date | Country | |
---|---|---|---|
20040263784 A1 | Dec 2004 | US |
Number | Date | Country | |
---|---|---|---|
60218757 | Jul 2000 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09649462 | Aug 2000 | US |
Child | 10311492 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10311492 | US | |
Child | 10867523 | US |