Systems and methods to improve clarity in ultrasound images

Information

  • Patent Grant
  • Patent Number
    8,221,322
  • Date Filed
    Wednesday, February 28, 2007
  • Date Issued
    Tuesday, July 17, 2012
Abstract
Systems, methods, and devices for improving the clarity of ultrasound-based images are described, wherein moving sections of a region of interest are compensated against its still sections. Velocity map analysis of regions of interest is used to distinguish instrument motion from motions attributable to structures within the region of interest. Methods include image processing algorithms applied to collected echogenic data sets and dispensers that apply air-expunged sonic coupling mediums.
Description
FIELD OF THE INVENTION

Embodiments of the invention pertain to the field of improving the clarity of ultrasound images. Other embodiments of this invention relate to visualization methods and systems, and more specifically to systems and methods for visualizing the trajectory of a cannula or needle being inserted in a biologic subject.


BACKGROUND OF THE INVENTION

The clarity of ultrasound-acquired images is affected by motions of the examined subject, the motions of organs and fluids within the examined subject, the motion of the probing ultrasound transceiver, the coupling medium used between the transceiver and the examined subject, and the algorithms used for image processing. As regards image processing, frequency-domain approaches have been utilized in the literature, including Wiener filters, which are implemented in the frequency domain and assume that the point spread function (PSF) is fixed and known. This assumption conflicts with the observation that received ultrasound signals are usually non-stationary and depth-dependent. Because the algorithm is implemented in the frequency domain, any error introduced in the PSF leaks across the spatial domain. As a result, the performance of Wiener filtering is not ideal.
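As an illustration of the frequency-domain approach critiqued above, the following sketch applies a classical Wiener deconvolution to a simulated 1-D RF line under the fixed-and-known-PSF assumption; the function names, toy pulse, and noise-to-signal constant are illustrative assumptions, not the method of this disclosure.

```python
import numpy as np

def wiener_deconvolve(observed, psf, noise_to_signal=0.01):
    """Frequency-domain Wiener deconvolution with a fixed, known PSF.

    This is the classical approach critiqued above: the PSF is assumed
    stationary over the whole signal, so any PSF error leaks across the
    spatial domain once the division is carried out in frequency space.
    """
    n = len(observed)
    H = np.fft.fft(psf, n)                 # transfer function of the PSF
    Y = np.fft.fft(observed, n)
    # Wiener filter: conj(H) / (|H|^2 + K), K = noise-to-signal power ratio
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft(G * Y))

# Toy example: a sparse "reflector" sequence blurred by a short pulse.
rng = np.random.default_rng(0)
reflectors = np.zeros(256)
reflectors[[40, 120, 200]] = [1.0, -0.7, 0.5]
pulse = np.array([0.2, 0.8, 1.0, 0.6, 0.1])   # hypothetical fixed PSF
rf_line = np.convolve(reflectors, pulse, mode="full")[:256]
rf_line += 0.001 * rng.standard_normal(256)

restored = wiener_deconvolve(rf_line, pulse, noise_to_signal=0.001)
```

When the true PSF varies with depth, as it does for real RF lines, the single transfer function H above no longer matches the data, and the resulting error is spread over the entire restored line.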


As regards prior uses of coupling mediums, the most common container for dispensing ultrasound coupling gel is an 8 oz. plastic squeeze bottle with an open, tapered tip. The tapered-tip bottle is inexpensive, is easy to refill from a larger bag- or pump-type reservoir, and dispenses gel in a controlled manner. Other examples include the Sontac® ultrasound gel pad available from Verathon™ Medical, Bothell, Wash., USA, a pre-packaged, circular pad of moist, flexible coupling gel 2.5 inches in diameter and 0.06 inches thick that is advantageously used with BladderScan devices. The Sontac pad is simple to apply and to remove, and provides adequate coupling for a one-position ultrasound scan in most cases. Yet other examples include the Aquaflex® gel pads, which perform in a similar manner to Sontac pads but are larger and thicker (2 cm thick × 9 cm diameter) and are traditionally used for therapeutic ultrasound or where some distance between the probe and the skin surface (“stand-off”) must be maintained.


The main purpose of an ultrasonic coupling medium is to provide an air-free interface between an ultrasound transducer and the body surface. Gels are used as coupling media since they are moist and deformable, but not runny: they wet both the transducer and the body surface, but stay where they are applied. The most common delivery method for ultrasonic coupling gel, the plastic squeeze bottle, has several disadvantages. First, if the bottle has been stored upright the gel will fall to the bottom of the bottle, and vigorous shaking is required to get the gel back to the bottle tip, especially if the gel is cold. This motion can be particularly irritating to sonographers, who routinely suffer from wrist and arm pain from ultrasound scanning. Second, the bottle tip is a two-way valve: squeezing the bottle releases gel at the tip, but releasing the bottle sucks air back into the bottle and into the gel. The presence of air bubbles in the gel may detract from its performance as a coupling medium. Third, there is no standard application amount: inexperienced users such as Diagnostic Ultrasound customers have to make an educated guess about how much gel to use. Fourth, when the squeeze bottle is nearly empty it is next to impossible to coax the final 5-10% of gel into the bottle's tip for dispensing. Finally, although refilling the bottle from a central source is not a particularly difficult task, it is non-sterile and potentially messy.


Sontac pads and other solid gel coupling pads are simpler to use than gel: the user does not have to guess at an appropriate application amount, the pad is sterile, and it can be simply lifted off the patient and disposed of after use. However, pads do not mold to the skin or transducer surface as well as the more liquefied coupling gels and therefore may not provide ideal coupling when used alone, especially on dry, hairy, curved, or wrinkled surfaces. Sontac pads suffer from the additional disadvantage that they are thin and easily damaged by moderate pressure from the ultrasound transducer. (See Bishop S, Draper D O, Knight K L, Feland J B, Eggett D. “Human tissue-temperature rise during ultrasound treatments with the Aquaflex gel pad.” Journal of Athletic Training 39(2):126-131, 2004).


Relating to cannula insertion, unsuccessful insertion and/or removal of a cannula, a needle, or other similar devices into vascular tissue may cause vascular wall damage that may lead to serious complications or even death. Image guided placement of a cannula or needle into the vascular tissue reduces the risk of injury and increases the confidence of healthcare providers in using the foregoing devices. Current image guided placement methods generally use a guidance system for holding specific cannula or needle sizes. The motion and force required to disengage the cannula from the guidance system may, however, contribute to a vessel wall injury, which may result in extravasation. Complications arising from extravasation resulting in morbidity are well documented. Therefore, there is a need for image guided placement of a cannula or needle into vascular tissue while still allowing a health care practitioner to use standard “free” insertion procedures that do not require a guidance system to hold the cannula or needle.


SUMMARY OF THE PARTICULAR EMBODIMENTS

Systems, methods, and devices for improving the clarity of ultrasound-based images are described. Such systems, methods, and devices include improved transducer aiming and the use of time-domain deconvolution processes that address the non-stationary effects of ultrasound signals. The deconvolution processes apply algorithms that improve the clarity or resolution of ultrasonic images by suppressing reverberation of ultrasound echoes. The initially acquired and distorted ultrasound image is reconstructed into a clearer image by countering the effect of distortion operators. An improved point spread function (PSF) of the imaging system is applied, utilizing a deconvolution algorithm, to improve the image resolution and to remove reverberations by modeling them as noise.


As regards improved transducer aiming, particular embodiments employ novel applications of computer vision techniques to perform real-time analysis. First, a computer vision method is introduced: optical flow, a powerful motion analysis technique applied in many different research and commercial fields. Optical flow is able to estimate the velocity field of an image series, and the velocity vectors provide information about the contents of the image series. In the current field of view, if the target exhibits large motion in a specific pattern, such as a consistent orientation of movement, the velocity information inside and around the target can be distinguished from other parts of the field. Otherwise, there is no valuable information in the current field of view and the scanning has to be adjusted.
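As a hedged illustration of the optical-flow idea, the sketch below estimates a single velocity vector between two frames by solving the brightness-constancy constraint in a least-squares sense over the whole frame (a minimal Lucas-Kanade-style estimate); the toy frames, window choice, and function names are illustrative assumptions rather than the disclosed algorithm.

```python
import numpy as np

def estimate_velocity(frame1, frame2):
    """Single-window Lucas-Kanade-style velocity estimate (a sketch).

    Solves the optical-flow constraint Ix*u + Iy*v + It = 0 in the
    least-squares sense over the whole frame, yielding one (u, v)
    velocity vector per frame pair.
    """
    Iy, Ix = np.gradient(frame1.astype(float))        # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)  # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)
    return u, v   # pixels/frame along x and y

# Toy image series: a bright Gaussian blob shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0)
u, v = estimate_velocity(blob(30, 32), blob(31, 32))
```

A dense velocity map, such as those discussed for the figures below, would apply the same least-squares solve in a small window around every pixel instead of once per frame.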


As regards analyzing the motions of organ movement and fluid flows within an examined subject, embodiments include new optical-flow-based methods for estimating heart motion from two-dimensional echocardiographic sequences, an optical-flow-guided active contour method for myocardial tracking in contrast echocardiography, and a method for shape-driven segmentation and tracking of the left ventricle.


As regards cannula insertion, ultrasound visualization of cannula motion is enabled by a cannula fitted with echogenic ultrasound micro-reflectors.


As regards sonic coupling gel media to improve ultrasound communication between a transducer and the examined subject, embodiments include an apparatus that dispenses a metered quantity of ultrasound coupling gel and enables one-handed gel application. The apparatus also preserves the gel in a de-gassed state (no air bubbles), preserves the gel in a sterile state (no contact between the gel applicator and the patient), provides a method for easy container refill, and preserves the shape and volume of existing gel application bottles.





BRIEF DESCRIPTION OF THE DRAWINGS

The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the Patent and Trademark Office upon request and payment of the necessary fee. Embodiments of the system and method to develop, present, and use clarity-enhanced ultrasound images are described below.



FIGS. 1A-D depict a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of an ultrasound harmonic imaging system;



FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines in an alternate embodiment of an ultrasound harmonic imaging system;



FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems;



FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems;



FIG. 5 schematically depicts a method flow chart algorithm 120 to acquire a clarity enhanced ultrasound image;



FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 7;



FIG. 7 is an expansion of sub-algorithms 200 of FIG. 5;



FIG. 8 is an expansion of sub-algorithm 300 of master algorithm illustrated in FIG. 5;



FIG. 9 is an expansion of sub-algorithms 400A and 400B of FIG. 5;



FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5;



FIG. 11 depicts a logarithm of a Cepstrum;



FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis;



FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10;



FIG. 13 is an unprocessed image that will undergo image enhancement processing;



FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13;



FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;



FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400B;



FIG. 17 is the resultant image of an alternate image processing embodiment using a Wiener filter;



FIG. 18 is another unprocessed image that will undergo image enhancement processing;



FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18;



FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;



FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400B;



FIG. 22 is another unprocessed image that will undergo image enhancement processing;



FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22;



FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;



FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400B;



FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310;



FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310;



FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required;



FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310;



FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310;



FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis direction or radial direction after undergoing sub-algorithm 310;



FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29;



FIG. 33 illustrates a 3D optical vector plot in the phi direction after undergoing sub-algorithm 310 and corresponds to FIG. 30 at T=1;



FIG. 34 illustrates a 3D optical vector plot in the radial direction after undergoing sub-algorithm 310 and corresponds to FIG. 31 at T=1;



FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310 and corresponds to FIG. 34, in which values less than the threshold T of 0.6 are set to 0;



FIGS. 36A-G depict embodiments of the sonic gel dispenser;



FIGS. 37 and 38 are diagrams showing one embodiment of the present invention;



FIG. 39 is a diagram showing additional detail for a needle shaft to be used with one embodiment of the invention;



FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft shown in FIG. 38;



FIG. 41 is a diagram showing imaging components for use with the needle shaft shown in FIG. 38;



FIG. 42 is a diagram showing a representation of an image produced by the imaging components shown in FIG. 41;



FIG. 43 is a system diagram of an embodiment of the present invention;



FIG. 44 is a system diagram of an example embodiment showing additional detail for one of the components shown in FIG. 38; and



FIGS. 45 and 46 are flowcharts of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF THE PARTICULAR EMBODIMENTS

Systems, methods, and devices for image clarity of ultrasound-based images are described and illustrated in the following figures. The clarity of ultrasound imaging requires the efficient coordination of ultrasound transfer or communication to and from an examined subject, image acquisition from the communicated ultrasound, and microprocessor-based image processing. Oftentimes the examined subject moves while image acquisition occurs, the ultrasound transducer moves, and/or movement occurs within the scanned region of interest; these conditions require the refinements described below to secure clear images.


The ultrasound transceivers or DCD devices developed by Diagnostic Ultrasound are capable of collecting in vivo three-dimensional (3-D) cone-shaped ultrasound images of a patient. Based on these 3-D ultrasound images, various applications have been developed such as bladder volume and mass estimation.


During the data collection process initiated by the DCD, a pulsed ultrasound field is transmitted into the body, and the back-scattered “echoes” are detected as a one-dimensional (1-D) voltage trace, which is also referred to as an RF line. After envelope detection, a set of 1-D data samples is interpolated to form a two-dimensional (2-D) or 3-D ultrasound image.
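A minimal sketch of the envelope-detection step described above, assuming a frequency-domain Hilbert transform to form the analytic signal of a simulated RF line; the sampling rate, pulse frequency, and windowing are illustrative assumptions.

```python
import numpy as np

def envelope(rf_line):
    """Envelope detection of a 1-D RF line via the analytic signal.

    The Hilbert transform is computed in the frequency domain: negative
    frequencies are zeroed and positive ones doubled, and the magnitude
    of the resulting analytic signal is the echo envelope.
    """
    n = len(rf_line)
    spectrum = np.fft.fft(rf_line)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

# Toy RF line: a 5 MHz carrier sampled at 40 MHz inside a Gaussian window.
fs, f0 = 40e6, 5e6
t = np.arange(512) / fs
window = np.exp(-((t - t[256]) ** 2) / (2 * (1e-6) ** 2))
rf = window * np.cos(2 * np.pi * f0 * t)
env = envelope(rf)
```

The recovered envelope tracks the Gaussian window while discarding the carrier oscillation, which is the 1-D sample set that would then be interpolated into a 2-D or 3-D image.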



FIGS. 1A-D depict a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of various ultrasound harmonic imaging systems 60A-D illustrated in FIGS. 3 and 4 below.



FIG. 1A is a side elevation view of an ultrasound transceiver 10A that includes an inertial reference unit, according to an embodiment of the invention. The transceiver 10A includes a transceiver housing 18 having an outwardly extending handle 12 suitably configured to allow a user to manipulate the transceiver 10A relative to a patient. The handle 12 includes a trigger 14 that allows the user to initiate an ultrasound scan of a selected anatomical portion, and a cavity selector 16. The cavity selector 16 will be described in greater detail below. The transceiver 10A also includes a transceiver dome 20 that contacts a surface portion of the patient when the selected anatomical portion is scanned. The dome 20 generally provides an appropriate acoustical impedance match to the anatomical portion and/or permits ultrasound energy to be properly focused as it is projected into the anatomical portion. The transceiver 10A further includes one, or preferably an array of, separately excitable ultrasound transducer elements (not shown in FIG. 1A) positioned within or otherwise adjacent to the housing 18. The transducer elements may be suitably positioned within the housing 18 or otherwise to project ultrasound energy outwardly from the dome 20, and to permit reception of acoustic reflections generated by internal structures within the anatomical portion. The one or more arrays of ultrasound elements may include a one-dimensional or a two-dimensional array of piezoelectric elements that may be moved within the housing 18 by a motor. Alternately, the array may be stationary with respect to the housing 18 so that the selected anatomical region may be scanned by selectively energizing the elements in the array.


A directional indicator panel 22 includes a plurality of arrows that may be illuminated for initial targeting and for guiding a user in targeting an organ or structure within an ROI. In particular embodiments, if the organ or structure is centered when the transceiver 10A is acoustically coupled against the dermal surface at a first location of the subject, the directional arrows may be not illuminated. If the organ is off-center, an arrow or set of arrows may be illuminated to direct the user to reposition the transceiver 10A acoustically at a second or subsequent dermal location of the subject. The acoustic coupling may be achieved by liquid sonic gel applied to the skin of the patient or by sonic gel pads against which the transceiver dome 20 is placed. The directional indicator panel 22 may be presented on the display 54 of computer 52 in harmonic imaging subsystems described in FIGS. 3 and 4 below, or alternatively, presented on the transceiver display 16.


Transceiver 10A includes an inertial reference unit that includes an accelerometer 22 and/or gyroscope 23 positioned preferably within or adjacent to housing 18. The accelerometer 22 may be operable to sense an acceleration of the transceiver 10A, preferably relative to a coordinate system, while the gyroscope 23 may be operable to sense an angular velocity of the transceiver 10A relative to the same or another coordinate system. Accordingly, the gyroscope 23 may be of conventional configuration that employs dynamic elements, or it may be an optoelectronic device, such as the known optical ring gyroscope. In one embodiment, the accelerometer 22 and the gyroscope 23 may include a commonly packaged and/or solid-state device. One suitable commonly packaged device may be the MT6 miniature inertial measurement unit, available from Omni Instruments, Incorporated, although other suitable alternatives exist. In other embodiments, the accelerometer 22 and/or the gyroscope 23 may include commonly packaged micro-electromechanical system (MEMS) devices, which are commercially available from MEMSense, Incorporated. As described in greater detail below, the accelerometer 22 and the gyroscope 23 cooperatively permit the determination of positional and/or angular changes relative to a known position that is proximate to an anatomical region of interest in the patient. Other configurations related to the accelerometer 22 and gyroscope 23 concerning transceivers 10A,B equipped with inertial reference units and the operations thereto may be obtained from copending U.S. patent application Ser. No. 11/222,360 filed Sep. 8, 2005, herein incorporated by reference.
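As a hedged sketch of how the inertial reference unit can report angular change relative to a known starting pose, the following integrates hypothetical gyroscope rate samples over time for a single axis; the sampling rate and rotation profile are illustrative assumptions, and a real unit would fuse accelerometer data and compensate for drift.

```python
def integrate_gyro(rates_deg_s, dt_s, initial_deg=0.0):
    """Dead-reckon a single tilt angle from gyroscope rate samples.

    A minimal sketch of how an inertial reference unit tracks angular
    change relative to a known starting pose: each angular-velocity
    sample (deg/s) is integrated over its sampling interval.
    """
    angle = initial_deg
    history = [angle]
    for rate in rates_deg_s:
        angle += rate * dt_s
        history.append(angle)
    return history

# 100 ms of samples at 100 Hz while the probe rotates at 30 deg/s.
track = integrate_gyro([30.0] * 10, dt_s=0.01)
```

The final entry of `track` gives the net angular change of the transceiver since the reference pose, which is the quantity used to relate successive scan positions.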


The transceiver 10A includes (or is capable of being in signal communication with) a display 24 operable to view processed results from an ultrasound scan, and/or to allow an operational interaction between the user and the transceiver 10A. For example, the display 24 may be configured to display alphanumeric data that indicates a proper and/or an optimal position of the transceiver 10A relative to the selected anatomical portion. Display 24 may be used to view two- or three-dimensional images of the selected anatomical region. Accordingly, the display 24 may be a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, or other suitable display devices operable to present alphanumeric data and/or graphical images to a user.


Still referring to FIG. 1A, a cavity selector 16 may be operable to adjustably adapt the transmission and reception of ultrasound signals to the anatomy of a selected patient. In particular, the cavity selector 16 adapts the transceiver 10A to accommodate various anatomical details of male and female patients. For example, when the cavity selector 16 is adjusted to accommodate a male patient, the transceiver 10A may be suitably configured to locate a single cavity, such as a urinary bladder in the male patient. In contrast, when the cavity selector 16 is adjusted to accommodate a female patient, the transceiver 10A may be configured to image an anatomical portion having multiple cavities, such as a bodily region that includes a bladder and a uterus. Alternate embodiments of the transceiver 10A may include a cavity selector 16 configured to select a single cavity scanning mode, or a multiple cavity-scanning mode that may be used with male and/or female patients. The cavity selector 16 may thus permit a single cavity region to be imaged, or a multiple cavity region, such as a region that includes a lung and a heart to be imaged.


To scan a selected anatomical portion of a patient, the transceiver dome 20 of the transceiver 10A may be positioned against a surface portion of a patient that is proximate to the anatomical portion to be scanned. The user actuates the transceiver 10A by depressing the trigger 14. In response, the transceiver 10A transmits ultrasound signals into the body, and receives corresponding return echo signals that may be at least partially processed by the transceiver 10A to generate an ultrasound image of the selected anatomical portion. In a particular embodiment, the transceiver 10A transmits ultrasound signals in a range that extends from approximately two megahertz (MHz) to approximately ten MHz.


In one embodiment, the transceiver 10A may be operably coupled to an ultrasound system that may be configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and to transfer the ultrasound energy to the transceiver 10A. The system also includes a processor that may be configured to process reflected ultrasound energy that is received by the transceiver 10A to produce an image of the scanned anatomical region. Accordingly, the system generally includes a viewing device, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display device, or other similar display devices, that may be used to view the generated image. The system may also include one or more peripheral devices that cooperatively assist the processor to control the operation of the transceiver 10A, such as a keyboard, a pointing device, or other similar devices. In still another particular embodiment, the transceiver 10A may be a self-contained device that includes a microprocessor positioned within the housing 18 and software associated with the microprocessor to operably control the transceiver 10A, and to process the reflected ultrasound energy to generate the ultrasound image. Accordingly, the display 24 may be used to display the generated image and/or to view other information associated with the operation of the transceiver 10A. For example, the information may include alphanumeric data that indicates a preferred position of the transceiver 10A prior to performing a series of scans. In yet another particular embodiment, the transceiver 10A may be operably coupled to a general-purpose computer, such as a laptop or a desktop computer that includes software that at least partially controls the operation of the transceiver 10A, and also includes software to process information transferred from the transceiver 10A, so that an image of the scanned anatomical region may be generated.
The transceiver 10A may also be optionally equipped with electrical contacts for communication with receiving cradles 50 as discussed in FIGS. 3 and 4 below. Although the transceiver 10A of FIG. 1A may be used in any of the foregoing embodiments, other transceivers may also be used. For example, the transceiver may lack one or more features of the transceiver 10A. For example, a suitable transceiver need not be a manually portable device, and/or need not have a top-mounted display, and/or may selectively lack other features or exhibit further differences.


Still referring to FIG. 1A, a plurality of scan planes form a three-dimensional (3D) array having a substantially conical shape. An ultrasound scan cone 40 formed by a rotational array of two-dimensional scan planes 42 projects outwardly from the dome 20 of the transceiver 10A. The other transceiver embodiments 10B-10E may also be configured to develop a scan cone 40 formed by a rotational array of two-dimensional scan planes 42. The plurality of scan planes 42 may be oriented about an axis 11 extending through the transceivers 10A-10E. One or more, or preferably each, of the scan planes 42 may be positioned about the axis 11, preferably, but not necessarily, at a predetermined angular position θ. The scan planes 42 may be mutually spaced apart by angles θ1 and θ2. Correspondingly, the scan lines within each of the scan planes 42 may be spaced apart by angles φ1 and φ2. Although the angles θ1 and θ2 are depicted as approximately equal, it is understood that the angles θ1 and θ2 may have different values. Similarly, although the angles φ1 and φ2 are shown as approximately equal, the angles φ1 and φ2 may also have different values. Other scan cone configurations are possible. For example, a wedge-shaped scan cone, or other similar shapes, may be generated by the transceiver 10A, 10B and 10C.



FIG. 1B is a graphical representation of a scan plane 42. The scan plane 42 includes the peripheral scan lines 44 and 46, and an internal scan line 48 having a length r that extends outwardly from the transceivers 10A-10E. Thus, a selected point along the peripheral scan lines 44 and 46 and the internal scan line 48 may be defined with reference to the distance r and angular coordinate values φ and θ. The length r preferably extends to approximately 18 to 20 centimeters (cm), although any length is possible. Particular embodiments include approximately seventy-seven scan lines 48 that extend outwardly from the dome 20, although any number of scan lines is possible.



FIG. 1C is a graphical representation of a plurality of scan lines emanating from a hand-held ultrasound transceiver forming a single scan plane 42 extending through a cross-section of an internal bodily organ. The number and location of the internal scan lines emanating from the transceivers 10A-10E within a given scan plane 42 may thus be distributed at different positional coordinates about the axis line 11 as required to sufficiently visualize structures or images within the scan plane 42. As shown, four portions of an off-centered region-of-interest (ROI) are exhibited as irregular regions 49. Three portions may be viewable within the scan plane 42 in totality, and one may be truncated by the peripheral scan line 44.


As described above, the angular movement of the transducer may be mechanically effected and/or it may be electronically or otherwise generated. In either case, the number of lines 48 and the length of the lines may vary, so that the tilt angle φ sweeps through angles approximately between −60° and +60° for a total arc of approximately 120°. In one particular embodiment, the transceiver 10 may be configured to generate approximately about seventy-seven scan lines between the first limiting scan line 44 and a second limiting scan line 46. In another particular embodiment, each of the scan lines has a length of approximately about 18 to 20 centimeters (cm). The angular separation between adjacent scan lines 48 (FIG. 1B) may be uniform or non-uniform. For example, and in another particular embodiment, the angular separation φ1 and φ2 (as shown in FIG. 5C) may be about 1.5°. Alternately, and in another particular embodiment, the angular separation φ1 and φ2 may be a sequence wherein adjacent angles may be ordered to include angles of 1.5°, 6.8°, 15.5°, 7.2°, and so on, where a 1.5° separation is between a first scan line and a second scan line, a 6.8° separation is between the second scan line and a third scan line, a 15.5° separation is between the third scan line and a fourth scan line, a 7.2° separation is between the fourth scan line and a fifth scan line, and so on. The angular separation between adjacent scan lines may also be a combination of uniform and non-uniform angular spacings, for example, a sequence of angles may be ordered to include 1.5°, 1.5°, 1.5°, 7.2°, 14.3°, 20.2°, 8.0°, 8.0°, 8.0°, 4.3°, 7.8°, and so on.
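The angular bookkeeping above can be sketched as follows: tilt angles are accumulated from a (possibly non-uniform) separation sequence, and a sample at depth r on a line tilted by φ is mapped to in-plane Cartesian coordinates. The starting angle, depth, and function names are illustrative assumptions.

```python
import math

def scan_line_angles(first_angle_deg, separations_deg):
    """Accumulate tilt angles from a sequence of angular separations.

    With the non-uniform sequence from the text, the second line sits
    1.5 deg past the first, the third 6.8 deg past the second, and so on.
    """
    angles = [first_angle_deg]
    for sep in separations_deg:
        angles.append(angles[-1] + sep)
    return angles

def sample_to_xy(r_cm, phi_deg):
    """Map a sample at depth r along a scan line tilted in-plane by phi."""
    phi = math.radians(phi_deg)
    return r_cm * math.sin(phi), r_cm * math.cos(phi)

# Example: five lines starting at the -60 deg limiting line, using the
# non-uniform separations quoted above, with samples at an 18 cm depth.
angles = scan_line_angles(-60.0, [1.5, 6.8, 15.5, 7.2])
points = [sample_to_xy(18.0, a) for a in angles]
```

The same accumulation works for a uniform 1.5° spacing or for a mixed uniform/non-uniform sequence; only the separation list changes.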



FIG. 1D is an isometric view of an ultrasound scan cone that projects outwardly from the transceivers of FIGS. 1A-E. Three-dimensional images of a region of interest may be presented within a scan cone 40 that comprises a plurality of 2D images formed in an array of scan planes 42. A dome cutout 41 that is complementary to the dome 20 of the transceivers 10A-10E is shown at the top of the scan cone 40.



FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines in an alternate embodiment of an ultrasound harmonic imaging system. A plurality of three-dimensional (3D) distributed scan lines emanate from a transceiver and cooperatively form a scan cone 30. Each of the scan lines has a length r that projects outwardly from the transceivers 10A-10E of FIGS. 1A-1E. As illustrated, the transceiver 10A emits 3D-distributed scan lines within the scan cone 30 that may be one-dimensional ultrasound A-lines. The other transceiver embodiments 10B-10E may also be configured to emit 3D-distributed scan lines. Taken as an aggregate, these 3D-distributed A-lines define the conical shape of the scan cone 30. The ultrasound scan cone 30 extends outwardly from the dome 20 of the transceiver 10A, 10B and 10C centered about an axis line 11. The 3D-distributed scan lines of the scan cone 30 include a plurality of internal and peripheral scan lines that may be distributed within a volume defined by a perimeter of the scan cone 30. Accordingly, the peripheral scan lines 31A-31F define an outer surface of the scan cone 30, while the internal scan lines 34A-34C may be distributed between the respective peripheral scan lines 31A-31F. Scan line 34B may be generally collinear with the axis 11, and the scan cone 30 may be generally and coaxially centered on the axis line 11.


The locations of the internal and peripheral scan lines may be further defined by an angular spacing from the center scan line 34B and between internal and peripheral scan lines. The angular spacing between scan line 34B and peripheral or internal scan lines may be designated by angle Φ and angular spacings between internal or peripheral scan lines may be designated by angle Ø. The angles Φ1, Φ2, and Φ3 respectively define the angular spacings from scan line 34B to scan lines 34A, 34C, and 31D. Similarly, angles Ø1, Ø2, and Ø3 respectively define the angular spacings between scan line 31B and 31C, 31C and 34A, and 31D and 31E.


With continued reference to FIG. 2, the plurality of peripheral scan lines 31A-31F and the plurality of internal scan lines 34A-34C may be three dimensionally distributed A-lines (scan lines) that are not necessarily confined within a scan plane, but instead may sweep throughout the internal regions and along the periphery of the scan cone 30. Thus, a given point within the scan cone 30 may be identified by the coordinates r, Φ, and Ø, whose values generally vary. The number and location of the internal scan lines emanating from the transceivers 10A-10E may thus be distributed within the scan cone 30 at different positional coordinates as required to sufficiently visualize structures or images within a region of interest (ROI) in a patient. The angular movement of the ultrasound transducer within the transceiver 10A-10E may be mechanically effected, and/or it may be electronically generated. In any case, the number of lines and the length of the lines may be uniform or otherwise vary, so that angle Φ sweeps through angles of approximately −60° between scan lines 34B and 31A, and +60° between scan lines 34B and 31B. Thus angle Φ in this example presents a total arc of approximately 120°. In one embodiment, the transceiver 10A, 10B and 10C may be configured to generate a plurality of 3D-distributed scan lines within the scan cone 30 having a length r of approximately 18 to 20 centimeters (cm).
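The coordinates r, Φ, and Ø described above behave like spherical coordinates centered on axis line 11. As an illustrative sketch (the function name and the exact angular convention are assumptions, not taken from the specification), a sample point on a 3D-distributed A-line can be located in Cartesian space as:

```python
import math

def scan_point_to_cartesian(r, phi_deg, theta_deg):
    """Locate a sample point on a 3D-distributed scan line.

    r: distance along the scan line (cm).
    phi_deg: angular spacing from the center scan line (axis line 11).
    theta_deg: rotation about the axis (assumed convention).
    Returns (x, y, z) with z measured along axis line 11.
    (Hypothetical sketch for exposition.)
    """
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)    # depth along the axis line
    return x, y, z

# A point 18 cm along the center scan line 34B lies on the axis itself.
on_axis = scan_point_to_cartesian(18.0, 0.0, 0.0)
```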



FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems. An ultrasound harmonic imaging system 100 includes one or more personal computer devices 52 that may be coupled to a server 56 by a communications system 55. The devices 52 may be, in turn, coupled to one or more ultrasound transceivers 10A and/or 10B, for example in the ultrasound harmonic sub-systems 60A-60D. Ultrasound based images of organs or other regions of interest, derived from either the signals of echoes from fundamental frequency ultrasound and/or harmonics thereof, may be shown within scan cone 30 or 40 presented on display 54. The server 56 may be operable to provide additional processing of ultrasound information, or it may be coupled to still other servers (not shown in FIG. 3) and devices. Transceivers 10A or 10B may be in wireless communication with computer 52 in sub-system 60A, in wired signal communication in sub-system 60B, in wireless communication with computer 52 via receiving cradle 50 in sub-system 60C, or in wired communication with computer 52 via receiving cradle 50 in sub-system 60D.



FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems. An Internet system 110 may be coupled or otherwise in communication with the ultrasound harmonic sub-systems 60A-60D.



FIG. 5 schematically depicts a master method flow chart algorithm 120 to acquire a clarity enhanced ultrasound image. Algorithm 120 begins with process block 150, in which an acoustic coupling or sonic gel is applied to the dermal surface near the region-of-interest (ROI) using a degassing gel dispenser. Embodiments illustrating the degassing gel dispenser and its uses are depicted in FIGS. 36A-G below. After applying the sonic gel, decision diamond 170 is reached with the query "Targeting a moving structure?", and if the answer is negative, algorithm 120 continues to process block 200. At process block 200, the ultrasound transceiver dome 20 of transceivers 10A,B is placed into the dermal residing sonic gel and pulsed ultrasound energy is transmitted to the ROI. Thereafter, echoes of the fundamental ultrasound frequency and/or harmonics thereof are captured by the transceiver 10A,B and converted to echogenic signals. If the answer to decision diamond 170 is affirmative for targeting a moving structure within the ROI, the ROI is re-targeted, at process block 300, using optical flow real-time analysis.


Whether receiving echogenic signals from non-moving targets within the ROI from process block 200, or moving targets within the ROI from process block 300, algorithm 120 continues with processing blocks 400A or 400B. Processing blocks 400A and 400B process echogenic datasets of the echogenic signals from process blocks 200 and 300 using point spread function algorithms to compensate for or otherwise suppress motion induced reverberations within the ROI echogenic data sets. Processing block 400A employs nonparametric analysis, and processing block 400B employs parametric analysis, as described in FIG. 9 below. Once motion artifacts are corrected, algorithm 120 continues with processing block 500 to segment image sections derived from the distortion-compensated data sets. At process block 600, areas of the segmented sections within 2D images and/or 3D volumes are determined. Thereafter, master algorithm 120 completes at process block 700, in which segmented structures within the static or moving ROI are displayed along with any segmented section area and/or volume measurements.



FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 5. Beginning from the entry point of master algorithm 120, sub-algorithm 150 starts at process block 152 wherein a metered volume of sonic gel is applied from the volume-controlled dispenser to the dermal surface believed to overlap the ROI. Thereafter, at process block 154, any gas pockets within the applied gel are expelled by a roller pressing action. Sub-algorithm 150 is then completed and exits to sub-algorithm 200.



FIG. 7 is an expansion of sub-algorithm 200 of FIG. 5. Entering from process block 154, sub-algorithm 200 starts at process block 202 wherein the transceiver dome 20 of transceivers 10A,B is placed into the gas-purged sonic gel to obtain a firm sonic coupling, and then at process block 206, pulsed frequency ultrasound is transmitted to the underlying ROI. Thereafter, at process block 210, ultrasound echoes from the ROI and any intervening structure are collected by the transceivers 10A,B and converted to the echogenic data sets for presentation of an image of the ROI. Once the image of the ROI is displayed, decision diamond 218 is reached with the query "Are structures of interest (SOI) sufficiently in view within the ROI?", and if the answer is negative, sub-algorithm 200 continues to process block 222 in which the transceiver is moved to a new anatomical location for re-routing to process block 202. If the answer to decision diamond 218 is affirmative for a sufficiently viewed SOI, sub-algorithm 200 continues to process block 226 in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 200 is then completed and exits to sub-algorithm 300.



FIG. 8 is an expansion of sub-algorithm 300 of the master algorithm illustrated in FIG. 5. Entering from decision diamond 170, sub-algorithm 300 begins in process block 302 by making a transceiver 10A,B-to-ROI sonic coupling similar to process block 202, transmitting pulsed frequency ultrasound at process block 306, and thereafter, at process block 310, acquiring ultrasound echoes, converting them to echogenic data sets, and presenting a currently displayed image "i" of the ROI, which is compared with any predecessor image "i−1" of the ROI, if available. Thereafter, at process block 314, pixel movement along the Cartesian axes is ascertained to determine the X- and Y-axis pixel center-of-optical flow, and similarly, at process block 318, pixel movement along the phi angle is ascertained to determine a rotational center-of-optical flow. Thereafter, at process block 322, optical flow velocity maps are computed to ascertain whether axial and rotational vectors exceed a pre-defined threshold OFR value. Once the velocity maps are obtained, decision diamond 326 is reached with the query "Does optical flow velocity map match the expected pattern for the structure being imaged?", and if the answer is negative, sub-algorithm 300 re-routes to process block 306 for retransmission of ultrasound to the ROI via the sonically coupled transceiver 10A,B. If the answer is affirmative for a matched velocity map and expected pattern of the structure being imaged, sub-algorithm 300 continues with process block 330 in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 300 is then completed and exits to sub-algorithms 400A and 400B.
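A minimal sketch of the velocity-map screening step of process blocks 314-322 follows. The flow field representation, the magnitude-weighted centroid, and the threshold OFR test are illustrative simplifications; all names are hypothetical and not taken from the specification.

```python
def analyze_flow(vx, vy, ofr):
    """Screen a per-pixel optical flow field between images i-1 and i.

    vx, vy: 2D lists of flow components per pixel.
    ofr: pre-defined threshold on flow vector magnitude.
    Returns the magnitude-weighted centroid (the center-of-optical-flow)
    and whether any vector exceeds the threshold OFR.
    (Hedged sketch; not the patent's exact implementation.)
    """
    total = cx = cy = 0.0
    exceeds = False
    for row in range(len(vx)):
        for col in range(len(vx[0])):
            mag = (vx[row][col] ** 2 + vy[row][col] ** 2) ** 0.5
            if mag > ofr:
                exceeds = True
            total += mag
            cx += col * mag
            cy += row * mag
    if total == 0:
        return None, exceeds    # static ROI: no center of flow
    return (cx / total, cy / total), exceeds
```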



FIG. 9 depicts expansions of sub-algorithms 400A and 400B of FIG. 5. Sub-algorithm 400A employs nonparametric pulse estimation and sub-algorithm 400B employs parametric pulse estimation. Sub-algorithm 400A describes an implementation of the CLEAN algorithm for reducing reverberation and noise in the ultrasound signals and comprises an RF line processing block 400A-2, a non-parametric pulse estimation block 400A-4, a CLEAN iteration block 400A-6, a decision diamond block 400A-8 having the query "STOP?", and a Scan Convert processing block 400A-10. The same algorithm is applied to each RF line in a scan plane, but each RF line uses its own unique estimate of the point spread function of the transducer (or pulse estimate). The algorithm is iterative, re-routing to the non-parametric pulse estimation block 400A-4: the point spread function is estimated, the CLEAN sub-algorithm is applied, and the pulse is then re-estimated from the output of the CLEAN sub-algorithm. The iterations are stopped after a maximum number of iterations is reached or when the changes in the signal are sufficiently small. Thereafter, once the iteration has stopped, the signals are converted for presentation as part of a scan plane image at process block 400A-10. Sub-algorithm 400A is then completed and exits to sub-algorithm 500.
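The iterative CLEAN loop described above can be sketched as follows. This is a simplified peak-subtraction variant for illustration, not the patent's exact implementation; the pulse (point spread function estimate) is assumed to be a short sampled kernel with unit peak, and the stopping rule mirrors the "maximum iterations or sufficiently small changes" criterion.

```python
def clean_rf_line(rf, pulse, gain=0.5, n_iter=50, tol=1e-3):
    """Illustrative CLEAN deconvolution of one RF line.

    rf: sampled RF line; pulse: the per-line pulse estimate (assumed
    to have its peak at the center sample with unit amplitude).
    Repeatedly find the strongest remaining echo, record a scaled
    reflector, and subtract a shifted copy of the pulse.
    """
    residual = list(rf)
    reflectors = [0.0] * len(rf)
    half = len(pulse) // 2
    for _ in range(n_iter):
        # locate the strongest remaining echo in the residual
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        amp = residual[peak]
        if abs(amp) < tol:       # changes sufficiently small: stop
            break
        reflectors[peak] += gain * amp
        # subtract the scaled, shifted pulse from the residual
        for k, p in enumerate(pulse):
            j = peak + k - half
            if 0 <= j < len(residual):
                residual[j] -= gain * amp * p
    return reflectors, residual
```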


Referring to sub-algorithm 400B, parametric analysis employs an implementation of the CLEAN algorithm that is not iterative. Sub-algorithm 400B comprises an RF line processing block 400B-2, a parametric pulse estimation block 400B-4, a CLEAN algorithm block 400B-6, a CLEAN iteration block 400B-8, and a Scan Convert processing block 400B-10. The point spread function of the transducer is estimated once and becomes a priori information used in the CLEAN algorithm. A single estimate of the pulse is applied to all RF lines in a scan plane and the CLEAN algorithm is applied once to each line. The signal output is then converted for presentation as part of a scan plane image at process block 400B-10. Sub-algorithm 400B is then completed and exits to sub-algorithm 500.



FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5. 3D data sets from processing blocks 400A-10 or 400B-10 of sub-algorithms 400A or 400B are entered at input data process block 504 and then undergo a 2-step image enhancement procedure at process block 506. The 2-step image enhancement includes applying a heat filter to reduce noise followed by a shock filter to sharpen edges of structures within the 3D data sets. The heat and shock filters are partial differential equations (PDE) defined respectively in Equations E1 and E2 below:






∂u/∂t = ∂²u/∂x² + ∂²u/∂y²  (Heat Filter)  E1

∂u/∂t = −F(l(u))∥∇u∥  (Shock Filter)  E2






Here u in the heat filter represents the image being processed. The image u is 2D and is comprised of an array of pixels arranged in rows along the x-axis and in columns along the y-axis. The pixel intensity of each pixel in the image u has an initial input image pixel intensity (I) defined as u0=I. The value of I depends on the application: I can occupy a low range of 0 to 1, middle ranges of 0 to 127 or 0 to 512, or higher ranges of 0 to 1024, 0 to 4096, or greater. For the shock filter, u likewise represents the image being processed, whose initial value is the input image pixel intensity (I): u0=I. The l(u) term is the Laplacian of the image u, F is a function of the Laplacian, and ∥∇u∥ is the 2D gradient magnitude of image intensity defined by equation E3:

∥∇u∥=√(ux²+uy²)  E3


where ux² is the square of the partial derivative of the pixel intensity u along the x-axis and uy² is the square of the partial derivative of u along the y-axis. The Laplacian l(u) of the image u is expressed in equation E4:

l(u)=uxxux²+2uxyuxuy+uyyuy²  E4


The terms of equations E2 and E4 are defined as follows:

    • ux is the first partial derivative ∂u/∂x of u along the x-axis,

    • uy is the first partial derivative ∂u/∂y of u along the y-axis,

    • ux² is the square of the first partial derivative of u along the x-axis,

    • uy² is the square of the first partial derivative of u along the y-axis,

    • uxx is the second partial derivative ∂²u/∂x² of u along the x-axis,

    • uyy is the second partial derivative ∂²u/∂y² of u along the y-axis,

    • uxy is the cross first partial derivative ∂²u/∂x∂y of u along the x and y axes, and

    • the sign of the function F modifies the Laplacian by the image gradient values, selected to avoid placing spurious edges at points with small gradient values:

F(l(u)) = 1, if l(u) > 0 and ∥∇u∥ > t

        = −1, if l(u) < 0 and ∥∇u∥ > t

        = 0, otherwise

    • where t is a threshold on the pixel gradient value ∥∇u∥.
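As a numerical illustration of equations E1 and E2, a single explicit time step of the heat filter and the gradient-gated sign function F of the shock filter can be sketched as below. The step size dt, the border handling, and the threshold t are illustrative choices, not values from the text.

```python
def heat_step(u, dt=0.1):
    """One explicit time step of the heat filter E1: u_t = u_xx + u_yy.

    u: 2D list of pixel intensities. Border pixels are left unchanged
    for simplicity (an assumed boundary condition).
    """
    rows, cols = len(u), len(u[0])
    out = [row[:] for row in u]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # discrete Laplacian via central differences
            lap = (u[y][x + 1] + u[y][x - 1]
                   + u[y + 1][x] + u[y - 1][x] - 4.0 * u[y][x])
            out[y][x] = u[y][x] + dt * lap
    return out

def shock_sign(laplacian, grad_mag, t=0.05):
    """Gradient-gated sign term F(l(u)) of the shock filter E2."""
    if grad_mag > t:
        if laplacian > 0:
            return 1.0
        if laplacian < 0:
            return -1.0
    return 0.0
```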





The combination of heat filtering and shock filtering produces an enhanced image ready to undergo the intensity-based and edge-based segmentation algorithms discussed below. The enhanced 3D data sets are then subjected to a parallel process of intensity-based segmentation at process block 510 and edge-based segmentation at process block 512. The intensity-based segmentation step uses a "k-means" intensity clustering technique wherein the enhanced image is subjected to a categorizing "k-means" clustering algorithm that categorizes pixel intensities into white, gray, and black pixel groups. Given the number of desired clusters or groups of intensities (k), the k-means algorithm is an iterative algorithm comprising four steps. First, cluster boundaries are initialized by defining minimum and maximum pixel intensity values for the white, gray, and black groups or k-clusters so that the k-clusters are equally spaced across the entire intensity range. Second, each pixel is assigned to one of the white, gray, or black k-clusters based on the currently set cluster boundaries. Third, a mean intensity is calculated for each pixel intensity k-cluster or group based on the current assignment of pixels into the different k-clusters; the calculated mean intensity is defined as a cluster center, and new cluster boundaries are then determined as mid points between cluster centers. The fourth and final step determines whether the cluster boundaries have changed significantly from their previous values; if so, the algorithm iterates back to the second step until the cluster centers do not change significantly between iterations. Visually, the clustering process is manifest in the segmented image, and repeated iterations continue until the segmented image does not change between the iterations.
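The four k-means steps above can be sketched on a 1D list of pixel intensities as follows (k = 3 for the white, gray, and black groups; the convergence tolerance is an assumed value):

```python
def kmeans_intensity(pixels, k=3, max_iter=100, tol=1e-6):
    """Illustrative k-means intensity clustering per the four steps above.

    Returns the final cluster boundaries and cluster centers.
    (Sketch with assumed tolerances, not the patent's exact code.)
    """
    lo, hi = min(pixels), max(pixels)
    # Step 1: boundaries equally spaced over the entire intensity range
    width = (hi - lo) / k
    bounds = [lo + i * width for i in range(1, k)]
    while max_iter:
        max_iter -= 1
        # Step 2: assign each pixel using the current boundaries
        clusters = [[] for _ in range(k)]
        for p in pixels:
            idx = sum(p > b for b in bounds)
            clusters[idx].append(p)
        # Step 3: cluster centers = mean intensity of each cluster
        centers = [sum(c) / len(c) for c in clusters if c]
        # new boundaries at midpoints between adjacent cluster centers
        new_bounds = [(centers[i] + centers[i + 1]) / 2
                      for i in range(len(centers) - 1)]
        # Step 4: stop when boundaries no longer change significantly
        if len(new_bounds) == len(bounds) and all(
                abs(a - b) < tol for a, b in zip(new_bounds, bounds)):
            break
        bounds = new_bounds
    return bounds, centers
```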


The pixels in the cluster having the lowest intensity value—the darkest cluster—are defined as pixels associated with internal cavity regions of bladders. For the 2D algorithm, each image is clustered independently of the neighboring images. For the 3D algorithm, the entire volume is clustered together. To make this step faster, pixels are down-sampled by a factor of 2, or any other multiple, before the cluster boundaries are determined. The cluster boundaries determined from the down-sampled data are then applied to the entire data.


The edge-based segmentation process block 512 uses a sequence of four sub-algorithms. The sequence includes a spatial gradients algorithm, a hysteresis threshold algorithm, a Region-of-Interest (ROI) algorithm, and a matching edges filter algorithm. The spatial gradient algorithm computes the x-directional and y-directional spatial gradients of the enhanced image. The hysteresis threshold algorithm detects salient edges. Once the edges are detected, the regions defined by the edges are selected by a user employing the ROI algorithm to select regions-of-interest deemed relevant for analysis.


Since the enhanced image has very sharp transitions, the edge points can be easily determined by taking x- and y-derivatives using backward differences along x- and y-directions. The pixel gradient magnitude ∥∇I∥ is then computed from the x- and y-derivative image in equation E5 as:

∥∇I∥=√(Ix²+Iy²)  E5


where Ix² is the square of the x-derivative of intensity and Iy² is the square of the y-derivative of intensity.


Significant edge points are then determined by thresholding the gradient magnitudes using a hysteresis thresholding operation. Other thresholding methods could also be used. In hysteresis thresholding, two threshold values, a lower threshold and a higher threshold, are used. First, the image is thresholded at the lower threshold value and a connected component labeling is carried out on the resulting image. Next, each connected edge component is preserved which has at least one edge pixel having a gradient magnitude greater than the upper threshold. This kind of thresholding scheme is good at retaining long connected edges that have one or more high gradient points.


In the preferred embodiment, the two thresholds are automatically estimated. The upper gradient threshold is estimated at a value such that at most 97% of the image pixels are marked as non-edges. The lower threshold is set at 50% of the value of the upper threshold. These percentages could be different in different implementations. Next, edge points that lie within a desired region-of-interest are selected. This region of interest algorithm excludes points lying at the image boundaries and points lying too close to or too far from the transceivers 10A,B. Finally, the matching edge filter is applied to remove outlier edge points and fill in the area between the matching edge points.
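The hysteresis thresholding scheme described above, with connected component labeling at the lower threshold and retention of components containing at least one pixel above the upper threshold, can be sketched as follows. The 4-connectivity and the flood-fill bookkeeping are illustrative choices; names are not taken from the specification.

```python
def hysteresis(mag, lower, upper):
    """Keep low-threshold edge components containing a strong pixel.

    mag: 2D list of gradient magnitudes; lower/upper: the two
    hysteresis thresholds. Returns a boolean mask of retained edges.
    """
    rows, cols = len(mag), len(mag[0])
    keep = [[False] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if seen[y][x] or mag[y][x] <= lower:
                continue
            # flood-fill one connected component of the low-threshold mask
            comp, stack, strong = [], [(y, x)], False
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                comp.append((cy, cx))
                if mag[cy][cx] > upper:
                    strong = True      # has a high-gradient edge pixel
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and mag[ny][nx] > lower):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if strong:                 # retain the whole connected edge
                for cy, cx in comp:
                    keep[cy][cx] = True
    return keep
```

This retains long connected edges that contain one or more high gradient points, as the text notes.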


The edge-matching algorithm is applied to establish valid boundary edges and remove spurious edges while filling the regions between boundary edges. Edge points on an image have a directional component indicating the direction of the gradient. Pixels in scanlines crossing a boundary edge location can exhibit two gradient transitions depending on the pixel intensity directionality. Each gradient transition is given a positive or negative value depending on the pixel intensity directionality. For example, if the scanline approaches an echo reflective bright wall from a darker region, then an ascending transition is established as the pixel intensity gradient increases to a maximum value, i.e., as the transition ascends from a dark region to a bright region. The ascending transition is given a positive numerical value. Similarly, as the scanline recedes from the echo reflective wall, a descending transition is established as the pixel intensity gradient decreases to or approaches a minimum value. The descending transition is given a negative numerical value.


Valid boundary edges are those that exhibit ascending and descending pixel intensity gradients, or equivalently, exhibit paired or matched positive and negative numerical values. The valid boundary edges are retained in the image. Spurious or invalid boundary edges do not exhibit paired ascending-descending pixel intensity gradients, i.e., do not exhibit paired or matched positive and negative numerical values. The spurious boundary edges are removed from the image.


For bladder cavity volumes, most edge points for blood fluid surround a dark, closed region, with directions pointing inwards towards the center of the region. Thus, for a convex-shaped region, for any edge point, the edge point having a gradient direction approximately opposite to that of the current point represents the matching edge point. Those edge points exhibiting an assigned positive and negative value are kept as valid edge points on the image because the negative value is paired with its positive value counterpart. Similarly, those edge point candidates having unmatched values, i.e., those edge point candidates not having a negative-positive value pair, are deemed not to be true or valid edge points and are discarded from the image.


The matching edge point algorithm delineates edge points not lying on the boundary for removal from the desired dark regions. Thereafter, the region between any two matching edge points is filled in with non-zero pixels to establish edge-based segmentation. In a preferred embodiment of the invention, only edge points whose directions are primarily oriented co-linearly with the scanline are sought, to permit the detection of matching front wall and back wall pairs of a cavity, for example a bladder or the left or right heart ventricle.
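A one-scanline sketch of the matching-edges step follows: ascending and descending transitions are assigned +1 and −1, unmatched transitions are discarded as spurious, and the region between each matched pair is filled with non-zero pixels. The signed-gradient input and the threshold are assumptions for illustration.

```python
def match_and_fill(gradient, t=0.0):
    """Match ascending/descending transitions along one scanline.

    gradient: signed intensity gradient samples along the scanline.
    Returns a mask with 1s filling each matched wall pair, 0 elsewhere.
    (Hedged sketch, not the patent's exact implementation.)
    """
    n = len(gradient)
    # +1 for ascending (dark-to-bright), -1 for descending transitions
    signs = [1 if g > t else (-1 if g < -t else 0) for g in gradient]
    filled = [0] * n
    i = 0
    while i < n:
        if signs[i] == 1:                    # ascending transition
            for j in range(i + 1, n):
                if signs[j] == -1:           # matching descending edge
                    for k in range(i, j + 1):
                        filled[k] = 1        # fill between the pair
                    i = j
                    break
            # an ascent with no matching descent is spurious: discarded
        i += 1
    return filled
```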


Referring again to FIG. 10, results from the respective segmentation procedures are then combined at process block 514 and subsequently undergo a cleanup algorithm process at process block 516. The combining process of block 514 uses a pixel-wise Boolean AND operator step to produce a segmented image by computing the pixel intersection of two images. The Boolean AND operation represents the pixels of each scan plane of the 3D data sets as binary numbers, with the assignment of an intersection value as a binary 1 or 0 for the combination of any two pixels. For example, consider any two pixels, say pixelA and pixelB, which can have a 1 or 0 as assigned values. If pixelA's value is 1 and pixelB's value is 1, the assigned intersection value of pixelA and pixelB is 1. If the binary values of pixelA and pixelB are both 0, or if either pixelA or pixelB is 0, then the assigned intersection value of pixelA and pixelB is 0. The Boolean AND operation takes any two binary digital images as input, and outputs a third image with the pixel values made equivalent to the intersection of the two input images.
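The pixel-wise Boolean AND combination described above reduces to an elementwise intersection of two binary masks, for example:

```python
def combine_masks(mask_a, mask_b):
    """Pixel-wise Boolean AND of two binary segmentation masks.

    A pixel is 1 in the output only where both input masks are 1
    (the intersection of the intensity-based and edge-based results).
    """
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]
```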


After combining the segmentation results, the combined pixel information in the 3D data sets is cleaned at process block 516 to make the output image smooth and to remove extraneous structures not relevant to bladder cavities. Cleanup 516 includes filling gaps with pixels and removing pixel groups unlikely to be related to the ROI undergoing study, for example pixel groups unrelated to bladder cavity structures. Sub-algorithm 500 is then completed and exits to sub-algorithm 600.



FIG. 11 depicts a logarithm of a Cepstrum. The Cepstrum is used in sub-algorithm 400A for the pulse estimation via application of point spread functions to the echogenic data sets generated by the transceivers 10A,B.



FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis. FIG. 12A is a measured plot. FIG. 12B is a nonparametric pulse estimated pattern derived from sub-algorithm 400A. FIG. 12C is a parametric pulse estimated pattern derived from sub-algorithm 400B.



FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10.



FIG. 13 is an unprocessed image that will undergo image enhancement processing.



FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13.



FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region within the circle inset has more contrast than the unprocessed image of FIGS. 13 and 14.



FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400B. Here the circle inset is in the echogenic musculature region encircling the bladder and is shown with enhanced contrast and clarity compared with the magnified, unprocessed image of FIG. 14.



FIG. 17 is the resultant image of an alternate image-processing embodiment using a Wiener filter. The Wiener-filtered image has neither the clarity nor the contrast in the low echogenic bladder region of FIG. 15 (compare circle insets).



FIG. 18 is another unprocessed image that will undergo image enhancement processing.



FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18.



FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 19.



FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400B. The low echogenic region is darker and the echogenic regions are brighter, with enhanced contrast and clarity compared with the magnified, unprocessed image of FIG. 19.



FIG. 22 is another unprocessed image that will undergo image enhancement processing.



FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22.



FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 23.



FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400B. The low echogenic region is darker and the echogenic regions are brighter, with enhanced contrast and clarity compared with the magnified, unprocessed image of FIG. 23.



FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310.



FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310.



FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required.



FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310.



FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310.



FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis or radial direction after undergoing sub-algorithm 310.



FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29.



FIG. 33 illustrates a 3D optical vector plot in the phi direction after undergoing sub-algorithm 310 and corresponds to FIG. 30 at threshold value T=1.



FIG. 34 illustrates a 3D optical vector plot in the radial direction after undergoing sub-algorithm 310 and corresponds to FIG. 31 at T=1.



FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310, and corresponds to FIG. 34 in which values below the threshold T of 0.6 are set to 0.



FIGS. 36A-G depicts embodiments of the sonic gel dispenser.



FIG. 36A illustrates the metered dispensing of sonic gel by calibrated rotation of a compressing wheel. The peristaltic mechanism using the compressing wheel is shown in a partial side view.



FIG. 36B illustrates in cross-section the inside of the dispenser, showing a collapsible bag that is engaged by the compressing wheel. As more rotation is conveyed to the compressing wheel, the bag progressively collapses.



FIG. 36C illustrates an alternative embodiment employing compression by hand gripping.



FIG. 36D illustrates an alternative embodiment employing push button or lever compression to dispense metered quantities of sonic gel.



FIG. 36E illustrates an alternative embodiment employing air valves to limit re-gassing of internal sonic gel volume stores within the sonic gel dispenser. The valve is pinched closed when the gripping or compressing wheel pressure is lessened, and springs open when the gripping or compressing wheel pressure is increased to allow sonic gel to be dispensed.



FIG. 36F illustrates a side, cross-sectional view of the gel dispensing system that includes a pre-packaged collapsible bottle with a refill bag, a bottle holder that positions the pre-packaged bottle for use, and a sealed tip that may be clipped open.



FIG. 36G illustrates a side view of the pre-packaged collapsible bottle of FIG. 36F. A particular embodiment includes an eight ounce squeeze bottle.



FIGS. 37-46 concern cannula insertion as viewed by ultrasonic systems, in which cannula motion detection during insertion is optimized with method algorithms directed to detecting a moving cannula fitted with echogenic ultrasound micro reflectors.


An embodiment related to cannula insertion generally includes an ultrasound probe attached to a first camera and a second camera and a processing and display generating system that is in signal communication with the ultrasound probe, the first camera, and/or the second camera. A user of the system scans tissue containing a target vein using the ultrasound probe and a cross-sectional image of the target vein is displayed. The first camera records a first image of a cannula in a first direction and the second camera records a second image of the cannula in a second direction orthogonal to the first direction. The first and/or the second images are processed by the processing and display generating system along with the relative positions of the ultrasound probe, the first camera, and/or the second camera to determine the trajectory of the cannula. A representation of the determined trajectory of the cannula is then displayed on the ultrasound image.



FIG. 37 is a diagram illustrating a side view of one embodiment of the present invention. A two-dimensional (2D) ultrasound probe 1010 is attached to a first camera 1014 that takes images in a first direction. The ultrasound probe 1010 is also attached to a second camera 1018 via a member 1016. In other embodiments, the member 1016 may link the first camera 1014 to the second camera 1018 or the member 1016 may be absent, with the second camera 1018 being directly attached to a specially configured ultrasound probe. The second camera 1018 is oriented such that the second camera 1018 takes images in a second direction that is orthogonal to the first direction of the images taken by the first camera 1014. The placement of the cameras 1014, 1018 may be such that they can both take images of a cannula 1020 when the cannula 1020 is placed before the cameras 1014, 1018. A needle may also be used in place of a cannula. The cameras 1014, 1018 and the ultrasound probe 1010 are geometrically interlocked such that the cannula 1020 trajectory can be related to an ultrasound image. In FIG. 37, the second camera 1018 is behind the cannula 1020 when looking into the plane of the page. The cameras 1014, 1018 take images at a rapid frame rate of approximately 30 frames per second. The ultrasound probe 1010 and/or the cameras 1014, 1018 are in signal communication with a processing and display generating system 1061.


First, a user employs the ultrasound probe 1010 and the processing and display generating system 1061 to generate a cross-sectional image of a patient's arm tissue containing a vein to be cannulated (“target vein”) 1019. This could be done by one of the methods disclosed in the related patents and/or patent applications which are herein incorporated by reference, for example. The user then identifies the target vein 1019 in the image using methods such as simple compression, which differentiates between arteries and veins by using the fact that veins collapse easily while arteries do not. After the user has identified the target vein 1019, the ultrasound probe 1010 is affixed to the patient's arm over the previously identified target vein 1019 using a magnetic tape material 1012. The ultrasound probe 1010 and the processing and display generating system 1061 continue to generate a 2D cross-sectional image of the tissue containing the target vein 1019. Images from the cameras 1014, 1018 are provided to the processing and display generating system 1061 as the cannula 1020 is approaching and/or entering the arm of the patient.


The processing and display generating system 1061 locates the cannula 1020 in the images provided by the cameras 1014, 1018 and determines the projected location at which the cannula 1020 will penetrate the cross-sectional ultrasound image being displayed. The trajectory of the cannula 1020 is determined in some embodiments by using image processing to identify bright spots corresponding to micro reflectors previously machined into the shaft of the cannula 1020 or a needle used alone or in combination with the cannula 1020. Image processing uses the bright spots to determine the angles of the cannula 1020 relative to the cameras 1014, 1018 and then generates a projected trajectory by using the determined angles and/or the known positions of the cameras 1014, 1018 in relation to the ultrasound probe 1010. In other embodiments, determination of the cannula 1020 trajectory is performed using edge-detection algorithms in combination with the known positions of the cameras 1014, 1018 in relation to the ultrasound probe 1010, for example.
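The bright-spot stage of this processing can be sketched as a simple threshold-and-locate pass. The following is a minimal sketch, not the patented implementation; the toy frame contents and the 200-count threshold are illustrative assumptions:

```python
import numpy as np

# Hypothetical 8-bit camera frame (illustrative values): a dark background
# with a few saturated pixels where the micro reflectors return the light.
frame = np.zeros((8, 12), dtype=np.uint8)
frame[2, 3] = frame[3, 6] = frame[4, 9] = 250   # bright reflector returns
frame[5, 1] = 40                                 # dim background speckle

# Threshold well above the background level to keep only reflector returns,
# then collect the (row, col) coordinates of the surviving pixels.
ys, xs = np.nonzero(frame > 200)
spots = list(zip(ys.tolist(), xs.tolist()))
print(spots)  # the three reflector pixels; the dim speckle is rejected
```

In practice the detected spots would feed the angle estimation described above; a real frame would also need centroiding of multi-pixel blobs rather than single-pixel lookups.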


The projected location may be indicated on the displayed image as a computer-generated cross-hair 1066, the intersection of which is where the cannula 1020 is projected to penetrate the image. When the cannula 1020 does penetrate the cross-sectional plane of the scan produced by the ultrasound probe 1010, the ultrasound image confirms that the cannula 1020 penetrated at the location of the cross-hair 1066. This gives the user a real-time ultrasound image of the target vein 1019 with an overlaid real-time computer-generated image of the position in the ultrasound image that the cannula 1020 will penetrate. This allows the user to adjust the location and/or angle of the cannula 1020 before and/or during insertion to increase the likelihood they will penetrate the target vein 1019. Risks of pneumothorax and other adverse outcomes should be substantially reduced since a user will be able to use normal “free” insertion procedures but have the added knowledge of knowing where the cannula 1020 trajectory will lead.



FIG. 38 is a diagram illustrating a top view of the embodiment shown in FIG. 37. It is more easily seen from this view that the second camera 1018 is positioned behind the cannula 1020. The positioning of the cameras 1014, 1018 relative to the cannula 1020 allows the cameras 1014, 1018 to capture images of the cannula 1020 from two different directions, thus making it easier to determine the trajectory of the cannula 1020.



FIG. 39 is a diagram showing additional detail for a needle shaft 1022 to be used with one embodiment of the invention. The needle shaft 1022 includes a plurality of micro corner reflectors 1024. The micro corner reflectors 1024 are cut into the needle shaft 1022 at defined intervals Δl in symmetrical patterns about the circumference of the needle shaft 1022. The micro corner reflectors 1024 could be cut with a laser, for example.



FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft 1022 shown in FIG. 39. FIG. 40A shows a first input ray with a first incident angle of approximately 90° striking one of the micro corner reflectors 1024 on the needle shaft 1022. A first output ray is shown exiting the micro corner reflector 1024 in a direction toward the source of the first input ray. FIG. 40B shows a second input ray with a second incident angle other than 90° striking a micro corner reflector 1025 on the needle shaft 1022. A second output ray is shown exiting the micro corner reflector 1025 in a direction toward the source of the second input ray. FIGS. 40A and 40B illustrate that the micro corner reflectors 1024, 1025 are useful because they tend to reflect an output ray in the direction from which an input ray originated.
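The retro-reflective behavior shown in FIGS. 40A and 40B follows from mirror geometry: in a 2D cross-section, successive reflections off the two perpendicular faces of a corner reflector negate both components of the ray direction, so the output ray is antiparallel to the input ray regardless of incident angle. A minimal sketch (the direction vector is illustrative):

```python
import numpy as np

def reflect(d, n):
    """Reflect direction vector d off a surface with unit normal n."""
    return d - 2 * np.dot(d, n) * n

# Two perpendicular faces of a corner reflector (2D cross-section).
n1 = np.array([1.0, 0.0])
n2 = np.array([0.0, 1.0])

# An incoming ray at an arbitrary oblique angle (unit direction).
d_in = np.array([-0.8, -0.6])

# Successive reflection off both faces reverses the direction exactly,
# sending the output ray back toward the source of the input ray.
d_out = reflect(reflect(d_in, n1), n2)
print(d_out)  # antiparallel to d_in
```

This is why the reflectors appear as strong bright dots to a camera placed next to the light source, as described for FIG. 41.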



FIG. 41 is a diagram showing imaging components for use with the needle shaft 1022 shown in FIG. 39 in accordance with one embodiment of the invention. The imaging components are shown to include a first light source 1026, a second light source 1028, a lens 1030, and a sensor chip 1032. The first and/or second light sources 1026, 1028 may be light emitting diodes (LEDs), for example. In an example embodiment, the light sources 1026, 1028 are infra-red LEDs. Use of an infra-red source is advantageous because it is not visible to the human eye, but when an image of the needle shaft 1022 is recorded, the image will show strong bright dots where the micro corner reflectors 1024 are located because silicon sensor chips are sensitive to infra-red light and the micro corner reflectors 1024 tend to reflect output rays in the direction from which input rays originate, as discussed with reference to FIGS. 40A and 40B. In alternative embodiments, a single light source may be used. Although not shown, the sensor chip 1032 is encased in a housing behind the lens 1030 and the sensor chip 1032 and light sources 1026, 1028 are in electrical communication with the processing and display generating system 1061. The sensor chip 1032 and/or the lens 1030 form a part of the first and second cameras 1014, 1018 in some embodiments. In an example embodiment, the light sources 1026, 1028 are pulsed on at the time the sensor chip 1032 captures an image. In other embodiments, the light sources 1026, 1028 are left on during video image capture.



FIG. 42 is a diagram showing a representation of an image 1034 produced by the imaging components shown in FIG. 41. The image 1034 may include a needle shaft image 1036 that corresponds to a portion of the needle shaft 1022 shown in FIG. 41. The image 1034 also may include a series of bright dots 1038 running along the center of the needle shaft image 1036 that correspond to the micro corner reflectors 1024 shown in FIG. 41. A center line 1040 is shown in FIG. 42 to illustrate how an angle theta (θ) could be obtained by image processing to recognize the bright dots 1038 and determine a line through them. The angle theta represents the degree to which the needle shaft 1022 is inclined with respect to a reference line 1042 that is related to the fixed position of the sensor chip 1032.
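One way to obtain the angle theta is a least-squares line fit through the detected dot centroids. This is a minimal sketch under that assumption; the pixel coordinates below are hypothetical stand-ins for the bright dots 1038, not values from the patent:

```python
import numpy as np

# Hypothetical pixel centroids of the bright dots detected along the
# needle shaft image (illustrative coordinates).
dots = np.array([[12.0, 40.2], [24.0, 44.1], [36.0, 47.9], [48.0, 52.0]])

# Least-squares fit of a line y = m*x + b through the dot centroids,
# playing the role of the center line 1040 in FIG. 42.
m, b = np.polyfit(dots[:, 0], dots[:, 1], 1)

# Inclination of the shaft relative to a horizontal reference line fixed
# by the sensor chip, analogous to reference line 1042.
theta = np.degrees(np.arctan(m))
print(round(theta, 1))
```

Fitting through several dots rather than using a single pair makes the angle estimate less sensitive to centroiding noise on any one reflector.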



FIG. 43 is a system diagram of an embodiment of the present invention and shows additional detail for the processing and display generating system 1061 in accordance with an example embodiment of the invention. The ultrasound probe 1010 is shown connected to the processing and display generating system 1061 via M control lines and N data lines. The M and N variables are used for convenience and simply indicate that each connection may be composed of one or more transmission paths. The control lines allow the processing and display generating system 1061 to direct the ultrasound probe 1010 to properly perform an ultrasound scan, and the data lines allow responses from the ultrasound scan to be transmitted to the processing and display generating system 1061. The first and second cameras 1014, 1018 are also each shown to be connected to the processing and display generating system 1061 via N lines. Although the same variable N is used, it simply indicates that one or more lines may be present, not that each device labeled with N lines has the same number of lines.


The processing and display generating system 1061 is composed of a display 1064 and a block 1062 containing a computer, a digital signal processor (DSP), and analog-to-digital (A/D) converters. As discussed for FIG. 37, the display 1064 will display a cross-sectional ultrasound image. The computer-generated cross-hair 1066 is shown over a representation of a cross-sectional view of the target vein 1019 in FIG. 43. The cross-hair 1066 consists of an x-crosshair 1068 and a z-crosshair 1070. The DSP and the computer in the block 1062 use images from the first camera 1014 to determine the plane in which the cannula 1020 will penetrate the ultrasound image and then write the z-crosshair 1070 on the ultrasound image provided to the display 1064. Similarly, the DSP and the computer in the block 1062 use images from the second camera 1018, which are orthogonal to the images provided by the first camera 1014 as discussed for FIG. 37, to write the x-crosshair 1068 on the ultrasound image.
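Each camera view constrains one coordinate of the penetration point: extrapolating the shaft line seen in one view out to the scan plane gives one crosshair coordinate, and the orthogonal view gives the other. The sketch below illustrates that extrapolation; the function name and all distances and angles are illustrative assumptions, not values from the patent:

```python
import math

def plane_crossing(p_along, p_across, angle_deg, plane_along=0.0):
    """Extrapolate a straight shaft to the ultrasound scan plane.

    p_along:   observed shaft point, distance along the insertion axis (mm)
    p_across:  the same point's coordinate across that axis (mm)
    angle_deg: shaft inclination measured in this camera's view
    Returns the across-axis coordinate where the shaft meets the plane.
    """
    return p_across + (plane_along - p_along) * math.tan(math.radians(angle_deg))

# Hypothetical measurements: the side-view camera sees a shaft point 20 mm
# from the scan plane at 5 mm depth, inclined 30 degrees downward; this
# fixes one crosshair coordinate. The top-view camera, whose images are
# orthogonal, fixes the other coordinate the same way.
z_cross = plane_crossing(p_along=20.0, p_across=5.0, angle_deg=-30.0)
x_cross = plane_crossing(p_along=20.0, p_across=2.0, angle_deg=0.0)
print(round(z_cross, 2), round(x_cross, 2))
```

Because the two views are orthogonal, the two extrapolations are independent, which is what lets the z-crosshair and x-crosshair be written from the two camera feeds separately.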



FIG. 44 is a system diagram of an example embodiment showing additional detail for the block 1062 shown in FIG. 43. The block 1062 includes a first A/D converter 1080, a second A/D converter 1082, and a third A/D converter 1084. The first A/D converter 1080 receives signals from the ultrasound probe 1010 and converts them to digital information that is provided to a DSP 1086. The second and third A/D converters 1082, 1084 receive signals from the first and second cameras 1014, 1018, respectively, and convert the signals to digital information that is provided to the DSP 1086. In alternative embodiments, some or all of the A/D converters are not present. For example, video from the cameras 1014, 1018 may be provided to the DSP 1086 directly in digital form rather than being created in analog form and passed through the A/D converters 1082, 1084. The DSP 1086 is in data communication with a computer 1088 that includes a central processing unit (CPU) 1090 in data communication with a memory component 1092. The computer 1088 is in signal communication with the ultrasound probe 1010 and is able to control the ultrasound probe 1010 using this connection. The computer 1088 is also connected to the display 1064 and produces a video signal used to drive the display 1064.



FIG. 45 is a flowchart of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention. First, at a block 1200, an ultrasound image of a vein cross-section is produced and/or displayed. Next, at a block 1210, the trajectory of a cannula is determined. Then, at a block 1220, the determined trajectory of the cannula is displayed on the ultrasound image.



FIG. 46 is a flowchart showing additional detail for the block 1210 depicted in FIG. 45. The block 1210 includes a block 1212 where a first image of a cannula is recorded using a first camera. Next, at a block 1214, a second image of the cannula orthogonal to the first image of the cannula is recorded using a second camera. Then, at a block 1216, the first and second images are processed to determine the trajectory of the cannula.


While the preferred embodiment of the invention has been illustrated and described, as noted above, many changes can be made without departing from the spirit and scope of the invention. For example, a three dimensional ultrasound system could be used rather than a 2D system. In addition, different numbers of cameras could be used along with image processing that determines the cannula 1020 trajectory based on the number of cameras used. The two cameras 1014, 1018 could also be placed in a non-orthogonal relationship so long as the image processing was adjusted to properly determine the orientation and/or projected trajectory of the cannula 1020. Also, an embodiment of the invention could be used for needles and/or other devices which are to be inserted in the body of a patient. Additionally, an embodiment of the invention could be used in places other than arm veins. Regions of the patient's body other than an arm could be used and/or biological structures other than veins may be the focus of interest. As regards ultrasound-based algorithms, alternate embodiments may be configured to image acquisitions other than ultrasound, for example X-ray, visible and infrared light acquired images. Accordingly, the scope of the invention is not limited by the disclosure of the preferred embodiment.

Claims
  • 1. A system to improve image clarity in ultrasound images comprising: an ultrasound transducer connected with a microprocessor configured to collect and process signals of at least one of a fundamental and harmonic ultrasound echoes returning from at least two ultrasound-based images from a scanned region-of-interest based upon a change in pixel movement revealed by information contained in optical flow velocity maps pertaining to the scanned region-of-interest between the at least two ultrasound images, and a computer executable point spread function algorithm having instructions configured to suppress motion induced reverberations attributable to the change in pixel movement, wherein motion sections are compensated with the stationary sections within the scanned region of interest.
  • 2. A method to improve image clarity in ultrasound images comprising: acquiring at least two ultrasound images from a region of interest of a subject derived from at least one of a fundamental and harmonic ultrasound echoes; segmenting the region of interest; determining the motion and still sections of the region of interest; and compensating for the motion section within the region of interest based upon applying a point spread function algorithm having computer executable instructions to suppress induced reverberations attributable to a change in pixel movement revealed by information contained in optical flow velocity maps pertaining to the region-of-interest between the at least two ultrasound images.
RELATED APPLICATIONS

This application incorporates by reference and claims priority to U.S. provisional patent application Ser. No. 11/625,805 filed Jan. 22, 2007. This application incorporates by reference and claims priority to U.S. provisional patent application Ser. No. 60/882,888 filed Dec. 29, 2006. This application incorporates by reference and claims priority to U.S. provisional patent application Ser. No. 60/828,614 filed Oct. 6, 2006. This application incorporates by reference and claims priority to U.S. provisional patent application Ser. No. 60/760,677 filed Jan. 20, 2006. This application incorporates by reference and claims priority to U.S. provisional patent application Ser. No. 60/778,634 filed Mar. 1, 2006. This application is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 11/213,284 filed Aug. 26, 2005. This application claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 11/119,355 filed Apr. 29, 2005, which claims priority to U.S. provisional patent application Ser. No. 60/566,127 filed Apr. 30, 2004. This application also claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 10/701,955 filed Nov. 5, 2003, which in turn claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 10/443,126 filed May 20, 2003. This application claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 11/061,867 filed Feb. 17, 2005, which claims priority to U.S. provisional patent application Ser. No. 60/545,576 filed Feb. 17, 2004 and U.S. provisional patent application Ser. No. 60/566,818 filed Apr. 30, 2004. This application is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 11/222,360 filed Sep. 8, 2005. This application is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 11/061,867 filed Feb. 17, 2005. 
This application is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 10/704,966 filed Nov. 10, 2004. This application claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 10/607,919 filed Jun. 27, 2005. This application is a continuation-in-part of and claims priority to PCT application serial number PCT/US03/24368 filed Aug. 1, 2003, which claims priority to U.S. provisional patent application Ser. No. 60/423,881 filed Nov. 5, 2002 and U.S. provisional patent application Ser. No. 60/400,624 filed Aug. 2, 2002. This application is also a continuation-in-part of and claims priority to PCT Application Serial No. PCT/US03/14785 filed May 9, 2003, which is a continuation of U.S. patent application Ser. No. 10/165,556 filed Jun. 7, 2002. This application is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 10/888,735 filed Jul. 9, 2004. This application is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 10/633,186 filed Jul. 31, 2003 which claims priority to U.S. provisional patent application Ser. No. 60/423,881 filed Nov. 5, 2002 and to U.S. patent application Ser. No. 10/443,126 filed May 20, 2003 which claims priority to U.S. provisional patent application Ser. No. 60/423,881 filed Nov. 5, 2002 and to U.S. provisional application 60/400,624 filed Aug. 2, 2002. All of the above applications are incorporated by reference in their entirety as if fully set forth herein.

US Referenced Citations (235)
Number Name Date Kind
3613069 Cary, Jr. et al. Oct 1971 A
4431007 Amazeen et al. Feb 1984 A
4556066 Semrow Dec 1985 A
4757821 Snyder Jul 1988 A
4771205 Mequio Sep 1988 A
4821210 Rumbaugh Apr 1989 A
4844080 Frass et al. Jul 1989 A
4926871 Ganguly et al. May 1990 A
5058591 Companion et al. Oct 1991 A
5060515 Kanda et al. Oct 1991 A
5078149 Katsumata et al. Jan 1992 A
5125410 Misono et al. Jun 1992 A
5148809 Biegeleisen-Knight et al. Sep 1992 A
5151856 Halmann et al. Sep 1992 A
5159931 Pini Nov 1992 A
5197019 Delon-Martin et al. Mar 1993 A
5235985 McMorrow et al. Aug 1993 A
5265614 Hayakawa et al. Nov 1993 A
5299577 Brown et al. Apr 1994 A
5381794 Tei et al. Jan 1995 A
5432310 Stoeger Jul 1995 A
5435310 Sheehan et al. Jul 1995 A
5465721 Kishimoto et al. Nov 1995 A
5473555 Potter Dec 1995 A
5487388 Rello et al. Jan 1996 A
5503152 Oakley et al. Apr 1996 A
5503153 Liu et al. Apr 1996 A
5526816 Arditi Jun 1996 A
5553618 Suzuki et al. Sep 1996 A
5575286 Weng et al. Nov 1996 A
5575291 Hayakawa et al. Nov 1996 A
5577506 Dias Nov 1996 A
5588435 Weng et al. Dec 1996 A
5601084 Sheehan et al. Feb 1997 A
5605155 Chalana et al. Feb 1997 A
5615680 Sano Apr 1997 A
5644513 Rudin et al. Jul 1997 A
5645077 Foxlin Jul 1997 A
5697525 O'Reilly et al. Dec 1997 A
5698549 Steers et al. Dec 1997 A
5724101 Haskin Mar 1998 A
5735282 Hossack Apr 1998 A
5738097 Beach et al. Apr 1998 A
5776063 Dittrich et al. Jul 1998 A
5782767 Pretlow, III Jul 1998 A
5806521 Morimoto et al. Sep 1998 A
5841889 Seyed-Bolorforosh Nov 1998 A
5846202 Ramamurthy et al. Dec 1998 A
5851186 Wood et al. Dec 1998 A
5873829 Kamiyama et al. Feb 1999 A
5892843 Zhou et al. Apr 1999 A
5898793 Karron et al. Apr 1999 A
5903664 Hartley et al. May 1999 A
5908390 Matsushima Jun 1999 A
5913823 Hedberg et al. Jun 1999 A
5928151 Hossack et al. Jul 1999 A
5945770 Hanafy Aug 1999 A
5964710 Ganguly et al. Oct 1999 A
5971923 Finger Oct 1999 A
5972023 Tanner et al. Oct 1999 A
5980459 Chiao et al. Nov 1999 A
5993390 Savord et al. Nov 1999 A
6008813 Lauer et al. Dec 1999 A
6030344 Guracar et al. Feb 2000 A
6042545 Hossack et al. Mar 2000 A
6048312 Ishrak et al. Apr 2000 A
6063033 Haider et al. May 2000 A
6064906 Langberg et al. May 2000 A
6071242 Lin Jun 2000 A
6102858 Hatfield et al. Aug 2000 A
6106465 Napolitano et al. Aug 2000 A
6110111 Barnard Aug 2000 A
6117080 Schwartz Sep 2000 A
6122538 Sliwa, Jr. et al. Sep 2000 A
6123669 Kanda Sep 2000 A
6126598 Entrekin et al. Oct 2000 A
6142942 Clark Nov 2000 A
6146330 Tujino et al. Nov 2000 A
6148095 Prause et al. Nov 2000 A
6151404 Pieper Nov 2000 A
6159150 Yale et al. Dec 2000 A
6171248 Hossack et al. Jan 2001 B1
6193657 Drapkin Feb 2001 B1
6200266 Shokrollahi et al. Mar 2001 B1
6210327 Brackett et al. Apr 2001 B1
6213949 Ganguly et al. Apr 2001 B1
6213951 Krishnan et al. Apr 2001 B1
6222948 Hossack et al. Apr 2001 B1
6233480 Hochman et al. May 2001 B1
6238344 Gamelsky et al. May 2001 B1
6248070 Kanda et al. Jun 2001 B1
6254539 Pang et al. Jul 2001 B1
6264609 Herrington et al. Jul 2001 B1
6272469 Koritzinsky et al. Aug 2001 B1
6277073 Bolorforosh et al. Aug 2001 B1
6286513 Au et al. Sep 2001 B1
6302845 Shi et al. Oct 2001 B2
6309353 Cheng et al. Oct 2001 B1
6325758 Carol et al. Dec 2001 B1
6338716 Hossack et al. Jan 2002 B1
6343936 Kaufman et al. Feb 2002 B1
6346124 Geiser et al. Feb 2002 B1
6350239 Ohad et al. Feb 2002 B1
6359190 Ter-Ovanesyan et al. Mar 2002 B1
6360027 Hossack et al. Mar 2002 B1
6375616 Soferman et al. Apr 2002 B1
6400848 Gallagher Jun 2002 B1
6402762 Hunter et al. Jun 2002 B2
6406431 Barnard et al. Jun 2002 B1
6409665 Scott et al. Jun 2002 B1
6440071 Slayton et al. Aug 2002 B1
6440072 Schuman et al. Aug 2002 B1
6443894 Sumanaweera et al. Sep 2002 B1
6468218 Chen et al. Oct 2002 B1
6485423 Angelsen et al. Nov 2002 B2
6491631 Chiao et al. Dec 2002 B2
6494841 Thomas et al. Dec 2002 B1
6503204 Sumanaweera et al. Jan 2003 B1
6511325 Lalka et al. Jan 2003 B1
6511426 Hossack et al. Jan 2003 B1
6511427 Sliwa, Jr. et al. Jan 2003 B1
6515657 Zanelli Feb 2003 B1
6524249 Moehring et al. Feb 2003 B2
6535759 Epstein et al. Mar 2003 B1
6540679 Slayton et al. Apr 2003 B2
6544179 Schmiesing et al. Apr 2003 B1
6545678 Ohazama Apr 2003 B1
6551246 Ustuner et al. Apr 2003 B1
6565512 Ganguly et al. May 2003 B1
6569097 McMorrow et al. May 2003 B1
6569101 Quistgaard et al. May 2003 B2
6575907 Soferman et al. Jun 2003 B1
6585647 Winder Jul 2003 B1
6610013 Fenster et al. Aug 2003 B1
6611141 Schulz et al. Aug 2003 B1
6622560 Song et al. Sep 2003 B2
6628743 Drummond et al. Sep 2003 B1
6643533 Knoplioch et al. Nov 2003 B2
6650927 Keidar Nov 2003 B1
6676605 Barnard et al. Jan 2004 B2
6682473 Matsuura et al. Jan 2004 B1
6688177 Wiesauer Feb 2004 B2
6695780 Nahum et al. Feb 2004 B1
6705993 Ebbini et al. Mar 2004 B2
6716175 Geiser et al. Apr 2004 B2
6752762 DeJong et al. Jun 2004 B1
6755787 Hossack et al. Jun 2004 B2
6768811 Dinstein et al. Jul 2004 B2
6780152 Ustuner et al. Aug 2004 B2
6788620 Shiraishi et al. Sep 2004 B2
6801643 Pieper Oct 2004 B2
6822374 Smith et al. Nov 2004 B1
6825838 Smith et al. Nov 2004 B2
6831394 Baumgartner et al. Dec 2004 B2
6868594 Gururaja Mar 2005 B2
6884217 McMorrow et al. Apr 2005 B2
6903813 Jung et al. Jun 2005 B2
6905467 Bradley et al. Jun 2005 B2
6905468 McMorrow et al. Jun 2005 B2
6911912 Roe Jun 2005 B2
6936009 Venkataramani et al. Aug 2005 B2
6939301 Abdelhak Sep 2005 B2
6951540 Ebbini et al. Oct 2005 B2
6954406 Jones Oct 2005 B2
6961405 Scherch Nov 2005 B2
6962566 Quistgaard et al. Nov 2005 B2
6970091 Roe Nov 2005 B2
7004904 Chalana et al. Feb 2006 B2
7025725 Dione et al. Apr 2006 B2
7041059 Chalana et al. May 2006 B2
7042386 Woodford et al. May 2006 B2
7087022 Chalana et al. Aug 2006 B2
7141020 Poland et al. Nov 2006 B2
7142905 Slayton et al. Nov 2006 B2
7177677 Kaula et al. Feb 2007 B2
7189205 McMorrow et al. Mar 2007 B2
7201715 Burdette et al. Apr 2007 B2
7215277 Woodford et al. May 2007 B2
7255678 Mehi et al. Aug 2007 B2
7301636 Jung et al. Nov 2007 B2
7382907 Luo et al. Jun 2008 B2
7450746 Yang et al. Nov 2008 B2
7520857 Chalana et al. Apr 2009 B2
7611466 Chalana et al. Nov 2009 B2
20010031920 Kaufman et al. Oct 2001 A1
20020005071 Song et al. Jan 2002 A1
20020016545 Quistgaard et al. Feb 2002 A1
20020072671 Chenal et al. Jun 2002 A1
20020102023 Yamauchi et al. Aug 2002 A1
20020133075 Abdelhak Sep 2002 A1
20020147399 Mao et al. Oct 2002 A1
20020165448 Ben-Haim et al. Nov 2002 A1
20030055336 Buck et al. Mar 2003 A1
20030142587 Zeitzew Jul 2003 A1
20030174872 Chalana et al. Sep 2003 A1
20030181806 Medan et al. Sep 2003 A1
20030216646 Angelsen et al. Nov 2003 A1
20030229281 Barnard et al. Dec 2003 A1
20040006266 Ustuner et al. Jan 2004 A1
20040024302 Chalana et al. Feb 2004 A1
20040034305 Song et al. Feb 2004 A1
20040054280 McMorrow et al. Mar 2004 A1
20040076317 Roberts Apr 2004 A1
20040106869 Tepper Jun 2004 A1
20040127796 Chalana et al. Jul 2004 A1
20040127797 Barnard et al. Jul 2004 A1
20040267123 McMorrow et al. Dec 2004 A1
20050135707 Turek et al. Jun 2005 A1
20050174324 Liberty et al. Aug 2005 A1
20050193820 Sheljaskow et al. Sep 2005 A1
20050212757 Marvit et al. Sep 2005 A1
20050215896 McMorrow et al. Sep 2005 A1
20050228276 He et al. Oct 2005 A1
20050240126 Foley et al. Oct 2005 A1
20050253806 Liberty et al. Nov 2005 A1
20060025689 Chalana et al. Feb 2006 A1
20060064010 Cannon, Jr. et al. Mar 2006 A1
20060078501 Goertz et al. Apr 2006 A1
20060079775 McMorrow et al. Apr 2006 A1
20060111633 McMorrow et al. May 2006 A1
20060235301 Chalana et al. Oct 2006 A1
20070004983 Chalana et al. Jan 2007 A1
20070232908 Wang et al. Oct 2007 A1
20070276247 Chalana et al. Nov 2007 A1
20070276254 Yang et al. Nov 2007 A1
20080139938 Yang et al. Jun 2008 A1
20080146932 Chalana et al. Jun 2008 A1
20080242985 Chalana et al. Oct 2008 A1
20080249414 Yang et al. Oct 2008 A1
20080262356 Chalana et al. Oct 2008 A1
20090062644 McMorrow et al. Mar 2009 A1
20090088660 McMorrow et al. Apr 2009 A1
20090105585 Wang et al. Apr 2009 A1
20090112089 Barnard et al. Apr 2009 A1
20090264757 Yang et al. Oct 2009 A1
Foreign Referenced Citations (11)
Number Date Country
0 271 214 Jun 1988 EP
1 030 187 Aug 2000 EP
1 076 318 Feb 2001 EP
2 391 625 Feb 2004 GB
7-171149 Jul 1995 JP
2000-210286 Aug 2000 JP
2000-126178 Sep 2000 JP
2000-126181 Sep 2000 JP
2000-126182 Sep 2000 JP
0135339 May 2001 WO
2009032778 Mar 2009 WO
Related Publications (1)
Number Date Country
20070232908 A1 Oct 2007 US
Provisional Applications (9)
Number Date Country
60882888 Dec 2006 US
60828614 Oct 2006 US
60760677 Jan 2006 US
60778634 Mar 2006 US
60566127 Apr 2004 US
60545576 Feb 2004 US
60566818 Apr 2004 US
60423881 Nov 2002 US
60400624 Aug 2002 US
Continuations (2)
Number Date Country
Parent 10165556 Jun 2002 US
Child PCT/US03/14785 US
Parent 11680380 US
Child PCT/US03/14785 US
Continuation in Parts (14)
Number Date Country
Parent 11213284 Aug 2005 US
Child 11680380 US
Parent 11119355 Apr 2005 US
Child 11213284 US
Parent 10701955 Nov 2003 US
Child 11119355 US
Parent 10443126 May 2003 US
Child 10701955 US
Parent 11680380 US
Child 10701955 US
Parent 11061867 Feb 2005 US
Child 11680380 US
Parent 11222360 Sep 2005 US
Child 11061867 US
Parent 11061867 Feb 2005 US
Child 11222360 US
Parent 10704966 Nov 2004 US
Child 11061867 US
Parent 10607919 Jun 2005 US
Child 10704966 US
Parent PCT/US03/24368 Aug 2003 US
Child 10607919 US
Parent PCT/US03/14785 May 2003 US
Child PCT/US03/24368 US
Parent 10888735 Jul 2004 US
Child 11680380 US
Parent 10633186 Jul 2003 US
Child 10888735 US