The present disclosure relates generally to methods of generating three-dimensional virtual models of musculoskeletal systems and, more particularly, to three-dimensional bone and soft tissue model reconstruction, and associated apparatus.
The present disclosure contemplates that three-dimensional (“3-D”) models of anatomical structures, such as musculoskeletal systems (e.g., bones, ligaments, tendons, and/or cartilage), may be used in connection with diagnosis and/or treatment involving such musculoskeletal systems. For example, 3-D bone models may be used in connection with orthopedic surgery, such as for preoperative planning, intraoperative surgical navigation, intraoperative bone preparation, and/or postoperative assessment.
The present disclosure contemplates that various imaging modalities which may be used in connection with anatomical structures may be associated with certain potential advantages and/or potential disadvantages. For example, in the field of orthopedics, ultrasound imaging may facilitate highly accurate 3-D surface mapping and may not expose the patient or nearby persons to ionizing radiation. However, ultrasound may generally be limited to imaging the exterior features of bones. More specifically, ultrasound may be limited in its ability to image certain anatomical structures, such as internal features of bones and/or external features of bones that are occluded by other bones. As another example, X-ray imaging and/or fluoroscopic imaging may allow visualization of internal features of bones and/or portions of bones that are occluded by other bones. However, these modalities may expose the patient and nearby persons to ionizing radiation. Additionally, most common X-ray and fluoroscopic imaging techniques provide only two-dimensional imaging.
Accordingly, there is a need for improved methods and apparatuses associated with 3-D imaging of musculoskeletal features.
It is an aspect of the present disclosure to provide a method of generating a virtual 3-D patient-specific bone model, including obtaining a preliminary virtual 3-D bone model of a first bone; obtaining a supplemental image of the first bone; registering the preliminary virtual 3-D bone model of the first bone with the supplemental image of the first bone; extracting geometric information about the first bone from the supplemental image of the first bone; and/or generating a refined virtual 3-D patient-specific bone model of the first bone by refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the supplemental image of the first bone.
In a detailed embodiment, obtaining the preliminary 3-D bone model may include obtaining a point cloud of the first bone and reconstructing the preliminary 3-D bone model by morphing a generalized 3-D bone model using the point cloud of the first bone. Obtaining the point cloud of the first bone may utilize a first imaging modality. Obtaining the supplemental image of the first bone may utilize a second imaging modality. The first imaging modality may be different than the second imaging modality. The first imaging modality may include ultrasound. The second imaging modality may include 2-D X-ray.
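The morphing of a generalized model using a point cloud can be sketched as follows. This is a minimal illustrative example only, not the disclosed method: the function name, the similarity alignment, and the nearest-point refinement are assumptions standing in for whatever statistical-shape or deformation technique a given embodiment uses.

```python
import numpy as np

def morph_to_point_cloud(template_vertices, point_cloud, n_iters=10, step=0.5):
    """Crude morphing sketch: align a generalized (template) bone model to a
    measured point cloud with a similarity transform (centroid + RMS scale),
    then iteratively pull each template vertex toward its nearest point."""
    v = np.asarray(template_vertices, dtype=float).copy()
    p = np.asarray(point_cloud, dtype=float)

    # Global similarity alignment: match centroids and RMS scale.
    v_c, p_c = v.mean(axis=0), p.mean(axis=0)
    scale = np.sqrt(((p - p_c) ** 2).sum() / ((v - v_c) ** 2).sum())
    v = (v - v_c) * scale + p_c

    # Local refinement: move each vertex part-way toward its nearest point.
    for _ in range(n_iters):
        d2 = ((v[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)
        nearest = p[d2.argmin(axis=1)]
        v += step * (nearest - v)
    return v
```

In practice the generalized model would be a dense mesh with smoothness constraints; the brute-force nearest-neighbor search here is only for clarity.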
In a detailed embodiment, the supplemental image of the first bone may include at least one portion of the first bone that was not included in the point cloud of the first bone. Obtaining the point cloud of the first bone may include performing an ultrasound scan of the first bone. Obtaining the supplemental image of the first bone may include obtaining a 2-D X-ray of the first bone. The 2-D X-ray of the first bone may include at least one portion of the first bone that was not available from the ultrasound scan of the first bone. The at least one portion of the first bone that was not available from the ultrasound scan of the first bone may have been at least partially occluded from ultrasound scanning by an anatomical structure.
In a detailed embodiment, the at least one portion of the first bone that was not available from the ultrasound scan of the first bone may include an internal structure of the first bone. The occluded internal structure of the first bone may include a medullary canal. The first bone may include a femur. The medullary canal may include the femoral medullary canal.
In a detailed embodiment, the at least one portion of the first bone that was not visible on the ultrasound scan of the first bone may include an external structure of the first bone. The external structure of the first bone may have been at least partially occluded from ultrasound scanning by a second bone. One of the first bone and the second bone may include a femoral head and the other of the first bone and the second bone may include an acetabulum.
In a detailed embodiment, the external structure of the first bone that was occluded from ultrasound scanning by the second bone may include a soft tissue. The soft tissue may include cartilage. The cartilage may include hip articular cartilage. The cartilage may include knee articular cartilage. The cartilage may include shoulder articular cartilage.
In a detailed embodiment, each of the first bone and the second bone may include one or more of a pelvis, a femur, a tibia, a patella, a scapula, and a humerus.
In a detailed embodiment, the first bone may include one or more of a pelvis, a femur, a tibia, a patella, a scapula, and a humerus.
In a detailed embodiment, registering the preliminary 3-D bone model of the first bone with the supplemental image of the first bone may include solving for a pose of the preliminary 3-D bone model which produces a 2-D projection corresponding to a projection of the supplemental image.
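The pose-solving step above can be sketched with a toy orthographic stand-in for full 2D/3D registration. Everything here is an assumption for illustration (one in-plane rotation plus translation, orthographic projection, Nelder-Mead optimization); a real implementation would model the X-ray's perspective projection and a full 6-DOF pose.

```python
import numpy as np
from scipy.optimize import minimize

def register_pose_to_projection(model_pts, image_pts):
    """Solve for a pose (rotation about the projection axis plus in-plane
    translation) whose orthographic 2-D projection best matches the
    supplemental-image points."""
    model_pts = np.asarray(model_pts, float)
    image_pts = np.asarray(image_pts, float)

    def project(params):
        theta, tx, ty = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = model_pts @ R.T
        return pts[:, :2] + [tx, ty]  # drop depth: orthographic projection

    def cost(params):
        return ((project(params) - image_pts) ** 2).sum()

    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return res.x, cost(res.x)
```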
In a detailed embodiment, obtaining a supplemental image of the first bone may include obtaining a plurality of supplemental images of the first bone. Registering the preliminary virtual 3-D bone model of the first bone with the supplemental image of the first bone may include registering the preliminary virtual 3-D bone model of the first bone with the plurality of supplemental images of the first bone. Extracting geometric information about the first bone from the supplemental images of the first bone may include extracting geometric information about the first bone from the plurality of supplemental images of the first bone. Refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the supplemental image of the first bone may include refining the preliminary virtual 3-D bone model of the first bone using the geometric information about the first bone from the plurality of supplemental images of the first bone.
In a detailed embodiment, the method may include obtaining a preliminary virtual 3-D bone model of a second bone; obtaining a supplemental image of the second bone; registering the preliminary virtual 3-D bone model of the second bone with the supplemental image of the second bone; extracting geometric information about the second bone from the supplemental image of the second bone; and/or generating a refined virtual 3-D patient-specific bone model of the second bone by refining the preliminary virtual 3-D bone model of the second bone using the geometric information about the second bone from the supplemental image of the second bone.
In a detailed embodiment, obtaining the point cloud of the second bone may include performing an ultrasound scan of the second bone. Obtaining the supplemental image of the second bone may include obtaining a 2-D X-ray of the second bone. The 2-D X-ray of the second bone may include at least one portion of the second bone that was not visible on the ultrasound scan of the second bone.
In a detailed embodiment, extracting geometric information from the supplemental image of the first bone may include extracting at least one of a length dimension, an angular dimension, or a curvature of the first bone from the supplemental image of the first bone.
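Extraction of a length or angular dimension from a calibrated 2-D image reduces to simple geometry once landmark points are identified. The helper names and the pixel-to-millimeter calibration parameter below are hypothetical, for illustration only:

```python
import numpy as np

def length_mm(p1_px, p2_px, mm_per_px):
    """Length dimension between two landmark pixels on a calibrated 2-D image."""
    return float(np.linalg.norm(np.subtract(p2_px, p1_px)) * mm_per_px)

def angle_deg(v1, v2):
    """Angular dimension between two image-plane axes (e.g., shaft vs. neck)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```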
In a detailed embodiment, a method of preoperatively sizing an orthopedic implant may include generating the refined virtual 3-D patient-specific bone model according to the method described above; and/or sizing an orthopedic implant using the refined virtual 3-D patient-specific bone model.
In a detailed embodiment, an apparatus may be configured to perform the method described above. In a detailed embodiment, a memory may include instructions that, when executed by a processor, cause the processor to perform the method described above.
It is an aspect of the present disclosure to provide a method of generating a virtual 3-D patient-specific bone model, including obtaining ultrasound data pertaining to an exterior surface of a first bone; obtaining X-ray data pertaining to at least one of an internal feature of the first bone and/or an occluded feature of the first bone; and/or generating a 3-D patient-specific bone model of the first bone using the ultrasound data and the X-ray data, the 3-D patient-specific bone model representing the exterior surface of the first bone and the at least one of the internal feature of the first bone and/or the occluded feature of the first bone.
In a detailed embodiment, obtaining the ultrasound data pertaining to the exterior surface of the first bone may include obtaining an ultrasound point cloud of the exterior surface of the first bone and generating a preliminary 3-D bone model of the first bone. Generating the 3-D patient-specific bone model of the first bone may include refining the preliminary 3-D bone model of the first bone using the X-ray data.
In a detailed embodiment, an apparatus may be configured to perform the method described above. In a detailed embodiment, a memory may include instructions that, when executed by a processor, cause the processor to perform the method described above.
It is an aspect of the present disclosure to provide a method of determining a spine-pelvis tilt, including obtaining a virtual 3-D model of a pelvis; obtaining a first ultrasound point cloud of the pelvis and a first ultrasound point cloud of a lumbar spine with the pelvis and the lumbar spine in a first functional position; registering the virtual 3-D model of the pelvis to the first point cloud of the pelvis; and/or determining a first spine-pelvis tilt in the first functional position using a first relative angle of the first point cloud of the lumbar spine to the 3-D model of the pelvis.
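The relative-angle determination above might be sketched as follows, under the assumption (illustrative, not from the disclosure) that the lumbar point cloud's dominant direction is taken as its principal component and compared against a reference axis from the registered pelvis model:

```python
import numpy as np

def principal_axis(points):
    """First principal component of a point cloud (its dominant direction)."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def spine_pelvis_tilt_deg(lumbar_cloud, pelvis_axis):
    """Tilt as the relative angle between the lumbar cloud's principal axis
    and a reference axis taken from the registered pelvis model."""
    a = principal_axis(lumbar_cloud)
    b = np.asarray(pelvis_axis, float) / np.linalg.norm(pelvis_axis)
    cosang = abs(a @ b)  # an axis has no preferred sign
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```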
In a detailed embodiment, the method may include positioning at least one of the pelvis or the lumbar spine into the first functional position.
In a detailed embodiment, the method may include obtaining a second ultrasound point cloud of the pelvis and a second ultrasound point cloud of the lumbar spine with the pelvis and the spine in a second functional position; registering the virtual 3-D model of the pelvis to the second point cloud of the pelvis; and/or determining a second spine-pelvis tilt in the second functional position using a second relative angle of the second point cloud of the lumbar spine to the 3-D model of the pelvis.
In a detailed embodiment, the method may include positioning at least one of the pelvis or the lumbar spine into the second functional position.
In a detailed embodiment, the method may include obtaining a third ultrasound point cloud of the pelvis and a third ultrasound point cloud of the lumbar spine with the pelvis and the lumbar spine in a third functional position; registering the virtual 3-D model of the pelvis to the third point cloud of the pelvis; and/or determining a third spine-pelvis tilt in the third functional position using a third relative angle of the third point cloud of the lumbar spine to the 3-D model of the pelvis.
In a detailed embodiment, the method may include positioning at least one of the pelvis or the lumbar spine into the third functional position.
In a detailed embodiment, each of the first functional position, the second functional position, and the third functional position may include one of a sitting position, a standing position, and/or a supine position.
In a detailed embodiment, obtaining the virtual 3-D model of the pelvis may include generating the virtual 3-D model of the pelvis using ultrasound.
In a detailed embodiment, obtaining the first ultrasound point cloud of the pelvis and the first ultrasound point cloud of the lumbar spine in the first functional position may include obtaining a sparse ultrasound point cloud of the pelvis and a sparse ultrasound point cloud of the lumbar spine.
In a detailed embodiment, at least one of the first ultrasound point cloud of the pelvis and the first ultrasound point cloud of the lumbar spine with the pelvis and the lumbar spine in the first functional position may include additional points pertaining to a femur. The method may include determining at least one of a femoral version, an acetabular version, or a combined version. Determining the at least one of the femoral version, the acetabular version, and/or the combined version may include identifying a transepicondylar axis or a posterior condylar axis of the femur to determine a femoral version angle reference axis.
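Once a femoral version angle reference axis is identified, the version angle itself is the angle between the femoral neck axis and that reference axis after projecting both onto the transverse plane. The following sketch assumes both axes and the plane normal are already known; the function name and signature are hypothetical:

```python
import numpy as np

def femoral_version_deg(neck_axis, condylar_axis, transverse_normal=(0, 0, 1)):
    """Femoral version as the angle between the neck axis and the condylar
    reference axis, both projected onto the transverse plane."""
    n = np.asarray(transverse_normal, float)
    n /= np.linalg.norm(n)

    def project(v):
        v = np.asarray(v, float)
        v = v - (v @ n) * n          # remove the out-of-plane component
        return v / np.linalg.norm(v)

    a, b = project(neck_axis), project(condylar_axis)
    return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))
```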
In a detailed embodiment, the method may include obtaining information pertaining to a leg length by obtaining data from at least one X-ray taken with the subject in a standing position.
In a detailed embodiment, an apparatus may be configured to perform the method described above. In a detailed embodiment, a memory may include instructions that, when executed by a processor, cause the processor to perform the method described above.
It is an aspect of the present disclosure to provide a method of generating a virtual 3-D patient-specific model of a ligament, including obtaining a virtual 3-D patient-specific bone model of a joint; detecting at least one ligament locus on the virtual 3-D patient-specific bone model; obtaining ultrasound data pertaining to a ligament associated with the at least one ligament locus by scanning, using ultrasound, the ligament; and/or reconstructing a virtual 3-D model of the ligament using the ultrasound data.
In a detailed embodiment, obtaining the ultrasound data pertaining to the ligament may be performed at a plurality of joint angles of the joint across the joint's range of motion.
In a detailed embodiment, obtaining the virtual 3-D patient-specific bone model of the joint may include reconstructing the joint using ultrasound. Reconstructing the joint using ultrasound may include obtaining at least one point cloud associated with one or more bones of the joint.
In a detailed embodiment, detecting the at least one ligament locus on the patient-specific virtual 3-D bone model may include determining at least one insertion location of the ligament.
In a detailed embodiment, scanning, using ultrasound, the ligament may include providing automated guidance information. Providing the automated guidance information may include providing a display comprising a current position of an ultrasound probe relative to one or more anatomical structures. Providing the automated guidance information may include providing a display comprising an indication of a desired location or direction of scanning.
Providing the automated guidance information may include providing a display comprising an A-mode or B-mode ultrasound image.
In a detailed embodiment, the joint may include a knee. The ligament may include a medial collateral ligament.
In a detailed embodiment, the joint may include a knee. The ligament may include a lateral collateral ligament.
In a detailed embodiment, an apparatus may be configured to perform the method described above. In a detailed embodiment, a memory may include instructions that, when executed by a processor, cause the processor to perform the method described above.
It is an aspect of the present disclosure to provide a method of generating a virtual 3-D patient-specific anatomical model, including obtaining a preliminary virtual 3-D anatomy model of a first anatomy; obtaining a supplemental image of the first anatomy; registering the preliminary virtual 3-D anatomy model of the first anatomy with the supplemental image of the first anatomy; extracting geometric information about the first anatomy from the supplemental image of the first anatomy; and/or generating a refined virtual 3-D patient-specific anatomy model of the first anatomy by refining the preliminary virtual 3-D anatomy model of the first anatomy using the geometric information about the first anatomy from the supplemental image of the first anatomy.
In a detailed embodiment, obtaining the preliminary 3-D anatomy model may include obtaining a point cloud of the first anatomy and reconstructing the preliminary 3-D anatomy model by morphing a generalized 3-D anatomy model using the point cloud of the first anatomy. Obtaining the point cloud of the first anatomy may utilize a first imaging modality. Obtaining the supplemental image of the first anatomy may utilize a second imaging modality. The first imaging modality may be different than the second imaging modality.
In a detailed embodiment, the first imaging modality may include ultrasound. The second imaging modality may include 2-D X-ray.
In a detailed embodiment, the supplemental image of the first anatomy may include at least one portion of the first anatomy that was not included in the point cloud of the first anatomy. Obtaining the point cloud of the first anatomy may include performing an ultrasound scan of the first anatomy. Obtaining the supplemental image of the first anatomy may include obtaining a 2-D X-ray of the first anatomy. The 2-D X-ray of the first anatomy may include at least one portion of the first anatomy that was not available from the ultrasound scan of the first anatomy. The at least one portion of the first anatomy that was not available from the ultrasound scan of the first anatomy may have been at least partially occluded from ultrasound scanning by an anatomical structure.
In a detailed embodiment, the at least one portion of the first anatomy that was not available from the ultrasound scan of the first anatomy may include an internal structure of the first anatomy. The occluded internal structure of the first anatomy may include a medullary canal. The first anatomy may include a femur. The medullary canal may include the femoral medullary canal.
In a detailed embodiment, the at least one portion of the first anatomy that was not visible on the ultrasound scan of the first anatomy may include an external structure of the first anatomy. The external structure of the first anatomy may have been at least partially occluded from ultrasound scanning by a second anatomy. One of the first anatomy and the second anatomy may include a femoral head and the other of the first anatomy and the second anatomy may include an acetabulum.
In a detailed embodiment, the external structure of the first anatomy that was occluded from ultrasound scanning by the second anatomy may include a soft tissue. The soft tissue may include cartilage. The cartilage may include hip articular cartilage. The cartilage may include knee articular cartilage. The cartilage may include shoulder articular cartilage.
In a detailed embodiment, each of the first anatomy and the second anatomy may include one or more of a pelvis, a femur, a tibia, a patella, a scapula, and/or a humerus.
In a detailed embodiment, the first anatomy may include one or more of a pelvis, a femur, a tibia, a patella, a scapula, and/or a humerus.
In a detailed embodiment, registering the preliminary 3-D anatomy model of the first anatomy with the supplemental image of the first anatomy may include solving for a pose of the preliminary 3-D anatomy model which produces a 2-D projection corresponding to a projection of the supplemental image.
In a detailed embodiment, obtaining a supplemental image of the first anatomy may include obtaining a plurality of supplemental images of the first anatomy. Registering the preliminary virtual 3-D anatomy model of the first anatomy with the supplemental image of the first anatomy may include registering the preliminary virtual 3-D anatomy model of the first anatomy with the plurality of supplemental images of the first anatomy. Extracting geometric information about the first anatomy from the supplemental images of the first anatomy may include extracting geometric information about the first anatomy from the plurality of supplemental images of the first anatomy. Refining the preliminary virtual 3-D anatomy model of the first anatomy using the geometric information about the first anatomy from the supplemental image of the first anatomy may include refining the preliminary virtual 3-D anatomy model of the first anatomy using the geometric information about the first anatomy from the plurality of supplemental images of the first anatomy.
In a detailed embodiment, the method may include obtaining a preliminary virtual 3-D anatomy model of a second anatomy; obtaining a supplemental image of the second anatomy; registering the preliminary virtual 3-D anatomy model of the second anatomy with the supplemental image of the second anatomy; extracting geometric information about the second anatomy from the supplemental image of the second anatomy; and/or generating a refined virtual 3-D patient-specific anatomy model of the second anatomy by refining the preliminary virtual 3-D anatomy model of the second anatomy using the geometric information about the second anatomy from the supplemental image of the second anatomy. Obtaining the point cloud of the second anatomy may include performing an ultrasound scan of the second anatomy. Obtaining the supplemental image of the second anatomy may include obtaining a 2-D X-ray of the second anatomy. The 2-D X-ray of the second anatomy may include at least one portion of the second anatomy that was not visible on the ultrasound scan of the second anatomy.
In a detailed embodiment, extracting geometric information from the supplemental image of the first anatomy may include extracting at least one of a length dimension, an angular dimension, or a curvature of the first anatomy from the supplemental image of the first anatomy.
In a detailed embodiment, a method of preoperatively sizing an orthopedic implant may include generating the refined virtual 3-D patient-specific anatomy model according to the method described above; and/or sizing an orthopedic implant using the refined virtual 3-D patient-specific anatomy model.
In a detailed embodiment, an apparatus may be configured to perform the method described above. In a detailed embodiment, a memory may include instructions that, when executed by a processor, cause the processor to perform the method described above.
In a detailed embodiment, the first anatomy may include a first bone.
It is an aspect of the present disclosure to provide any method, process, device, apparatus, or system associated with any aspect or embodiment described above, or as described herein. It is an aspect of the present disclosure to provide any combination of any elements of any of the preceding aspects or embodiments, or as described herein.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present disclosure and, together with the detailed description of the exemplary embodiments given below, serve to explain the principles of the present disclosure.
The present disclosure includes, among other things, methods and apparatuses associated with creation of virtual models of anatomical structures, such as generation of 3-D models of musculoskeletal features. Some example embodiments according to at least some aspects of the present disclosure are described and illustrated below to encompass devices, methods, and techniques relating to generation of virtual musculoskeletal models using multiple imaging modalities, such as ultrasound and X-ray imaging. Of course, it will be apparent to those of ordinary skill in the art that the embodiments discussed below are examples and may be reconfigured and combined without departing from the scope and spirit of the present disclosure. It is also to be understood that variations of the example embodiments contemplated by one of ordinary skill in the art shall concurrently comprise part of the instant disclosure. However, for clarity and precision, the example embodiments as discussed below may include optional steps, methods, and features that one of ordinary skill should recognize as not being a requisite to fall within the scope of the present disclosure.
Some example embodiments according to at least some aspects of the present disclosure may utilize ultrasound imaging in connection with reconstruction of 3-D models of anatomical structures. Accordingly, the following section provides a description of exemplary methods and apparatus for reconstructing 3-D models of joints (e.g., bones and/or soft tissues) using ultrasound.
3-D Reconstruction of Joints Using Ultrasound
The reconstruction of a 3-D model of a joint, such as the articulating bones of a knee, is a key component of computer-aided joint surgery systems. The existence of a pre-operatively acquired model enables the surgeon to pre-plan a surgery by choosing the proper implant size, providing the femoral and tibial cutting planes in the case of knee surgery, and evaluating the fit of the chosen implant. The conventional method of generating the 3-D model is segmentation of computed tomography (“CT”) or magnetic resonance imaging (“MRI”) scans, which are the conventional imaging modalities for creating 3-D patient-specific bone models. The segmentation methods used are manual, semi-automated, or fully automated. Although these methods produce highly accurate models, CT and MRI have inherent drawbacks: both are fairly expensive procedures (especially MRI), and CT exposes the patient to ionizing radiation.
One alternative method of forming 3-D patient-specific models is the use of previously acquired X-ray images as a priori information to guide the morphing of a generalized bone model whose projection matches the X-ray images. Several X-ray based model reconstruction methodologies have been developed for the femur (including, specifically, the proximal and distal portions), the pelvis, the spine, and the rib cage.
Conventional ultrasound imaging utilizes B-mode images. B-mode images are constructed by extracting an envelope of received scanned lines of radiofrequency (“RF”) signals using the Hilbert transformation. These envelopes are then decimated (causing a drop in the resolution) and converted to grayscale (intensity of each pixel is represented by 8 bits) to form the final B-mode image. The conversion to grayscale results in a drop in the dynamic range of the ultrasound data.
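The B-mode pipeline described above can be sketched for a single scan line. The function name and the decimation factor are illustrative; the Hilbert-transform envelope detection, decimation, and 8-bit grayscale conversion follow the steps just described:

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode_line(rf_line, decimate=4):
    """Envelope-detect one RF scan line as in a conventional B-mode pipeline:
    Hilbert transform -> envelope magnitude -> decimation (resolution drop)
    -> 8-bit grayscale (the step that discards most of the RF dynamic range)."""
    envelope = np.abs(hilbert(np.asarray(rf_line, float)))
    envelope = envelope[::decimate]                      # decimation
    env_max = envelope.max() if envelope.max() > 0 else 1.0
    return np.round(255 * envelope / env_max).astype(np.uint8)
```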
The use of ultrasound in computer-aided orthopedic surgery has gained interest over the past decade due to its relatively low cost and radiation-free nature. More particularly, A-mode ultrasound intra-operative registration has been used for computer-aided orthopedic surgery and, in limited cases, in neurosurgery. Ultrasound-MRI registration has been developed utilizing B-mode ultrasound images. However, it has proven difficult to generate 3-D bone models having sufficient quality using conventional ultrasound technology due to limitations in the quality of the images.
Therefore, there is a need to develop improved apparatuses and methods that utilize ultrasound techniques to construct 3-D patient-specific bone and cartilage models.
The present invention overcomes the foregoing problems and other shortcomings, drawbacks, and challenges of high cost or high radiation exposure imaging modalities to generate a patient-specific model by ultrasound techniques. While the present invention will be described in connection with certain embodiments, it will be understood that the present invention is not limited to these embodiments. To the contrary, this invention includes all alternatives, modifications, and equivalents as may be included within the spirit and scope of the present invention.
In accordance with one embodiment of the present invention, a method of generating a 3-D patient-specific bone model is described. The method includes acquiring a plurality of raw radiofrequency (“RF”) signals from an A-mode ultrasound scan of the bone, which is spatially tracked in 3-D space. The bone contours are isolated in each of the plurality of RF signals and transformed into a point cloud. A 3-D patient-specific model of the bone is then optimized with respect to the point cloud.
According to another embodiment of the present invention, a method for 3-D reconstruction of a bone surface includes imaging the bone with A-mode ultrasound. A plurality of RF signals is acquired while imaging. Imaging of the bone is also tracked. A bone contour is extracted from each of the plurality of RF signals. Then, using the tracked data and the extracted bone contours, a point cloud representing the surface of the bone is generated. A generalized model of the bone is morphed to match the surface of the bone as represented by the point cloud.
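The tracked-data steps above (extract a bone contour from each RF signal, then generate a world-frame point cloud) can be sketched as follows. The strongest-echo pick, the sampling rate, the speed-of-sound value, and the function name are all assumptions for illustration; real bone-contour isolation is considerably more involved.

```python
import numpy as np

def rf_lines_to_point_cloud(rf_lines, probe_poses, fs_hz=50e6, c_mps=1540.0):
    """Turn tracked A-mode RF lines into a bone-surface point cloud: take the
    strongest echo on each line as the bone interface, convert its
    time-of-flight to depth, and map the probe-frame point into the world
    frame using the tracked 4x4 probe pose."""
    points = []
    for rf, pose in zip(rf_lines, probe_poses):
        rf = np.asarray(rf, float)
        idx = int(np.argmax(np.abs(rf)))          # crude bone-echo pick
        depth = idx / fs_hz * c_mps / 2.0         # two-way travel -> depth (m)
        local = np.array([0.0, 0.0, depth, 1.0])  # point along the beam axis
        points.append((np.asarray(pose, float) @ local)[:3])
    return np.array(points)
```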
In yet another embodiment of the present invention, a computer method for simulating a surface of a bone is described. The computer method includes executing a computer program in accordance with a process. The process includes extracting a bone contour from each of a plurality of A-mode RF signals. The extracted bone contours are transformed from a local frame of reference into a point cloud in a world-frame of reference. A generalized model of the bone is compared with the point cloud and, as determined from the comparing, the generalized model is deformed to match the point cloud.
Another embodiment of the present invention is directed to a computer program product that includes a non-transitory computer readable medium and program instructions stored on the computer readable medium. The program instructions, when executed by a processor, cause the computer program product to isolate a bone contour from a plurality of RF signals previously acquired from a reflected A-mode ultrasound beam. The bone contours are then transformed into a point cloud and used to optimize a 3-D model of the bone.
Still another embodiment of the present invention is directed to a computing device having a processor and a memory. The memory includes instructions that, when executed by the processor, cause the processor to isolate a bone contour from a plurality of RF signals previously acquired from a reflected A-mode ultrasound beam. The bone contours are then transformed into a point cloud and used to optimize a 3-D model of the bone.
The various embodiments of the present invention are directed to methods of generating a 3-D patient-specific bone model. To generate the 3-D patient-specific model, a plurality of raw RF signals is acquired using A-mode ultrasound acquisition methodologies. A bone contour is then isolated in each of the plurality of RF signals and transformed into a point cloud. The point clouds may then be used to optimize a 3-D model of the bone such that the patient-specific model may be generated. Although the various embodiments of the invention are shown herein with respect to a human patient, persons having ordinary skill in the art will understand that embodiments of the invention may also be used to generate 3-D patient-specific bone models of animals (e.g., dogs, horses, etc.), such as for veterinary applications.
Turning now to the figures, and in particular to
The at least one ultrasound probe 60 is configured to acquire ultrasound raw radiofrequency (“RF”) signals, and is shown in greater detail in
The computer 54 of the ultrasound instrument 50, as shown in
The computer 54 typically includes at least one processing unit 78 (illustrated as "CPU") coupled to a memory 80 along with several different types of peripheral devices, e.g., a mass storage device 82, the user interface 84 (illustrated as "User I/F," which may include the input device 56 and the monitor 58), the Network I/F 76, and an Input/Output ("I/O") interface 85 for coupling the computer 54 to additional equipment, such as the aforementioned ultrasound instrument 50. The memory 80 may include dynamic random access memory ("DRAM"), static random access memory ("SRAM"), non-volatile random access memory ("NVRAM"), persistent memory, flash memory, at least one hard disk drive, and/or another digital storage medium. The mass storage device 82 is typically at least one hard disk drive and may be located externally to the computer 54, such as in a separate enclosure, in one or more of the networked computers 70, or in one or more of the networked storage devices 72 (for example, a server).
The CPU 78 may be, in various embodiments, a single-thread, multi-threaded, multi-core, and/or multi-element processing unit (not shown). In alternative embodiments, the computer 54 may include a plurality of processing units that may include single-thread processing units, multi-threaded processing units, multi-core processing units, multi-element processing units, and/or combinations thereof. Similarly, the memory 80 may include one or more levels of data, instruction, and/or combination caches, with caches serving the individual processing unit or multiple processing units (not shown).
The memory 80 of the computer 54 may include an operating system 81 (illustrated as "OS") to control the primary operation of the computer 54 in a manner known in the art. The memory 80 may also include at least one application, component, algorithm, program, object, module, or sequence of instructions, or even a subset thereof, which will be referred to herein as "computer program code" or simply "program code" 83. Program code 83 typically comprises one or more instructions that are resident at various times in the memory 80 and/or the mass storage device 82 of the computer 54, and that, when read and executed by the CPU 78, cause the computer 54 to perform the steps necessary to execute steps or elements embodying the various aspects of the present invention.
The I/O interface 85 is configured to operatively couple the CPU 78 to other devices and systems, including the ultrasound instrument 50 and an optional electromagnetic tracking system 87 (
Those skilled in the art will recognize that the environment illustrated in
Returning again to
The optical tracking marker 86 is operably coupled to a position sensor 88, one embodiment of which is shown in
The optical tracking marker 86 is rigidly attached to the ultrasound probe 60 and is provided a local coordinate frame of reference ("local frame" 92). Additionally, the ultrasound probe 60 is provided another local coordinate frame of reference ("ultrasound frame"). For the sake of convenience, the combination of the optical tracking marker 86 with the ultrasound probe 60 is referred to as the "hybrid probe" 94. The position sensor 88, positioned away from the hybrid probe 94, determines a fixed world coordinate frame ("world frame"). The optical tracking system (the optical tracking marker 86 with the position sensor 88), once calibrated with the ultrasound probe 60, is configured to determine a transformation between the local and ultrasound coordinate frames.
Turning now to
The hybrid probe is held in a fixed position while the optical camera of the position sensor 88 acquires a number of position points, including, for example: Ptrans1, i.e., a first end of the transducer array 68; Ptrans2, i.e., a second end of the transducer array 68; and Po, i.e., a point on the transducer array 68 that is not collinear with Ptrans1 and Ptrans2 (Block 104). The homogeneous transformation between OP and W, TOPW, is then recorded (Block 106). The plurality of calibration parameters is then calculated (Block 108) from the measured points and the transformation, TOPW, as follows:
With the plurality of calibration parameters determined, the hybrid probe 94 may be used to scan a portion of a patient's musculoskeletal system while the position sensor 88 tracks the physical movement of the hybrid probe 94.
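The chain of coordinate transformations described above (a point in the ultrasound frame, mapped through the marker's local frame into the world frame) can be sketched with homogeneous transforms. The function names and the example transforms below are illustrative assumptions, not part of the disclosed system:

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def echo_to_world(p_us, T_local_us, T_world_local):
    """Map a point in the ultrasound frame to the world frame.
    p_us: (3,) point in the ultrasound image frame (e.g., [lateral, depth, 0]).
    T_local_us: calibration transform, ultrasound frame -> marker local frame.
    T_world_local: tracked transform, marker local frame -> world frame.
    """
    p_h = np.append(p_us, 1.0)
    return (T_world_local @ T_local_us @ p_h)[:3]

# Hypothetical example: identity calibration, marker translated by [10, 0, 0].
T_cal = to_homogeneous(np.eye(3), np.zeros(3))
T_track = to_homogeneous(np.eye(3), np.array([10.0, 0.0, 0.0]))
p_world = echo_to_world(np.array([1.0, 2.0, 0.0]), T_cal, T_track)
```

In practice the calibration transform comes from the probe calibration procedure above, and the tracked transform is updated continuously by the position sensor 88.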
Because of the high reflectivity and attenuation of bone to ultrasound, ultrasound energy typically does not penetrate bone tissues to any significant degree. Therefore, soft tissues lying behind bone cannot be imaged, which poses a challenge to ultrasound imaging of a joint. For example, as shown in
To acquire ultrasound images of a majority of the articulating surfaces, at least two degrees of flexion are required, including, for example, a full extension (
Turning now to
As shown in
When the RF signal 142, and if desired B-mode image, acquisition is complete for the first degree of flexion, the patient's knee 114 is moved to another degree of flexion and the reflected RF signal 142 acquired (Block 156). Again, if desired, the B-mode image may also be acquired (Block 158). The user then determines whether acquisition is complete or whether additional data is required (Block 160). That is, if visualization of a desired surface of one or more bones 116, 118, 120 is occluded (“NO” branch of decision block 160), then the method returns to acquire additional data at another degree of flexion (Block 156). If the desired bone surfaces are sufficiently visible (“YES” branch of decision block 160), then the method 150 continues.
After all data and RF signal acquisition is complete, the computer 54 is operated to automatically isolate that portion of the RF signal, i.e., the bone contour, from each of the plurality of RF signals. In that regard, the computer 54 may sample the echoes comprising the RF signals to extract a bone contour for generating a 3-D point cloud 165 (
Referring specifically now to
The model-based signal processing of the RF signal 142 begins with enhancing the RF signal by applying the model-based signal processing (here, the Bayesian estimator) (Block 167). To apply the Bayesian estimator, offline measurements are first collected from phantoms, cadavers, and/or simulated tissues to estimate certain unknown parameters, for example, an attenuation coefficient (i.e., absorption and scattering) and an acoustic impedance (i.e., density, porosity, compressibility), in a manner generally described in VARSLOT T (refer above), the disclosure of which is incorporated herein by reference, in its entirety. The offline measurements (Block 169) are input into the Bayesian estimator and the unknown parameters are estimated as follows:
z=h(x)+v (6)
P(t) = e^(−βt) (7)
Where h is the measurement function that models the system and v is the noise and modeling error. In modeling the system, the parameter, x, that best fits the measurement, z, is determined. For example, the data fitting process may find an estimate, x̂, that best fits the measurement, z, by minimizing some error norm, ∥ε∥, of the residual, where:
ε = z − h(x̂) (8)
For ultrasound modeling, the input signal, z, is the raw RF signal from the offline measurements, and the estimate, h(x̂), is based on the state space model with known parameters of the offline measurements (i.e., density, etc.). The error term, v, may encompass noise, unknown parameters, and modeling errors; its effect is reduced by minimizing the residuals and identifying the unknown parameters from repeated measurements. Weighting the last echo within a scan line by approximately 99%, as bone, is one example of using likelihood in a Bayesian framework. A Kalman filter, a special case of recursive Bayesian estimation in which the signal is assumed to be linear and have a Gaussian distribution, may alternatively be used.
It will be readily appreciated that the illustrative use of the Bayesian model here is not limiting. Rather, other model-based processing algorithms or probabilistic signal processing methods may be used within the spirit of the present invention.
With the model-based signal processing complete, the RF signal 142 is then transformed into a plurality of envelopes to extract the individual echoes 162 existing in the RF signal 142. Each envelope is determined by applying a moving power filter, or other suitable envelope detection algorithm, to each RF signal 142 (Block 168). The moving power filter may be comprised of a moving kernel of a length that is equal to the average length of an individual ultrasound echo 162. One exemplary kernel length may be 20 samples; however, other lengths may also be used. With each iteration of the moving kernel, the power of the RF signal 142 at the instant kernel position is calculated; this power represents the value of the signal envelope at that position of the RF signal 142. Given a discrete-time signal, X, having a length, N, each envelope, V, using a moving power filter having length, L, may be defined by:

V_k = (1/L) Σ_(i=k−(L−1)/2)^(k+(L−1)/2) X_i^2

In some embodiments, this and subsequent moving-window equations use a one-sided filter of varying length for the special cases of the samples before the sample (L−1)/2 and after the sample N−(L−1)/2.
Each envelope produced by the moving power filter, shown in
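The moving power filter described above can be sketched as follows, assuming a simple mean-of-squares kernel with one-sided behavior at the boundaries; the synthetic RF line and the 20-sample kernel length are illustrative:

```python
import numpy as np

def moving_power_envelope(x, L=20):
    """Envelope of an RF scan line via a moving power filter of kernel length L.
    At each kernel position the mean signal power is computed; near the edges a
    one-sided kernel of reduced length is used (a sketch of the boundary
    handling described above)."""
    N = len(x)
    half = L // 2
    v = np.empty(N)
    for k in range(N):
        lo, hi = max(0, k - half), min(N, k + half + 1)
        v[k] = np.mean(x[lo:hi] ** 2)
    return v

# Synthetic RF line: a single Gaussian-windowed burst ("echo") centered at sample 100.
t = np.arange(200)
rf = np.sin(2 * np.pi * 0.2 * t) * np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
env = moving_power_envelope(rf, L=20)
```

The envelope peaks where the echo is centered, which is the property the subsequent echo-detection steps rely on.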
Of the plurality of echoes 162 in the RF signal 142, one echo 162 is of particular interest, e.g., the echo corresponding to the bone-soft tissue interface. This bone echo (hereafter referenced as 162a) is generated by the reflection of the ultrasound energy at the surface of the scanned bone. More particularly, the soft tissue-bone interface is characterized by a high reflection coefficient of 43%, which means that 43% of the ultrasound energy reaching the surface of the bone is reflected back to the transducer array 68 of the ultrasound probe 60 (
Bone is also characterized by a high attenuation coefficient of the applied RF signal (6.9 dB/cm/MHz for trabecular bone and 9.94 dB/cm/MHz for cortical bone). At high frequencies, such as those used in musculoskeletal imaging (that is, in the range of 7-14 MHz), the attenuation of bone becomes very high and the ultrasound energy effectively terminates at the surface of the bone. Therefore, the echo corresponding to the soft-tissue-bone interface is the last echo 162a in the RF signal 142. The bone echo 162a is identified by selecting the last echo having a normalized envelope amplitude (with respect to a maximum value existing in the envelope) above a preset threshold (Block 170).
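The last-echo-above-threshold rule can be sketched as below; the threshold value and the gap heuristic used to split the envelope into distinct echoes are assumptions for illustration:

```python
import numpy as np

def last_echo_index(envelope, threshold=0.3, min_separation=15):
    """Identify the bone echo as the last envelope peak whose normalized
    amplitude (relative to the envelope maximum) exceeds `threshold`.
    `min_separation` is an assumed gap (in samples) separating distinct echoes."""
    env = envelope / envelope.max()
    above = np.where(env > threshold)[0]
    if above.size == 0:
        return None
    # Split the threshold crossings into distinct echoes separated by gaps.
    gaps = np.where(np.diff(above) > min_separation)[0]
    segments = np.split(above, gaps + 1)
    last = segments[-1]
    return int(last[np.argmax(env[last])])  # peak sample of the last echo

# Two synthetic echoes; the later (deeper) one plays the role of the bone echo.
t = np.arange(300, dtype=float)
env = np.exp(-0.5 * ((t - 80) / 8) ** 2) + 0.8 * np.exp(-0.5 * ((t - 220) / 8) ** 2)
bone_idx = last_echo_index(env, threshold=0.3)
```

Even though the earlier soft-tissue echo is stronger here, the rule correctly selects the later echo, mirroring the physical argument that the bone echo is the last one in the scan line.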
The bone echoes 162a are then extracted from each frame 146 (Block 172) and used to generate the bone contour existing in that RF signal 142, as shown in
Prior to implementing the SVM, the SVM may be trained to detect cartilage in RF signals. One such way of training the SVM uses information acquired from a database comprising MRI images and/or RF ultrasound images to train the SVM to distinguish echoes associated with cartilage in the RF signals 142 from noise and from ambiguous soft tissue echoes. In constructing the database in accordance with one embodiment, knee joints from multiple patients are imaged using both MRI and ultrasound. A volumetric MRI image of each knee joint is reconstructed and processed, and the cartilage and the bone tissues are identified and segmented. The segmented volumetric MRI image is then registered with a corresponding segmented ultrasound image (wherein bone tissue is identified). The registration provides a transformation matrix that may then be used to register the raw RF signals 142 with a reconstructed MRI surface model.
After the raw RF signals 142 are registered with the reconstructed MRI surface model, spatial information from the volumetric MRI images with respect to the cartilage tissue may be used to determine the location of a cartilage interface on the raw RF signal 142 over the articulating surfaces of the knee joint.
The database of all knee joint image pairs (MRI and ultrasound) is then used to train the SVM. Generally, the training includes loading all raw RF signals, as well as the location of the bone-cartilage interface of each respective RF signal. The SVM may then determine the location of the cartilage interface in an unknown, input raw RF signal. If desired, a user may choose from one or more kernels to maximize a classification rate of the SVM.
In use, the trained SVM receives a reconstructed knee joint image of a new patient as well as the raw RF signals. The SVM returns the cartilage location on the RF signal data, which may be used, along with the tracking information from the tracking system (e.g., the optical tracking marker 86 and the position sensor 88) to generate 3-D coordinates for each point on the cartilage interface. The 3-D coordinates may be triangulated and interpolated to form a complete cartilage surface.
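As a hedged illustration of the SVM classification step, the sketch below trains a minimal linear SVM (sub-gradient descent on the hinge loss) on hypothetical two-dimensional echo features; a production system would use a mature solver with kernels chosen to maximize the classification rate, as noted above:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM trained by sub-gradient descent on the hinge loss
    (a simplified stand-in for the trained SVM described above).
    y must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# Hypothetical training data: 2-D feature vectors (e.g., echo amplitude and
# depth) labeled cartilage (+1) versus not-cartilage (-1).
rng = np.random.default_rng(1)
X_pos = rng.normal([2.0, 2.0], 0.3, size=(40, 2))
X_neg = rng.normal([-2.0, -2.0], 0.3, size=(40, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(X, y)
acc = np.mean(predict(w, b, X) == y)
```

The feature vectors here are placeholders; in the disclosed system the inputs would be derived from the registered MRI/ultrasound training pairs.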
Referring still to
Isolated outliers are those echoes 162 in the RF signal 142 that correspond to a tissue interface that is not the soft-tissue-bone interface. Selection of the isolated outliers may occur when the criterion is set too high. If necessary, the isolated outliers may be removed (Block 176) by applying a median filter to the bone contour. That is, given a particular bone contour, X, having a length, N, with a median filter length, L, the median-filtered contour, Y_k, is:

Y_k = median(X_k−(L−1)/2, …, X_k+(L−1)/2)
False bone echoes are those echoes 162 resulting from noise or a scattering echo, which result in a detected bone contour in a position where no bone contour exists. The false bone echoes may occur when an area that does not contain a bone is scanned, the ultrasound probe 60 is not oriented substantially perpendicular with respect to the bone surface, the bone lies deeper than a selected scanning depth, the bone lies within the selected scanning depth but its echo is highly attenuated by the soft tissue overlying the bone, or a combination of the same. Selection of the false bone echoes may occur when the preset threshold is too low.
Frames 146 containing false bone echoes should be removed. One such method of removing the false bone echoes (Block 178) may include applying a continuity criterion. That is, because the surface of the bone has a regular shape, the bone contour, in the two dimensions of the ultrasound image, should be continuous and smooth. A false bone echo creates a discontinuity and exhibits a high degree of irregularity with respect to the bone contour.
One manner of filtering out false bone echoes is to apply a moving standard deviation filter; however, other filtering methods may also be used. For example, given the bone contour, X, having a length, N, with a filter length, L, the standard deviation filter contour is:

Y_k = std(X_k−(L−1)/2, …, X_k+(L−1)/2)
Where Y_k is the local standard deviation of the bone contour, which is a measure of the regularity and continuity of the bone contour. Segments of the bone contour including a false bone echo are characterized by a higher degree of irregularity and have a high Y_k value. On the other hand, segments of the bone contour including only echoes resulting from the surface of the bone are characterized by a high degree of regularity and have a low Y_k value.
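The two contour-cleaning filters described above, a moving median for isolated outliers and a moving standard deviation for flagging irregular segments, can be sketched as:

```python
import numpy as np

def median_filter(contour, L=5):
    """Remove isolated outliers from a bone contour with a moving median
    (one-sided windows at the boundaries)."""
    N = len(contour)
    half = L // 2
    return np.array([np.median(contour[max(0, k - half):min(N, k + half + 1)])
                     for k in range(N)])

def moving_std(contour, L=5):
    """Local standard deviation of the contour; high values flag segments
    that may contain false bone echoes."""
    N = len(contour)
    half = L // 2
    return np.array([np.std(contour[max(0, k - half):min(N, k + half + 1)])
                     for k in range(N)])

# Smooth synthetic depth contour with one isolated outlier at index 25.
depth = np.linspace(40.0, 42.0, 50)
depth[25] += 10.0
cleaned = median_filter(depth, L=5)
irregularity = moving_std(depth, L=5)
```

The median filter suppresses the single outlier while leaving the smooth contour intact, and the standard deviation trace peaks at the irregular segment, which is the cue used to discard frames with false bone echoes.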
A resultant bone contour 180, obtained by applying the moving median filter and the moving standard deviation filter, may include a full-length contour of the entire surface of the bone, one or more partial contours of the surface, or no bone contour segments at all.
With the bone contours isolated from each of the RF signals, the bone contours may now be transformed into a point cloud. For instance, returning now to
To transform the resultant bone contour 180 into the 3-D contour, each detected bone echo 162a undergoes transformation into a 3-D point as follows:
Where the variables are defined as follows:
If so desired, an intermediate registration process may be performed between the resultant bone contour and a B-mode image, if acquired (Block 190). This registration step is performed for visualizing the resultant bone contour 180 with the B-mode image (
P_echo^I = (I_echo · l_x, d_echo · l_y) (16)

Where l_x and l_y denote the B-mode image resolution (pixels/cm) for the x- and y-axes, respectively. P_echo^I denotes the coordinates of the bone contour point relative to the ultrasound frame.
After the resultant bone contours 180 are transformed and, if desired, registered (Block 190) (
To begin the second registration process, as shown in
After the point clouds 194 are formed, a bone model may be optimized in accordance with the point clouds 194. That is, the bone point cloud 194 is then used to reconstruct a 3-D patient-specific model of the surface of the scanned bone. The reconstruction begins with a determination of a bone model from which the 3-D patient-specific model is derived (Block 210). The bone model may be a generalized model based on multiple patient bone models and may be selected from a principal component analysis (“PCA”) based statistical bone atlas. One such a priori bone atlas, formed in accordance with the method 212 of
PCA is then performed on each model in the dataset to extract the modes of variation of the surface of the bone (Block 218). Each mode of variation is represented by a plurality of eigenvectors resulting from the PCA. The eigenvectors, sometimes called eigenbones, define a vector space of bone morphology variations extracted from the dataset. Any one model from the dataset may then be expressed as a linear combination of the eigenbones. An average model of all of the 3-D models comprising the dataset is extracted (Block 220) and may be defined as:
Where the variables are defined as follows:
Furthermore, any new model, Mnew (i.e., a model not already existing in the dataset), may be approximately represented by new values of the shape descriptors (eigenvectors coefficients) as follows:
Where the variables are defined as follows:
The accuracy of Mnew is directly proportional to the number of principal components (W) used in approximating the new model and the number of models, L, of the dataset used for the PCA. The residual error or root mean square error (“RMS”) for using the PCA shape descriptors is defined by:
Therefore, the RMS when comparing any two different models, A and B, having the same number of vertices is defined by:
Where VAj is the jth vertex in model A, and similarly, VBj is the jth vertex in model B.
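The atlas construction and reconstruction steps above (average model, eigenbones, linear combination, RMS error) can be sketched as follows; the toy single-mode dataset is an illustrative assumption:

```python
import numpy as np

def build_atlas(models):
    """PCA bone atlas from a dataset of corresponded models.
    models: (L, 3m) array, each row a flattened set of m vertices.
    Returns the average model and the eigenbones (principal directions)."""
    M_avg = models.mean(axis=0)
    centered = models - M_avg
    # SVD of the centered data yields the modes of variation (eigenbones).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return M_avg, Vt  # rows of Vt are unit-length eigenbones U_k

def synthesize(M_avg, U, alphas):
    """M_new = M_avg + sum_k alpha_k * U_k, the linear-combination form above."""
    return M_avg + alphas @ U[:len(alphas)]

def rms(A, B):
    """Root mean square vertex-to-vertex error between two corresponded models."""
    d = (A - B).reshape(-1, 3)
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))

# Toy dataset: 10 "bones" of 4 vertices each, varying along a single mode.
rng = np.random.default_rng(0)
base = rng.normal(size=12)
mode = rng.normal(size=12)
mode /= np.linalg.norm(mode)
models = np.array([base + c * mode for c in np.linspace(-2, 2, 10)])
M_avg, U = build_atlas(models)

# Reconstruct one dataset member from its first shape descriptor.
target = models[0]
a1 = (target - M_avg) @ U[0]
M_new = synthesize(M_avg, U, np.array([a1]))
err = rms(M_new, target)
```

Because the toy dataset varies along only one mode, a single shape descriptor reconstructs a member exactly; real atlases need W modes, with accuracy improving as W grows, as stated above.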
Returning again to
Changing the shape descriptors to optimize the loaded model (Block 240) may be carried out by one or more optimization algorithms, guided by a scoring function, to find the values of the principal component coefficients that create the 3-D patient-specific new model; these algorithms are described with reference to
The first algorithm may use a numerical method of searching the eigenspace for optimal shape descriptors. More specifically, the first algorithm may be an iterative method that searches the shape descriptors of the loaded model to find a point that best matches the bone point cloud 194 (Block 250). One such iterative method may include, for example, Powell's conjugate direction method with RMS as the scoring function. The changes are applied to the shape descriptors of the loaded model by the first algorithm to form a new model, Mnew (Block 252), defined by Equation 19. The new model, Mnew, is then compared with the bone point cloud 194 and the residual error, E, calculated to determine whether a further iterative search is required (Block 254). More specifically, given a bone point cloud, Q, having n points therein, and an average model, Mavg, with l vertices, there may be a set of closest vertices, V, in the average model, Mavg, to the bone point cloud, Q.
Where vi is the closest point in the set, V, to qi in the bone point cloud, Q. An octree may be used to efficiently search for the closest points in Mnew. The residual error, E, between the new model, Mnew, and the bone point cloud, Q, is then defined as:

E = ∥V − Q∥^2 (23)
With sufficiently high residual error (“YES” branch of Block 254), the method returns to further search the shape descriptors (Block 250). If the residual error is low (“NO” branch of Block 254), then the method proceeds.
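A sketch of the first algorithm's iterative descriptor search, substituting a simple coordinate search for Powell's method and a brute-force nearest-vertex search for the octree (both swaps are simplifications for illustration):

```python
import numpy as np

def closest_point_rms(model_pts, cloud):
    """Score: RMS distance from each cloud point to its nearest model vertex
    (an octree or k-d tree would accelerate this search in practice)."""
    d = np.linalg.norm(cloud[:, None, :] - model_pts[None, :, :], axis=2)
    return np.sqrt(np.mean(d.min(axis=1) ** 2))

def fit_descriptors(M_avg, U, cloud, n_modes=2, iters=30, step=0.5):
    """Greedy coordinate search over the shape descriptors (a simplified
    stand-in for Powell's method), minimizing the closest-point RMS."""
    a = np.zeros(n_modes)
    best = closest_point_rms((M_avg + a @ U[:n_modes]).reshape(-1, 3), cloud)
    s = step
    for _ in range(iters):
        improved = False
        for k in range(n_modes):
            for delta in (+s, -s):
                trial = a.copy()
                trial[k] += delta
                e = closest_point_rms(
                    (M_avg + trial @ U[:n_modes]).reshape(-1, 3), cloud)
                if e < best:
                    a, best, improved = trial, e, True
        if not improved:
            s *= 0.5  # shrink the search step when no move helps
    return a, best

# Toy atlas: an average model plus one known mode; the "point cloud" is the
# model deformed with a known descriptor value.
rng = np.random.default_rng(2)
M_avg = rng.normal(size=15)                      # 5 vertices, flattened
U = rng.normal(size=(2, 15))
U /= np.linalg.norm(U, axis=1, keepdims=True)
true_a = np.array([1.2, 0.0])
cloud = (M_avg + true_a @ U).reshape(-1, 3)
a_fit, residual = fit_descriptors(M_avg, U, cloud)
```

The residual plays the role of E in the termination test of Block 254: the search loops until the score stops improving at a usefully small step size.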
The second algorithm of the two-step method refines the new model derived from the first algorithm by transforming the fitting problem into a linear system of equations in the shape descriptors. The linear system may be readily solved using conventional techniques, yielding the 3-D patient-specific shape descriptors.
In continuing with
And may also be expressed in terms of the new model's shape descriptors as:
Where Vavg is the set of vertices from the loaded model's vertices, which corresponds to the vertices set, V, that contains the closest vertices in the new model, Mnew, that is being morphed to fit the bone point cloud, Q. Uk′ is a reduced version of the eigenbone, Uk, containing only the set of vertices corresponding to the vertices set, V.
Combining Equations 24 and 25, E may be expressed as:
Where v_avg,i is the ith vertex of Vavg. Similarly, u′_k,i is the ith vertex of the reduced eigenbone, Uk′.
The error function may be expanded as:
E = Σ_(i=1)^m [(x_avg,i + Σ_(k=1)^W a_k x_u′,k,i − x_q,i)^2 + (y_avg,i + Σ_(k=1)^W a_k y_u′,k,i − y_q,i)^2 + (z_avg,i + Σ_(k=1)^W a_k z_u′,k,i − z_q,i)^2] (27)
Where x_avg,i is the x-coordinate of the ith vertex of the average model, x_u′,k,i is the x-coordinate of the ith vertex of the kth reduced eigenbone, and x_q,i is the x-coordinate of the ith point of the point cloud, Q. Similar arguments are applied to the y- and z-coordinates. Calculating the partial derivative of E with respect to each shape descriptor, a_k, yields:
Recombining the coordinate values into vectors yields:
And with rearrangement:
Reformulating Equation 31 into a matrix form provides a linear system of equations in the form of Ax=B:
The linear system of equations may be solved using any number of known methods, for instance, singular value decomposition (Block 258).
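The linear least-squares solve for the shape descriptors can be sketched as below; `np.linalg.lstsq` performs an SVD-based solve, consistent with the singular value decomposition mentioned above:

```python
import numpy as np

def solve_descriptors(V_avg, U_red, Q):
    """Solve for the shape descriptors a_k in the least-squares sense:
    minimize || (V_avg + sum_k a_k U'_k) - Q ||^2, i.e., A a = B with
    A[i, k] = U'_k[i] and B = Q - V_avg (flattened coordinates).
    Solved via SVD-based least squares."""
    A = U_red.T                  # (3m, W) matrix of reduced eigenbones
    B = Q - V_avg                # (3m,) right-hand side
    a, *_ = np.linalg.lstsq(A, B, rcond=None)
    return a

# Toy data: 4 vertices (12 coordinates), 2 eigenbones, known descriptors.
rng = np.random.default_rng(3)
V_avg = rng.normal(size=12)
U_red = rng.normal(size=(2, 12))
a_true = np.array([0.7, -1.3])
Q = V_avg + a_true @ U_red
a_est = solve_descriptors(V_avg, U_red, Q)
```

Because the toy target is an exact linear combination of the eigenbones, the recovered descriptors match the true ones; with a real (noisy) point cloud the solve returns the least-squares optimum instead.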
In one embodiment, the Mahalanobis distance is omitted because the bone point clouds are dense and therefore provide a constraining force on the model deformation. The constraining function of the Mahalanobis distance is thus not needed, and omitting it provides the model deformation with more freedom to generate a new model that best fits the bone point cloud.
An ultrasound procedure in accordance with the embodiments of the present invention may, for example, generate approximately 5000 ultrasound images. The generated 3-D patient-specific models (Block 260,
The solution to the linear set of equations provides a description of the patient-specific 3-D model, derived from an average, or selected, model from the statistical atlas and optimized in accordance with the point cloud transformed from a bone contour that was isolated from a plurality of RF signals. The solution may be applied to the average model to display a patient-specific 3-D bone model for aiding in pre-operative planning, mapping out injection points, planning a physical therapy regimen, or other diagnostic and/or treatment-based procedures that involve a portion of the musculoskeletal system.
Cartilage 3-D models may be reconstructed by a method similar to that outlined above for bone. During contour extraction, the contour of the cartilage is more difficult to detect than that of bone. Probabilistic modeling (Block 171) is used to process the raw RF signal to more easily identify cartilage, and the SVM aids in detection of cartilage boundaries (Block 173) based on MRI training sets. A cartilage statistical atlas is formed by a method that may be similar to the one described for bone; however, as indicated previously, MRI is used rather than CT (which was the case for bone). The segmentation (Block 216), variation extraction (Block 218), and base model morphing (Block 240) (
Referring now to
Reflected ultrasound signals, or echoes 364, are received by the ultrasound probe 60 and converted into RF signals that are transmitted to the transceiver 356. Each RF signal may be generated by a plurality of echoes 364, which may be isolated, partially overlapping, or fully overlapping. Each of the plurality of echoes 364 originates from a reflection of at least a portion of the ultrasound energy at an interface between two tissues having different densities, and represents a pulse-echo mode ultrasound signal. One type of pulse-echo mode ultrasound signal is known as an “A-mode” scan signal. The controller 360 converts the RF signals into a form suitable for transmission to the computer 54, such as by digitizing, amplifying, or otherwise processing the signals, and transmits the processed RF signals to the computer 54 via the I/O interface 85. In an embodiment of the invention, the signals transmitted to the computer 54 may be raw RF signals representing the echoes 364 received by the ultrasound probe 60.
The electromagnetic tracking system 87 includes an electromagnetic transceiver unit 328 and an electromagnetic tracking system controller 366. The transceiver unit 328 may include one or more antennas 368, and transmits a first electromagnetic signal 370. The first electromagnetic signal 370 excites the tracking marker 86, which responds by transmitting a second electromagnetic signal 372 that is received by the transceiver unit 328. The tracking system controller 366 may then determine a relative position of the tracking marker 86 based on the received second electromagnetic signal 372. The tracking system controller 366 may then transmit tracking element position data to the computer 54 via I/O interface 85.
Referring now to
The first tier of the three-tier system optimizes the raw signal data and estimates the envelope of the feature vectors. The second tier estimates the features detected from each of the scan lines from the first tier, and constructs the parametric model for Bayesian smoothing. The third tier estimates the features extracted from the second tier to further estimate the three-dimensional features in real-time using a Bayesian inference method.
In block 382, raw RF signal data representing ultrasound echoes 364 detected by the ultrasound probe 60 is received by the program code 83 and processed by a first layer of filtering for feature detection. The feature vectors detected include bone, fat tissues, soft tissues, and muscles. The optimal outputs are envelopes of these features detected from the filter. There are two fundamental aspects of this design. The first aspect relates to the ultrasound probe 60 and the ultrasound controller firmware. In conventional ultrasound machines, the transmitted ultrasound signals 362 are generated at a fixed frequency during scanning. However, it has been determined that different ultrasound signal frequencies reveal different soft tissue features when used to scan the patient. Thus, in an embodiment of the invention, the frequency of the transmitted ultrasound signal 362 changes with respect to time using a predetermined excitation function. One exemplary excitation function is a linear ramping sweep function 383, which is illustrated in
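A sketch of a linear ramping sweep in which the transmit frequency increases linearly from scan to scan; the 7-14 MHz range follows the musculoskeletal range cited earlier, while the sample rate and burst length are assumptions:

```python
import numpy as np

def linear_ramp_frequencies(f_start, f_stop, n_scans):
    """Per-scan transmit frequencies for a linear ramping sweep: the transmit
    frequency increases linearly with scan index."""
    return np.linspace(f_start, f_stop, n_scans)

def excitation_pulse(fc, fs=100e6, cycles=3):
    """Short sinusoidal burst at carrier frequency fc, sampled at rate fs
    (both values are illustrative assumptions)."""
    t = np.arange(int(cycles * fs / fc)) / fs
    return np.sin(2 * np.pi * fc * t)

freqs = linear_ramp_frequencies(7e6, 14e6, 8)   # 7-14 MHz sweep over 8 scans
pulses = [excitation_pulse(f) for f in freqs]
```

Each scan in the sweep excites the tissue at a different frequency, which is what allows the later Bayesian stages to fuse feature evidence revealed at different frequencies.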
The second aspect is to utilize data collected from multiple scans to support a Bayesian model for estimation, correction, and optimization. Two exemplary filter classes are illustrated in
In block 388, an optimal time delay is estimated using a Kalman class filter to identify peaks in the amplitude or envelope of the RF signal. Referring now to
p_k,f_k = E(s_obs) (33)
where E is an envelope detection and extraction function. The peak data matrix, p_k,f_k, thereby comprises a plurality of points representing the signal envelope 392, and can be used to predict the locations of envelope peaks 394, 396, 398 produced by frequency f_k+1 using the following equation:
p_est,f_k+1 = H(p_k,f_k) (34)

where H is the estimation function.
Referring now to
ε = p_est,f_k − p_k,f_k (35)
and the error correction (Kalman) gain (Kk) is computed by:
K_k = P_k^− H^T (H P_k^− H^T + R)^−1 (36)
where P_k^− is the a priori error covariance and R is the measurement noise covariance. The predicted peak locations are then corrected by:

p_est,k+1 = p_k,f_k + K_k(ε) (37)
and the error covariance is updated by:

P_k = (I − K_k H) P_k^− (38)
If the second class of filter is to be used, the program code 83 proceeds to block 410 rather than block 386 of flow chart 380, and selects a non-linear, non-Gaussian model that follows the recursive Bayesian filter approach. In the illustrated embodiment, a Sequential Monte Carlo method, or particle filter, is shown as an exemplary implementation of the recursive Bayesian filter. In block 412, the program code 83 estimates an optimal time delay using the particle filter to identify signal envelope peaks. An example of a particle filter is illustrated in
ρ_k,f_k^(i:1−N) ~ p(p_k,f_k | s_obs) (39)
These particles 411, 414, 416 predict the peak locations at f_k+1 via the following equation:

p_est,k+1^(i:1−N) = H(ρ_k,f_k^(i:1−N)) (40)

where H is the estimation function.
Referring now to
ε_k^(i:1−N) = p_est,k+1^(i:1−N) − p_k,f_k (41)
The normalized importance weights of the particles of particle sets 424, 426, 428 are evaluated from the likelihood of the corresponding errors and normalized to sum to one:

w_k^(i:1−N) = p(ε_k^i) / Σ_(j=1)^N p(ε_k^j) (42)

which produces weighted particle sets 436, 438, 440. This step is known as importance sampling, in which the algorithm approximates the true probability density of the system. An example of importance sampling is shown in
p_k,f_k = (w_k^(i:1−N), p_est,f_k+1^(i:1−N)) (43)
In addition, particle maintenance may be required to avoid particle degeneracy, which refers to a result in which the weight is concentrated onto a few particles over time. Particle re-sampling addresses this by replacing degenerate particles with new particles sampled from the posterior density:
(p_est,f_k+1^(i:1−N)) (44)
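A compact sketch of the particle-filter cycle of Equations 39-44: particles sample candidate peak locations, importance weights follow a Gaussian likelihood (an assumed model), and resampling replaces degenerate particles:

```python
import numpy as np

def particle_filter_peak(measurements, n_particles=200, spread=5.0, noise=2.0):
    """Sequential Monte Carlo (particle filter) sketch for peak tracking.
    The Gaussian likelihood model and the diffusion/spread constants are
    illustrative assumptions."""
    rng = np.random.default_rng(5)
    # Initialize particles around the first measurement (Equation 39 in spirit).
    particles = measurements[0] + rng.normal(0, spread, n_particles)
    for z in measurements[1:]:
        # Static transition with small diffusion (prediction step).
        particles = particles + rng.normal(0, 0.5, n_particles)
        # Importance weights from the measurement likelihood, then normalize.
        w = np.exp(-0.5 * ((z - particles) / noise) ** 2)
        w /= w.sum()
        # Resample from the weighted posterior to avoid particle degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return particles.mean()

rng = np.random.default_rng(6)
true_peak = 150.0
z = true_peak + rng.normal(0, 2.0, size=30)
peak_est = particle_filter_peak(z)
```

Unlike the Kalman variant, this filter makes no linearity or Gaussianity assumption about the posterior, at the cost of carrying a population of samples through each scan.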
Referring now to
This is achieved in embodiments of the invention by Bayesian model smoothing, which produces the exemplary smoothed contour line 456. The principle is to examine the signal envelope data retrospectively and attempt to reconstruct the previous state. The primary difference between the Bayesian estimator and the smoother is that the estimator propagates the states forward in each recursive scan, while the smoother operates in the reverse direction. The initial state of the smoother begins at the last measurement and propagates backward. A common implementation of a smoother is the Rauch-Tung-Striebel (RTS) smoother. The feature embedded in the ultrasound signal is initialized based on a priori knowledge of the scan, which may include ultrasound transducer position data received from the electromagnetic tracking system 87. Sequential features are then estimated and updated in the ultrasound scan line with the RTS smoother.
In an embodiment of the invention, the ultrasound probe 60 is instrumented with the electromagnetic or optical tracking marker 86 so that the motion of the ultrasound probe 60 is accurately known. This tracking data 460 is provided to the program code 83 in block 462, and is needed to determine the position of the ultrasound probe 60 since the motion of the ultrasound probe 60 is arbitrary relative to the patient's joint. As scans are acquired by the ultrasound probe 60, the system estimates 3-D features of the joint, such as the shape of the bone and soft tissue. A tracking problem of this type can be viewed as a probability inference problem in which the objective is to calculate the most likely value of a state vector Xi given a sequence of measurements yi, which are the acquired scans. In an embodiment of the invention, the state vector Xi is the position of the ultrasound probe 60 with respect to some fixed known coordinate system or “world frame” (such as the ultrasound machine at time k=0), as well as the modes of the bone deformation. Two main steps in tracking are:
A system dynamics model relates the previous state Xi−1 to the new state Xi via the transitional distribution P(Xi|Xi−1), which is a model of how the state is expected to evolve with time. In an embodiment of the invention, Xi are the 3-D feature estimates calculated from the Bayesian contour estimation performed during tier 2 filtering, and the transformation information contains the translations and rotations of the data obtained from the tracking system 87. With joint imaging, the optimal density or features are not expected to change over time, because the position of the bone is fixed in space and the shape of the scanned bone does not change. Hence, the transitional distribution does not alter the model states.
A measurement model relates the state to a predicted measurement, y=f(X). Since there is uncertainty in the measurement, this relationship is generally expressed in terms of the conditional probability P(yi|Xi), also called the likelihood function. In an embodiment of the invention, the RF signal and a priori feature position and shape are related by an Anisotropic Iterative Closest Point (AICP) method.
To estimate position and shape of the feature, the program code 83 proceeds to block 464. At block 464, the program code 83 performs an AICP method that searches for the closest point between the two datasets iteratively to establish a correspondence by the anisotropic weighted distance that is calculated from the local error covariance of both datasets. The correspondence is then used to calculate a rigid transformation that is determined iteratively by minimizing the error until convergence. The 3-D features can then be predicted based on the received RF signal and the a priori feature position and shape. By calculating the residual error between the predicted 3-D feature and the RF signal data, the a priori position and shape of the feature are updated and corrected in each recursion. Using Bayes' rule, the posterior distribution can be computed based on measurements from the raw RF signal.
If both the dynamic model and the measurement model are linear with additive Gaussian noise, then the conditional probability distributions are normal distributions. In particular, P(Xi|y0, y1, . . . , yi) is unimodal and Gaussian, and thus can be represented using the mean and covariance of the predicted measurements. Unfortunately, the measurement model is not linear and the likelihood function P(yi|Xi) is not Gaussian. One way to deal with this is to linearize the model about the local estimate, and assume that the distributions are locally Gaussian.
Referring to
Instead of treating the probability distributions as Gaussian, a statistical inference can be performed using a Monte Carlo sampling of the states. The optimal position and shape of the feature are thereby estimated through the posterior density, which is determined from sequential data obtained from the RF signals. For recursive Bayesian estimation, one exemplary implementation is particle filtering, which has been found to be useful in applications where the state vector is complex and the data contain a great deal of clutter, such as tracking objects in image sequences. The basic idea is to represent the posterior probability by a set of independent and identically distributed weighted samplings of the states, or particles. Given enough samples, even very complex probability distributions can be represented. As measurements are taken, the importance weights of the particles are adjusted using the likelihood model, using the equation wj′=P(yi|Xi) wj, where wj is the weight of the j-th particle. This is known as importance sampling.
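A minimal sketch of the importance-sampling update wj′=P(yi|Xi) wj, together with a systematic resampling step, assuming a scalar state and a Gaussian likelihood of variance `r` (both simplifying assumptions for illustration):

```python
import numpy as np

def update_weights(particles, weights, y, r=0.1):
    """Importance-sampling update: w_j' = P(y | X_j) * w_j, with a
    Gaussian likelihood of variance r, followed by normalization."""
    lik = np.exp(-0.5 * (y - particles) ** 2 / r)
    w = weights * lik
    return w / w.sum()

def resample(particles, weights, rng):
    """Systematic resampling: draw particles in proportion to weight,
    then reset the weights to uniform."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)  # guard against float round-off at 1.0
    return particles[idx], np.full(n, 1.0 / n)
```

Particles near the measurement receive larger weights; resampling concentrates the particle set in high-probability regions so the approximation of the posterior does not degenerate.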
The principal advantage of this method is that the method can approximate the true probability distribution of the system, which cannot be determined directly, by approximating a finite set of particles from a distribution from which samples can be drawn. As measurements are obtained, the algorithm adjusts the particle weights to minimize the error between the prediction and observation states. With enough particles and iterations, the posterior distribution will approach the true density of the system. A plurality of bone or other anatomical feature surface contour lines is thereby generated that can be used to generate 3-D images and models of the joint or anatomical feature. These models, in turn, may be used to facilitate medical procedures, such as joint injections, by allowing the joint or other anatomical feature to be visualized in real time during the procedure using an ultrasound scan.
International Publication Number WO2014/121244, published Aug. 7, 2014, of International Application No. PCT/US2014/014526, filed Feb. 4, 2014, describes exemplary 3-D reconstruction of joints using ultrasound and is incorporated by reference herein in its entirety.
3-D Reconstruction Using Multiple Imaging Modalities
The method 500 may include an operation 502, including obtaining a preliminary virtual 3-D bone model 504 of one or more bones. For example, an ultrasound scanning and 3-D reconstruction process described above in the 3-D Reconstruction of Joints Using Ultrasound section may be utilized to generate the preliminary virtual 3-D bone model 504. Although the 3-D bone model 504 may comprise the final output of the 3-D reconstruction processes described above, in the context of this method 500, the bone model 504 may be “preliminary” because it may be subject to refinement in later operations.
Referring to
While the above examples pertain to exterior surfaces of bones that were occluded from ultrasound imaging by other bones, in some embodiments, significant internal features of some bones may not be included in the point clouds 506, 508. For example, in the context of hip replacement surgery, the intramedullary canal of the femur may be an important anatomical feature for selecting, sizing, and/or installing the femoral implant, particularly the femoral stem. Because ultrasound signals generally may not penetrate the first bone (i.e., bone surface) that they encounter, internal features of bones, such as internal bone canals (e.g., the medullary cavity of the femur), may not be included as part of the point clouds 506, 508 obtained using ultrasound imaging. Similarly, some other tissues, such as cartilage, tendons, and/or ligaments, may not be visible using ultrasound because such tissues may be occluded, such as by bone.
Referring to
In other embodiments, some portions of the relevant anatomy may not be included in the generalized 3-D models. For example, some generalized 3-D bone models may not include some features, such as the medullary cavities.
Referring back to
Referring to
In the illustrated embodiment, the supplemental image 516 includes at least some portions of the pertinent anatomy that were not included in the respective point clouds 506, 508. For example, in the illustrated embodiment, the supplemental image 516 depicts the size and/or shape of the femoral head 510A, the size and/or shape of the acetabular cup 512A, and/or the size and/or shape of the intramedullary canal 510B of the femur 510. More generally, in the illustrated embodiment, the supplemental image 516 includes data pertaining to at least one portion of the anatomy that was not included in the imaging (e.g., ultrasound imaging) used to generate the preliminary 3-D model 504. For example, supplemental images 516 comprising X-rays may clearly reveal internal structures of bones.
Referring back to
Prior to or concurrent with performing 2-D-3-D registration 518, the exemplary process 500 may include image distortion correction. Specifically, using the preliminary 3-D models output from the initial ultrasound imaging, the 2-D images from the additional imaging modality (e.g., X-ray) may be processed by an image distortion correction algorithm that calculates an image magnification factor and the anatomical feature orientation in 3-D space relative to the imaging modality images. The image distortion correction algorithm registers the 3-D bone models with the 2-D images to determine the optimal magnification factor and anatomy position in 3-D relative to the 2-D images. The comparison between the 3-D model and the 2-D images is carried out for all or a plurality of the 2-D images, which allows the algorithm to register the 2-D images in space and extract the surface contours in areas where ultrasound data could not be collected. Examples where ultrasound data might not be available include, without limitation, the femoral head, the femoral intramedullary canal, and the acetabular cup.
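Under a simplifying orthographic-projection assumption, the magnification factor alone can be estimated in closed form from matched 3-D model points and their 2-D image locations; the algorithm described above additionally solves for the anatomy's 3-D orientation, which this sketch omits. The function name and the correspondence assumption are illustrative:

```python
import numpy as np

def estimate_magnification(model_pts, image_pts):
    """Least-squares magnification factor s minimizing
    || s * P(model) - image ||^2, where P drops the depth axis
    (orthographic projection assumption). Both point sets are centered
    first so the estimate is independent of translation.

    model_pts: (n, 3) matched 3-D model points
    image_pts: (n, 2) corresponding 2-D image points
    """
    proj = model_pts[:, :2] - model_pts[:, :2].mean(axis=0)
    img = image_pts - image_pts.mean(axis=0)
    # Closed-form scalar least squares: s = <proj, img> / <proj, proj>
    return float((proj * img).sum() / (proj * proj).sum())
```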
Referring to
In some example embodiments, it may not be necessary to separately determine the scale and/or magnification of the supplemental image 516 (e.g., X-ray image). Specifically, because the sizes of the anatomical structures may be determined from the ultrasound point clouds 506, 508, the scale/magnification of the supplemental image 516 may be determined in connection with the registration operation 518.
Referring back to
Referring to
Referring back to
Referring to
Additionally, in some example embodiments in which statistical atlas generalized 3-D bone models may not include some features, the geometric data extracted from the supplemental image 516 may be used to add such features to the preliminary 3-D model 504. For example, in the illustrated embodiment, the generalized 3-D bone model may not include the intramedullary canal 510B of the femur 510. Accordingly, the preliminary 3-D model may not include the intramedullary canal 510B. The intramedullary canal 510B may be visible on the supplemental image 516, and relevant geometric information pertaining to the intramedullary canal 510B may be extracted from the supplemental image 516 in operation 520. In the illustrated embodiment, the geometric information pertaining to the intramedullary canal 510B extracted from the supplemental image 516 may be used in the refining operation 522 to add the intramedullary canal 510B feature, thus yielding a refined 3-D model 524 including a patient-specific representation of the intramedullary canal 510B.
In the illustrated embodiment, the refined 3-D model 524 may include the external topography of the femur and/or the pelvis in the vicinity of the hip joint. In some example embodiments, the refined 3-D model 524 may be used to obtain anatomical measurements of pertinent anatomical features. For example,
For example, referring to
In some example embodiments, various methods described herein (e.g., method 500) may be performed preoperatively and/or the generated models (e.g., the refined 3-D model 524) may be used preoperatively (e.g., for surgical planning, such as femoral stem sizing, determining cup placement, etc.), intraoperatively (e.g., for surgical navigation), and/or postoperatively (e.g., for postoperative assessment). In some example embodiments, 3-D models generated by example methods described herein may be registered intraoperatively using ultrasound. Unlike intraoperative fluoroscopy, intraoperative use of the 3-D models with ultrasound registration does not expose the patient or nearby personnel (e.g., surgeon) to ionizing radiation. In some example embodiments, preliminary 3-D models 504 may be registered intraoperatively using ultrasound and utilized intraoperatively, without being refined by supplemental images 516.
Additionally, the present disclosure contemplates that in the context of total hip replacements, many total hip arthroplasty procedures may be performed using a posterolateral approach and/or anterolateral approach. Using these approaches typically involves placing the patient in the lateral decubitus position. With the patient positioned laterally as such, radiographic imaging is generally not capable of accurately assessing femoral and/or acetabular version. Intraoperative ultrasound registration of the 3-D models generated according to at least some aspects of the present disclosure may allow accurate determination of femoral version, acetabular version, and/or combined version intraoperatively, such as in real-time or near real-time. More generally, intraoperative ultrasound registration of 3-D models may be useful where patient positioning during surgery is not conducive to registration using other imaging modalities.
It should be understood that while the examples described herein may involve the hip joint and describe 3-D models of the femur and pelvis, it is within the scope of the disclosure that the femur and pelvis be replaced by any one or more anatomical structures (e.g., one or more bones or soft tissues) to achieve similar outcomes. By way of a more detailed example, the shoulder joint may be the subject of this exercise with ultrasound imaging taken of the scapula and proximal humerus. After preliminary patient-specific virtual 3-D bone models are created, these bone models may be further refined using one or more 2-D X-ray images.
Various exemplary embodiments according to at least some aspects of the present disclosure may include apparatus (e.g., ultrasound instrument 50 (
Determination of Spine-Pelvis Tilt
The present disclosure contemplates that preoperative planning for some surgeries may include functional assessments and planning. For example, in preoperative planning for hip replacement surgeries, it may be useful to consider the patient's spine-pelvis tilt at one or more functional positions.
The method 600 may include an operation 606A, which may include obtaining one or more ultrasound point clouds 608 of the patient's pelvis and/or spine, in a first of a series of functional positions. In the illustrated embodiment, the operation 606A may include obtaining a 3-D ultrasound point cloud of the pelvis 608A and/or an ultrasound point cloud of at least a portion of the spine 608B (e.g., lumbar and/or sacrum) in a first functional position (e.g., sitting). In some example embodiments, the point cloud 608 may be generally sparse. In some example embodiments, the point cloud 608 may include additional points pertaining to the patient's femur, which may facilitate determination of the femoral version, the acetabular version, and/or the combined version. For example, the point cloud 608 may include data sufficient to identify the transepicondylar and/or posterior condylar axis of the femur to determine the femoral version angle reference axis. Further, in some example embodiments, information pertaining to leg length may be obtained. For example, data from at least one X-ray taken with the patient in a standing position may be obtained.
The method 600 may include an operation 612A, which may include registering at least a portion of the point cloud 608 with the 3-D model 604 of the pelvis. In the illustrated embodiment, the ultrasound point cloud of the pelvis 608A may be registered with the 3-D model 604 of the pelvis.
The method 600 may include an operation 614A, which may include determining the spine-pelvis tilt in the first functional position using the relative angle of the point cloud of the spine 608B to the 3-D model 604 of the pelvis.
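The relative-angle computation can be sketched as follows, taking the spine direction as the dominant PCA axis of the spine point cloud and comparing it against a reference direction taken from the registered pelvis model. The choice of reference direction and the function names are assumptions for illustration:

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a point cloud: the leading right singular
    vector of the centered point matrix (equivalent to the largest
    eigenvector of the covariance)."""
    c = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return vt[0]

def tilt_angle_deg(spine_axis, pelvis_reference):
    """Angle in degrees between the dominant axis of the spine point
    cloud and a reference direction from the registered pelvis model."""
    a = spine_axis / np.linalg.norm(spine_axis)
    b = pelvis_reference / np.linalg.norm(pelvis_reference)
    return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))
```

Note that a PCA axis has an arbitrary sign, so in practice the axis orientation would be fixed using an anatomical convention (e.g., pointing cranially) before the angle is reported.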
The method 600 may include operations 606B, 612B, and 614B, which may be substantially similar to operations 606A, 612A, and 614A, respectively, except that they are performed with the relevant anatomy in a second functional position (e.g., standing). Similarly, the method 600 may include operations 606C, 612C, and 614C, which may correspond generally to operations 606A, 612A, and 614A, respectively, except that they are performed with the relevant anatomy in a third functional position (e.g., supine).
Some example embodiments may allow 3-D visualization of the spine-pelvis interaction and/or may provide information about the spine-pelvis interaction in each functional position.
Various exemplary embodiments according to at least some aspects of the present disclosure may include apparatus (e.g., ultrasound instrument 50 (
3-D Soft Tissue Reconstruction
Some exemplary embodiments described herein may focus on generating virtual 3-D models of bones. Some other embodiments according to the present disclosure may include generating virtual 3-D models focusing on and/or including soft tissues, such as ligaments, tendons, cartilage, etc. For example,
The method 700 may include an operation 716, including reconstructing a joint (e.g., knee 704) using ultrasound. This operation 716 may be performed generally in the manner described elsewhere herein and/or may include obtaining one or more point clouds 718, 720 associated with bones 706, 708. Operation 716 may produce one or more patient-specific virtual 3-D images and/or models of one or more anatomical structures associated with the joint 702, as described in detail elsewhere herein. For example, the output of operation 716 may comprise one or more patient-specific virtual 3-D bone models.
The method 700 may include an operation 720, including automatically detecting one or more ligament loci (e.g., locations where a ligament attaches to a bone) on the patient-specific virtual 3-D bone model(s). For example, operation 720 may include determining the insertion locations 722, 724 of the medial collateral ligament 710. All ligament loci are pre-defined in a template bone model, which is stored in a .IV format, a 3-D surface representation that maintains model correspondence during processing. The ligament loci are specified by a set of vertex indices stored in text format. To detect the ligament loci in a patient-specific bone model, the system reconstructs a virtual 3-D model of the patient-specific bone that maintains the correspondence to the template bone model. Although the virtual 3-D patient-specific bone model and the template bone model may differ, they share common characteristics of the same bone. By specifying the ligament loci in the template bone model, the system is able to automatically detect the ligament loci in the patient-specific bone model. This approach allows for more accurate and efficient detection of ligament loci, which is useful for various medical applications.
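Because vertex correspondence with the template model is maintained, a locus defined by template vertex indices maps directly onto the patient-specific model. The following is an illustrative sketch that assumes the text format stores one whitespace-separated index list per ligament (an assumption, since the exact format is not specified in the disclosure):

```python
import numpy as np

def load_loci_indices(text):
    """Parse ligament locus vertex indices from the text format
    (assumed here: one whitespace-separated index list per ligament)."""
    return [np.array(line.split(), dtype=int) for line in text.strip().splitlines()]

def locus_positions(patient_vertices, loci_indices):
    """Map each template-defined locus onto the patient-specific model.
    Correspondence means template vertex i is patient vertex i, so a
    locus is localized as the centroid of its index patch."""
    return np.array([patient_vertices[idx].mean(axis=0) for idx in loci_indices])
```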
The method 700 may include an operation 726, including ultrasound scanning of at least a portion of at least one ligament associated with at least one of the detected ligament loci 722, 724. For example, in the illustrated embodiment, the medial collateral ligament 710 may be scanned using an ultrasound probe 728.
In some example embodiments, an ultrasound operator may be provided with automated guidance information for performing the ligament scan. For example, a display 730, which may be shown on an output device such as monitor 58 (
The method 700 may include an operation 736, including reconstructing a virtual 3-D model of the soft tissue (e.g., ligament 710). One exemplary method includes using ultrasound. Ultrasound is a dynamic imaging modality, meaning that if the transducer is fixed in place and the object being imaged is moved, the ultrasound will capture changes in the geometry of that object and its spatial location. With that in mind, the exemplary method may use the reconstructed bone and points thereon, similar to GPS coordinates, to guide the user of the ultrasound transducer to move the transducer to specific locations where soft tissues can be imaged, such as at tendon, muscle, and ligament attachment locations of the bone. At each predetermined location, the user holds the transducer relatively stationary while repositioning the anatomical joint, thus allowing capture of ultrasound data regarding soft tissue locations and changes in cross-sectional area of soft tissue. This information is utilized along with the anatomical joint anatomy and changes in joint flexion angles to reconstruct the 3-D soft tissues.
In addition or alternatively, the present method may make use of machine learning to generate 3-D models of soft tissues. By way of example, machine learning may include 2-D and/or 3-D data training sets having predetermined features that are identified and associated with specific soft tissues. In exemplary form, as referenced herein, dynamic ultrasound imaging may be utilized in order to image the motion of soft tissues in real-time. By combining machine learning with dynamic ultrasound imaging, the accuracy and efficiency of the 3-D soft tissue reconstruction may be improved.
In some example embodiments, one or more of operations 716, 720, 726, 736 may be repeated one or more times, such as at one or more joint angles across a joint's range of motion. Accordingly, in some example embodiments, a virtual 3-D model of a ligament through a range of motion may be generated.
Various exemplary embodiments according to at least some aspects of the present disclosure may include apparatus (e.g., ultrasound instrument 50 (
Guided Diagnostic Scans
Some example embodiments may be configured to provide guidance to an ultrasound operator, which may facilitate improved, more precise, and/or more repeatable ultrasound scan results than those obtained by relying solely on the skill and/or experience of the ultrasound operator. In some example embodiments, guidance may be provided in an automated manner. For example, arrow 732 in
Some example embodiments may be configured to provide one or more displays, such as on monitor 58 (
Referring to
Referring to
Referring to
Referring to
Bone Registration System for Intra-Operative Surgery
Intra-operative surgical procedures that involve bone manipulation benefit from accurate registration of the patient's bone or tissue model to the patient's actual anatomy. Precise alignment of the pre-operative 3-D patient-specific anatomical model with the intra-operative patient anatomy can help in reducing surgical complications and improving the overall surgical outcome. However, current registration techniques have limitations in achieving accurate alignment, especially in cases where the patient's tissues (including bone and soft tissues) have undergone deformities or changes. Most existing anatomical registration methods are performed after making an incision, leading to blood loss and other surgical complications. Therefore, there is a need for a reliable, accurate, non-invasive, and bloodless anatomical registration system for intra-operative surgery that can overcome the limitations of the existing techniques.
The present disclosure provides an anatomical registration system that enables accurate registration of a pre-operative 3-D patient-specific anatomical model to the intra-operative patient anatomy. The anatomical registration system comprises an ultrasound probe, a computer algorithm, and a point cloud registration module. The system uses a combination of anatomical landmarks and ultrasound scans to achieve accurate alignment of the pre-operative 3-D patient-specific anatomical model with the intra-operative patient anatomy.
In exemplary form, the anatomy registration system for intra-operative surgery may include an initial registration step, which is preferably performed before any incision is made intra-operatively. This initial registration step may include identifying one or more (e.g., two, three, four, or more) predefined anatomical landmarks on a pre-operative 3-D patient-specific anatomical model. By way of example, these anatomical landmarks may be readily recognizable features such as, without limitation, the tip of a bone or an attachment point of muscle to a bone. Post identifying the one or more anatomical landmarks, an ultrasound probe (including an ultrasound transducer) may be utilized to scan the patient anatomy corresponding to each anatomical landmark so that ultrasound data is recorded while tracking the 3-D position of the ultrasound transducer. Using the ultrasound data and position tracking data, an algorithm uses a feature-based method to estimate the position and orientation (pose) of the pre-operative 3-D patient-specific anatomical model with respect to the intra-operative patient model generated using ultrasound in the operating room, thereby initially registering the pre-operative 3-D patient-specific anatomical model to the patient's intra-operative anatomy.
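One feature-based way to estimate the initial pose from matched landmarks is the closed-form Kabsch/SVD solution. The disclosure does not specify the estimator, so the following is an illustrative sketch assuming the landmark correspondences are known:

```python
import numpy as np

def rigid_pose_from_landmarks(model_pts, patient_pts):
    """Closed-form (Kabsch/SVD) estimate of the rotation R and
    translation t aligning model landmarks to the patient's scanned
    landmarks: patient ≈ R @ model + t.

    model_pts, patient_pts: (n, 3) arrays of corresponding landmarks.
    """
    mc = model_pts.mean(axis=0)
    pc = patient_pts.mean(axis=0)
    # Cross-covariance of the centered landmark sets
    h = (model_pts - mc).T @ (patient_pts - pc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = pc - r @ mc
    return r, t
```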
Post initial registration, the exemplary methods disclosed herein may make use of a refined registration. The refined registration may take place after the pre-operative 3-D patient-specific anatomical model is aligned with the intra-operative patient bone, where the ultrasound transducer is repositioned to scan the intra-operative patient's anatomy to generate a 3-D point cloud corresponding to points on one or more surfaces of the patient's anatomy (e.g., bone surface points). The 3-D point cloud may then be registered to the pre-operative 3-D patient-specific anatomical model to fine-tune the position of the pre-operative 3-D patient-specific anatomical model. In exemplary form, registration of the 3-D point cloud to the 3-D anatomical model may be performed by aligning the point cloud to the pre-operative 3-D patient-specific anatomical model using an iterative closest point (ICP) algorithm. The ICP algorithm minimizes the distance between the corresponding points of the pre-operative 3-D patient-specific anatomical model and the point cloud, thus achieving accurate alignment.
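A minimal ICP sketch of the refined registration step: each iteration matches every cloud point to its nearest model point (brute force here; production code would use a spatial index) and then applies the closed-form rigid update that minimizes the matched-pair distances. Names and the fixed iteration count are illustrative:

```python
import numpy as np

def icp(cloud, model_pts, iters=30):
    """Iterative closest point: alternately (1) match each cloud point
    to its nearest model point and (2) solve the closed-form (SVD)
    rigid transform minimizing the matched-pair distances."""
    src = cloud.copy()
    r_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Correspondence: brute-force nearest neighbor in the model
        d2 = ((src[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
        matched = model_pts[d2.argmin(axis=1)]
        # Closed-form rigid update for the current correspondences
        sc, mc = src.mean(0), matched.mean(0)
        h = (src - sc).T @ (matched - mc)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = mc - r @ sc
        src = src @ r.T + t
        # Accumulate the overall transform
        r_total, t_total = r @ r_total, r @ t_total + t
    return r_total, t_total, src
```

As with any ICP variant, convergence to the correct alignment depends on a reasonable initial pose, which is the role of the landmark-based initial registration described above.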
The exemplary bone registration system and method may provide one or more advantages. By way of example, one advantage is the accurate alignment of the pre-operative 3-D patient-specific anatomical model with the intra-operative patient anatomy. Another advantage is improved surgical outcomes and reduced complications. And a further advantage is the ability to register pre-operative models with the patient's intra-operative anatomy where the anatomy has undergone significant changes or exhibits material deformities. The exemplary system and methods can be used in any surgical procedure where correlating the virtual realm with the real world is advantageous.
Non-Invasive Pinless Bone and Spine Tracking System and Method Using Ultrasound and Localization Technology
The present disclosure provides a system and associated non-invasive methods for tracking a patient's anatomy (including bones, such as the pelvis and vertebrae) during surgery using ultrasound and localization technology.
In orthopedic and spine surgeries, computer-assisted surgery is used to track and locate bones and the spine using bone arrays or optical trackers. These trackers are typically mounted to the patient's bone, which may cause longer recovery time and potential complications after surgery. Therefore, there is a need for a non-invasive method to track a patient's anatomy (including bones such as the pelvis and vertebrae) during surgery.
The present disclosure provides a pinless bone and spine tracking system and method using a custom-made ultrasound probe. An exemplary ultrasound probe may include an anatomical shape compatible with soft tissue around the target bone or spine. By way of example, the ultrasound probe may be shaped to engage an exterior of the patient anatomy in only a single position and orientation so that signals received by the ultrasound probe during surgical tracking have a fixed frame of reference. Specifically, the bone or spine surface is detected by the ultrasound probe and used to generate a 3-D point cloud representative of surface points on the bone or spine. These surface points are constantly tracked in real time (using one or more electromagnetic (EM) sensors, optical arrays, inertial measurement units, etc.) in order to provide information to a surgeon regarding the current position and orientation of the anatomy.
In exemplary form, an exemplary pinless bone and spine tracking system and methods may make use of a customized ultrasound probe having an anatomical shape compatible with soft tissue around the target bone or spine. The ultrasound probe includes an ultrasound transducer that detects the bone or spine surface by receiving ultrasound echoes and generating data representative of the echoes, which is used by the computer system and associated algorithms to generate a 3-D point cloud representation of the bone or spine in a static position. The ultrasound probe may be outfitted with a tracker in order to track the 3-D position and orientation of the ultrasound probe. Exemplary trackers include, without limitation, electromagnetic (EM) sensors, optical arrays, and inertial measurement units. After the 3-D point cloud is generated, the 3-D point cloud is registered to the patient's anatomy, as discussed above. Post registration, the ultrasound probe may be utilized to track a bone or vertebra by combining motion signals from the ultrasound probe and tissue depth measurements from echoes received by the ultrasound transducer, thereby providing real-time, non-invasive tracking of anatomy.
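Combining the tracked probe pose with an echo-derived tissue depth to produce a world-frame surface point can be sketched as follows. The beam direction convention and the 1540 m/s average soft-tissue speed of sound are illustrative assumptions, not specifics from the disclosure:

```python
import numpy as np

def depth_from_time_of_flight(t_seconds, c=1540.0):
    """Depth from pulse-echo time of flight; 1540 m/s is a commonly
    assumed average speed of sound in soft tissue. The factor of two
    accounts for the round trip."""
    return c * t_seconds / 2.0

def echo_to_world(probe_rotation, probe_position, echo_depth_m,
                  beam_dir=(0.0, 0.0, 1.0)):
    """Convert a tissue-depth measurement into a 3-D surface point in
    the tracker's world frame, given the tracked probe pose. Assumes
    the beam travels along the probe's local axis `beam_dir` starting
    at the probe origin."""
    p_local = echo_depth_m * np.asarray(beam_dir, dtype=float)
    return probe_rotation @ p_local + probe_position
```

Repeating this for each echo as the tracked probe moves yields the 3-D point cloud of bone or spine surface points described above.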
By way of example, an exemplary system using the pinless bone and spine tracking system may be used by placing the ultrasound probe on the skin of the patient proximate the target bone or vertebrae. Thereafter, the ultrasound probe may be repositioned in 3-D, while its 3-D position and orientation are tracked, in order to perform a scan of the patient's anatomy (which is preferably stationary). Those skilled in the art are familiar with ultrasound transducers and scans of a patient's anatomy and, accordingly, a detailed description of this aspect of the method is omitted in furtherance of brevity. As positional and orientation information concerning the ultrasound probe is fed to the computer during the scan, the ultrasound probe is likewise generating signal data indicative of echoes detected by the transducer(s), which allows the computer system to generate 3-D points corresponding to surface points for the anatomy in question, such as one or more bones (e.g., vertebrae). Those skilled in the art will realize that these 3-D points, when combined, are operative to form a 3-D point cloud, which point cloud is representative of the patient's real-world anatomy. Thereafter, the system operates to register the point cloud to a patient-specific anatomical model generated pre-operatively (and possibly supplemented intra-operatively as discussed herein) and optionally displays the patient-specific anatomical model on a graphical display accessible to a surgeon. Post registration, position and orientation data from the ultrasound probe are combined with signal data from the transducer(s) by the computer system to generate one or more 3-D points, which are correlated to the patient-specific anatomical model. In this manner, the 3-D points are associated with the patient-specific anatomical model in order to update, in real-time or near real-time, the position and orientation of the patient-specific anatomical model displayed on the display.
Not surprisingly, the exemplary system and methods provide a number of advantages over conventional surgical tracking systems. By way of example, the exemplary surgical tracking system is non-invasive, which lessens the potential complications and reduces patient recovery time compared to invasive surgical trackers. When using a custom shaped ultrasound probe, tracking of the anatomy in question can be simplified because the probe is configured to engage the patient's anatomy in a single position and orientation, thereby providing a fixed frame of reference for changes in 3-D position of the probe, as well as changes in 3-D position of the anatomy in question. Moreover, the exemplary system and methods are not specifically tied to any position and orientation tracking technology and can be used with any position and tracking technology including, without limitation, EM, IMU, and optical trackers.
Generally, apparatus associated with methods described herein may include computers and/or processors configured to perform such methods, as well as software and/or storage devices comprising or storing instructions configured to cause a computer or processor to perform such methods. In some example embodiments, some operations associated with some methods may be performed by two or more computers and/or processors, which may or may not be co-located. For example, some operations and/or methods may be performed by remote computers or servers, and the resulting outputs may be provided to other devices for preoperative, intraoperative, and/or postoperative use.
Although various example embodiments have been described herein in connection with specific anatomies, it will be understood that similar methods and apparatus may be utilized in connection with other anatomies, such as any joint. For example, various embodiments according to at least some aspects of the present disclosure may be used in connection with anatomical structures associated with shoulder joints, hip joints, knee joints, ankle joints, etc.
Following from the above description and invention summaries, it should be apparent to those of ordinary skill in the art that, while the methods and apparatuses herein described constitute example embodiments according to the present disclosure, it is to be understood that the scope of the disclosure contained herein is not limited to the above precise embodiments and that changes may be made without departing from the scope as defined by the following claims. Likewise, it is to be understood that it is not necessary to meet any or all of the identified advantages or objects disclosed herein in order to fall within the scope of the claims, since inherent and/or unforeseen advantages may exist even though they may not have been explicitly discussed herein.
This application claims the benefit of U.S. Provisional Patent Application No. 63/364,656, titled “METHODS AND APPARATUS FOR THREE-DIMENSIONAL RECONSTRUCTION USING MULTIPLE IMAGING MODALITIES,” filed May 13, 2022, which is incorporated by reference in its entirety.