Augmented reality (AR) and navigation systems have found application in surgical settings. For example, some surgical navigation systems use pre-operative imaging of the anatomy of the patient subject to a surgical intervention, such as computed axial tomography (CAT) imaging, magnetic resonance imaging (MRI), X-rays, etc. An AR surgical navigation system may be used to register or align the pre-operative imaging with a live, intra-operative view of the anatomy. The pre-operative imaging and the live imaging may be displayed to a medical provider such as a surgeon. In some examples, the pre-operative imaging may be overlaid on live intra-operative images of the anatomy of the patient to help the medical provider plan and/or execute a surgical intervention.
Surgical navigation systems are utilized to provide surgeons with assistance in identifying precise locations for surgical applications of devices, targeted therapies, instrument or implant placement, or complex procedural approaches. The benefit of surgical navigation is that it provides positional information that can be utilized to improve almost any surgical intervention. A challenge in current navigation systems is the reliance on pre-operative images to reconstruct the anatomical landscape as the basis for surgical planning. The pre-operative images are static representations of the anatomy and may in some cases not align with the anatomy at the time of surgery. Moreover, as the surgical intervention proceeds, changes in the anatomical landscape may occur that are not taken into account by a static pre-operative plan. Another challenge in current navigation systems is the potential for interference with the line of sight between the cameras and the markers they image, disrupting the referencing and ultimately the navigation of the procedure as a whole. Current systems have a large footprint within the operating room (OR) and require the surgeon to look away from the surgical field. Head-mounted systems, on the other hand, have no footprint other than the head-mounted display (HMD) and use optical cameras that are essentially aligned with the surgeon's point of view.
Augmented reality and mixed reality (AR/MR) have gained increased interest in the medical field in recent years. The use of AR/MR typically employs an HMD that superimposes content onto whatever the user sees. The location at which to superimpose an image can be determined by use of markers or trackers placed in the environment. This common approach to using AR/MR allows reference information in the superimposed content to be located at a point in the actual environment that the user may access during surgery. For example, if a pretreatment Computed Tomography (CT) scan of a patient is converted to a three-dimensional (3D) image and used to plan the procedure, both the 3D reconstructed image and the plan can be superimposed over the patient's actual knee. This would allow the surgeon to visually compare the pretreatment plan for placing an implant against the actual knee. The approach assumes the ability to register the superimposed image to the actual knee with enough precision to make the visual overlay clinically useful. If the superimposed image is overlaid on the actual anatomy with precision, it can be manipulated to adjust its position. The pretreatment plan can thus be moved in a variety of directions in a manner that allows for visualizing the impact of changing the location of the initial plan.
The current approach is thus primarily limited to superimposing plans created using imaging methods like CT, magnetic resonance, fluoroscopy, or X-ray. These imaging modalities result in a visual or digital model of the patient's pre-operative targeted anatomy that can be used for measurement of the environment for purposes of precisely locating an object. The superimposed information can be an accurate representation of areas of a patient's body. For example, a CT image is likely to reveal precise measurements of hard tissues like bone structures. Thus, a knee image reconstructed and superimposed on an actual knee should match closely if not exactly, provided the knee is exposed and all surrounding tissues have been removed. In practice, this represents a limitation for superimposing plans created by imaging methods: the exact tissue layers being displayed are rarely fully exposed. So the position of the overlay must be inferred relative to the actual location of the anatomy to be potentially useful.
Certain details are set forth herein to provide an understanding of described embodiments of technology. However, other examples may be practiced without various of these particular details. In some instances, well-known circuits, AR/VR technology, surgical operations, control signals, timing protocols, and/or software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.
Existing systems for use of AR in a surgical setting that involve registering a pre-operative image to live anatomy have drawbacks that limit their use. For example, pre-operative imaging or a plan based thereon becomes obsolete the moment a surgeon makes the first cut or incision, thereby changing the anatomy of the patient such that the anatomy no longer matches the pre-operative imaging. For example, in a total knee replacement, a surgeon may resect portions of the condyles and/or the patellar surface of the femur and/or the condyles of the tibia. Once the surgeon cuts the bone, the anatomy of the femur and tibia no longer matches the pre-operative imaging, and any plan based on the pre-operative imaging is no longer relevant to additional resections made to the anatomy. In addition, pre-operative plans need to maintain constant registration to the anatomy to accurately reflect the position of the plan relative to the live anatomy. Movement of the live anatomy that is not captured and corrected for can produce false positioning of the plan relative to the targeted anatomy. Improved methods and systems for intra-operative planning that can account for changes in the anatomy during the progression of a surgical intervention are needed.
The challenge of starting with a reconstructed image as the basis for overlaying a plan on anatomy is not just the overlay but the static nature of the plan itself, in contrast to the dynamic environment that surgery is. Surgeons have used pretreatment or intra-operative imaging as a point-in-time snapshot of anatomy that is constantly changing during surgery. Reliance on a static image for planning is therefore inherently limited by the extent of the changes made during surgery. If a surgical plan calls for resection or reconstruction of an anatomical site, once the surgeon makes the first cut, the imaging used to plan that cut and subsequent ones may be rendered obsolete by the change in the anatomy resulting from the intervention.
There is a need for an ability to dynamically adjust a plan and place surgical instrumentation in a manner that adapts to the surgical environment throughout more (or all) of a surgical procedure. Moreover, there is a need to develop systems and methods for intra-operative planning that do not rely, or place less reliance, on pre-operative imaging.
Examples of systems and methods are described herein that utilize a headset for measurement and planning of or for a surgical environment. Examples of systems and methods described herein may further utilize a pointer in conjunction with the headset. The headset may receive position information for anatomical landmarks based on a position of the pointer. The surgical plan may be generated based on the position information. Information to guide a surgical device based on the surgical plan may be displayed using the headset.
The measurement and planning of a surgical environment may use a defined coordinate system for the purposes of locating and navigating optimal placement of surgical instruments without the need to reference or superimpose imaging methods like CT, X-ray, MRI, and fluoroscopy. Examples may utilize a headset or other system that can image the surgical environment with one or more of a variety of camera types. The headset can also employ a number of sensors, including depth sensors, for measurements that may be useful in understanding the position of the anatomy in the environment. The resulting camera images and measurements can be utilized to form a surgical plan and/or a reconstruction of the actual anatomy without the aid of additional imaging modalities. Markers affixed to the anatomy or elsewhere in the environment can be used in conjunction with the headset to create a coordinate system. For example, fiducials may be attached to each marker, and the coordinate system may be created based in part on the fiducials. The fiducials may, for example, be arranged in such a manner that they define a plane, and therefore a coordinate system. A map of the anatomy and/or a surgical plan may accordingly be generated and displayed in the coordinate system defined by the markers, and aligned with landmarks identified by the pointer.
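The plane-based coordinate system described above can be sketched in code. The following is a minimal illustration, not drawn from the source, of how three non-collinear fiducial positions may define an orthonormal coordinate frame (origin plus x, y, and z axes); all function and variable names are hypothetical.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def marker_frame(f0, f1, f2):
    """Build an orthonormal coordinate frame from three non-collinear
    fiducial positions: origin at f0, x-axis toward f1, z-axis normal
    to the plane the fiducials define."""
    x = norm(sub(f1, f0))
    z = norm(cross(x, sub(f2, f0)))
    y = cross(z, x)  # completes a right-handed frame
    return f0, (x, y, z)
```

Any landmark position can then be expressed in this marker-defined frame, so the map and plan move with the marker rather than with the headset.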
To obtain position information for landmarks, the headset may track a position of a pointer relative to the marker. The movement of the pointer may be used to identify a landmark. For example, the headset may recognize that the pointer has slowed, stopped, contacted, or otherwise indicated a particular landmark. The headset may accordingly obtain the position of the landmark based on the position of the pointer when the landmark is indicated. An intra-operative plan can be generated based on the position of the landmark. Surgical guidance using the headset may be provided based on the intra-operative plan.
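The "slowed or stopped" pointer indication can be sketched as a dwell check over recent tip samples. The thresholds, names, and sampling scheme below are illustrative assumptions, not values from the source.

```python
import math

def dwell_detected(tip_positions, timestamps, window_s=0.5, max_travel_mm=1.5):
    """Return True when the pointer tip has stayed within max_travel_mm of
    its latest position for at least window_s seconds -- a sketch of the
    'slowed or stopped' landmark indication. Thresholds are hypothetical."""
    if not tip_positions:
        return False
    latest_t = timestamps[-1]
    latest_p = tip_positions[-1]
    for p, t in zip(reversed(tip_positions), reversed(timestamps)):
        if math.dist(p, latest_p) > max_travel_mm:
            return False  # pointer moved too much within the window
        if latest_t - t >= window_s:
            return True   # tip stayed put for the whole window
    return False          # not enough history collected yet
```

When the dwell fires, the tip position at that moment would be recorded as the landmark location in the marker coordinate system.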
An advantage of examples of this approach over methods and systems utilizing superimposed pretreatment plans may be the ability to plan based on anatomy updated during the procedure, rather than on the static pretreatment environment displayed. Note also the ability for a surgeon or other practitioner to indicate landmarks using the headset. The headset may advantageously allow the individual performing the landmarking to move their viewpoint to access the landmark location in a desired manner. By positioning both a fixed marker and the pointer indicating the landmark in a field of view of a camera of the headset, the location information of the landmark in the coordinate system of the marker may be obtained by the headset. The surgeon or other practitioner is able to freely move the camera about (e.g., by moving their body, and therefore the headset, to obtain this view). Accordingly, in some examples, the use of the headset and markers to identify relevant anatomy may improve accuracy because the anatomy identification (e.g., landmarking) may be performed from the surgeon's point of view through the headset, allowing the surgeon to move around, reduce occluding objects, and more accurately identify the anatomy. Note also that multiple landmarks may be obtained, and may be obtained from different views by the surgeon. The surgeon may be in one position when identifying a first landmark, and another position when identifying another landmark, for example. In some examples, the use of the headset and markers to identify anatomy relevant to the surgical plan may reduce or eliminate the need to rely on pre-operative imaging, which may improve accuracy as well as reduce the burden of preparing for surgery.
Examples of systems and methods are described herein that utilize a marker that includes an insertion member in conjunction with a resection guide. The resection guide may have a slot that may receive either the insertion member or a resection device. Accordingly, a resection guide may include a marker with one or more fiducials. Headsets described herein may accordingly determine a position of the resection guide in an environment and may provide guidance for placement of the resection guide. The resection guide may be placed in accordance with an intra-operative plan, the marker removed, and a resection made using a resection device moving through the slot.
Examples of systems and methods are described herein that utilize a pointer in conjunction with fiducials associated with a marker affixed to at least one of a femur or a tibia to detect positions of anatomical features in proximity to a knee. A planned resection plane for a body part proximate to a knee may be generated based on the location(s) of the anatomical features. An actual resection plane may be determined based on the headset's view of a resection guide having a marker inserted in the guide. Thus, surgical guidance using the headset may be provided to position the resection guide in a manner to align the actual resection plane with the planned resection plane.
Examples of methods and systems described herein may have particular benefits over existing systems in that a model of the patient's anatomy and/or a surgical plan may be updated in the surgical theater. For example, once a resection is made, the plane of the completed resection can be measured relative to the planned resection plane or line. Even if a resection is made with some deviation from the plan, the remainder of the planned resections can be adjusted accordingly based on the position of the completed resection plane or line. In existing systems, an inaccurate cut could propagate throughout the anatomy as one resection after another is based on an uncorrected model of the patient's anatomy.
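Measuring a completed resection against the planned one reduces to comparing two planes. The sketch below is illustrative only, with hypothetical names and a point-plus-normal plane representation assumed for the example; it returns the angular deviation between the planes and the offset of the actual cut from the planned plane.

```python
import math

def plane_deviation(planned_normal, planned_point, actual_normal, actual_point):
    """Compare a completed resection plane against the planned one.
    Returns (angular deviation in degrees, signed offset in mm of a point
    on the actual cut from the planned plane). Hypothetical sketch."""
    def unit(v):
        m = math.sqrt(sum(x * x for x in v))
        return tuple(x / m for x in v)
    n1, n2 = unit(planned_normal), unit(actual_normal)
    # clamp the dot product to guard against floating-point drift
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    angle_deg = math.degrees(math.acos(cosang))
    # signed distance of the actual-cut point from the planned plane
    offset_mm = sum(n * (p - q) for n, p, q in zip(n1, actual_point, planned_point))
    return angle_deg, offset_mm
```

The remaining planned resections could then be re-based on the measured plane rather than on the original, now-inaccurate model.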
Examples of systems described herein accordingly include a headset, such as the headset 102. The headset 102 may be implemented using an augmented reality headset, such as, but not limited to, a MICROSOFT HOLOLENS device, a META PLATFORM OCULUS device, and/or a MAGIC LEAP device. The headset 102 may be worn by a medical provider, such as the medical provider 116. The medical provider 116 may be a surgeon; however, in some examples the medical provider 116 may be a surgical assistant, physician's assistant, anesthesiologist, registered nurse, or other person involved in the surgical field. The headset 102 may image a field of view 120 and present a virtual environment 114 to the medical provider 116. The headset 102 may accordingly include one or more cameras, as discussed with respect to the vision system 112.
Examples of systems described herein may include one or more markers, such as the marker 110. The markers may be placed in a fixed position on or proximate a patient, such as the anatomy of the patient 104. The patient 104 may be a human, a dummy and/or model (e.g., in the case of training systems), an animal, and/or any other surgical subject. The marker 110 may be positioned at a location particular to a procedure to be performed, in some examples. While a single marker 110 is shown in
The markers may have one or more fiducials, such as the fiducials 118. The fiducials 118 may be positioned and designed to allow the headset 102 to determine a coordinate system of the marker 110 and/or determine a position of the marker 110 in the coordinate system. For example, the fiducials 118 may be optical targets that may reflect a particular wavelength or band of wavelengths of light. The fiducials 118 may be arranged in known locations (e.g., a predetermined pattern) about the marker 110. For example, four fiducials 118 may be positioned at respective quadrants around an end of the marker 110. Additionally, each fiducial 118 may be reflective and/or otherwise patterned or shaded for identification by the headset 102. In the example shown, the fiducials 118 are spherical and/or circular. In other examples, the fiducials 118 may be any suitable shape which allows for detection and localization of the marker 110. In this manner, by identifying the positions of the fiducials 118, the headset 102 may determine the position of the marker 110 attached to the patient 104, in a sensor (e.g., camera) coordinate system, such as a simultaneous localization and mapping (SLAM) coordinate system, relative to the headset 102 from a single image. In some examples, a coordinate system may be defined by a plane created or indicated by the fiducials. The marker 110 may include one or more fixtures such as a screw, clamp, adhesive, or other fastener adapted to attach the marker 110 to the patient 104 (e.g., to a bone or other components of the patient 104). The distances of the fiducials 118 with respect to a body of the marker 110 may be known.
Generally, systems described herein may utilize at least one or more markers. The markers may be used to establish the camera coordinate system defined by the marker positions. For example, the headset 102 may be used to identify the positions (e.g., locations and orientations) of the markers, including the marker 110 of
In some examples, the headset 102 may simultaneously build a map of the surroundings of the medical provider 116 in an environmental coordinate system. The headset 102 may track itself in six degrees of freedom within the map. The map may be used for rendering holograms near the medical provider 116 or other objects of interest. When two markers are present, the position and orientation of each marker may be used to compute a transformation between the camera and environmental coordinate systems for the time at which the single frame including the markers was captured. In addition, the headset 102 may provide a transformation from the camera coordinate system to the environmental coordinate system for each frame while tracking of the headset 102 is active.
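Chaining the camera and environmental coordinate systems amounts to composing rigid transforms. A minimal sketch, assuming rotation-matrix-plus-translation transforms (all names hypothetical):

```python
def apply(T, p):
    """Apply rigid transform T = (R, t) -- a 3x3 rotation given as row
    tuples plus a translation -- to a 3D point p."""
    R, t = T
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def compose(T_ab, T_bc):
    """Compose transforms so compose(T_ab, T_bc) maps coordinate system c
    into coordinate system a (a <- b <- c), e.g., chaining a marker->camera
    pose with a camera->environment pose. Sketch only."""
    (Rab, tab), (Rbc, tbc) = T_ab, T_bc
    R = tuple(tuple(sum(Rab[i][k] * Rbc[k][j] for k in range(3))
                    for j in range(3))
              for i in range(3))
    t = apply(T_ab, tbc)
    return R, t
```

With a per-frame camera-to-environment transform, any landmark measured in the camera coordinate system can be carried into the environment map, and vice versa.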
In some examples, a medical provider may make one or more incisions in the skin, fascia, or other soft tissues of a patient to reveal bony structures of the patient's anatomy. The medical provider may affix one or more markers, such as a marker 110, to the bony structures. For example, the medical provider may affix one marker to a tibia of the patient and another to the femur of the patient. For example, a tibial marker affixed to a tibia may be used to identify and/or locate a location of one or more tibial landmarks in the map. For example, a femoral marker affixed to a femur may be used to identify and/or locate one or more femoral landmarks in the map. Estimating a location of a certain portion of a body part, such as a femoral head center (hip center), in conjunction with the marker affixed to the body part, such as the femur, may use the map built by the headset 102. By tracking the position and/or orientation of a marking instrument (e.g., a pointer) in the real environment, the medical provider can have the headset generate a model of the anatomy and may update that model intra-operatively as the anatomy changes during the surgery (e.g., due to resection of bones). The marking instrument may be tracked in the environmental coordinate system established by the markers fixed to the bony structure. For example, the medical provider may have the marking instrument point to the body part of the patient's anatomy and define features thereof. In the non-limiting example of a knee replacement, a medical provider may use a marking instrument, such as a pointer, to indicate landmarks, such as features of the femur, tibia, fibula, hip, ankle, etc. In some examples, responsive to slowed or stopped motion of the marking instrument for a predetermined period, the headset may detect identification of the landmarks, and may determine positions of the landmarks in the coordinate system.
The headset may automatically generate a model of the anatomy, based on determining a position of the landmark in the coordinate system, and generate an intra-operative plan based on the positions of the landmarks. As the medical provider resects bone, the anatomy of the patient changes and the surgeon can easily update the model intra-operatively by touching a newly formed feature of the bone. The surgeon then can proceed with the surgical intervention, updating the model as needed.
Note that, in examples described herein, the medical provider may mark and/or indicate features which may or may not be in an initial field of view of the headset. For example, a medical provider may be prompted to indicate a feature which is occluded from the provider's headset's current field of view. The provider may move their head such that the headset's field of view changes to include the indicated feature, and use a marking instrument (e.g., a pointer) to indicate the feature. In this manner, a greater number of features may be available for collection by the medical provider than if a fixed camera system had been used, with occluded areas unavailable. Moreover, the medical provider may move their body and/or head such that the headset is within a particular distance and/or range of the desired feature to be indicated (e.g., within 6 inches to 2 feet in some examples). This may be closer than is achievable with a fixed camera system having a view of an entire patient or operating room. Accordingly, by placing the headset in an appropriate position for each feature to be indicated, accuracy of a position determination for the feature may generally be improved. For example, a medical provider may position themselves such that a first feature is centered or positioned in some other location of their field of view, and then indicate the first feature using a marking instrument. The medical provider may move their body and/or head to another position such that a next feature is centered or positioned in some other location of their field of view, and then indicate the second feature using a marking instrument. By changing the field of view for particular feature indications, accuracy may be improved and/or it may be possible to mark otherwise occluded features.
Examples of systems described herein may include one or more pointers, such as the pointer 108. The pointer 108 may be a device which may include a stylus 122 having a tip 124. One or more fiducials 126 may be positioned in known locations (e.g., a predetermined pattern) about the stylus 122. In this manner, the pointer 108 may be trackable by the headset 102. The fiducials 126 may be analogous to the fiducials 118 and may be optical targets trackable by the headset 102. The pointer 108 may include a stylus, pointer, or other similar implement that can be easily grasped by the medical provider 116 and that includes a tip 124 that may function as a pointer marker, suitable to precisely indicate one or more anatomical features. The fiducials 126 may be affixed to the stylus 122 with one or more arms such that the fiducials 126 are spaced from one another in a known pattern. A distance of each of the fiducials 126 from the tip 124 of the stylus 122 may be known. In the example of
The systems disclosed herein may track a position of a fiducial using one or more cameras of the vision system 112. For example, the headset 102 may use a depth camera to first estimate a pose of the marker 110 or the pointer 108 and then use a greyscale or other image from the depth camera or another camera to more accurately resolve the pose of the marker 110 or pointer 108. See, for example, the description relating to
The head-mounted display 202 may be an augmented reality device, such as a MICROSOFT HOLOLENS device, a META PLATFORM OCULUS device, and/or a MAGIC LEAP device. Accordingly, the head-mounted display 202 may allow a wearer to view a field of view in a scene in an environment around the wearer, and the head-mounted display 202 may display various information and/or objects in the scene. While described as a head-mounted display, in some examples, other augmented reality devices may additionally or instead be used, such as a mobile or cellular telephone (e.g., a smartphone), a tablet, a watch, or other electronic devices. While described as a head-mounted display, in some examples the head-mounted display 202 may be worn or carried by another part of a body.
Head-mounted displays described herein may include one or more depth cameras, such as depth camera 204. The depth camera 204 may be implemented using a time-of-flight camera or sensor which may provide depth information to portions of a scene based on a measurement of a time for a round trip of a light pulse to reflect off an object in the scene and return to the depth camera 204. In some examples, the depth camera (e.g., a time-of-flight camera) 204 may be implemented using light detection and ranging (LIDAR) or continuous wave technology. The depth camera 204 may have a viewing angle that changes with the view of the wearer of the head-mounted display 202. In some examples, the wearer of the head-mounted display 202 (e.g., the medical provider 116 of
Head-mounted displays described herein may include one or more illuminators, such as illuminator(s) 220. The illuminator(s) 220 may emit generally any wavelength of light, or combinations of wavelengths. In some examples, one or more illuminator(s) 220 may emit infrared light. The light emitted by one or more of the illuminator(s) 220 may be used by the depth camera 204 to obtain depth information (e.g., to measure a time for light emitted by one or more illuminator(s) 220 to reflect off an object in the scene). In some examples, one or more illuminator(s) 220 may be implemented using incoherent light sources such as light emitting diodes (LEDs). In some examples, one or more illuminator(s) 220 may be implemented using coherent light sources such as laser emitters and/or laser diodes.
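The round-trip principle behind the time-of-flight depth measurement described above reduces to a one-line calculation: the emitted pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch with hypothetical names:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_s):
    """Distance implied by a time-of-flight measurement: the light pulse
    travels out and back, so the one-way distance is c * t / 2."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0
```

A 10-nanosecond round trip, for example, corresponds to roughly 1.5 meters, which illustrates the timing precision such sensors require at surgical working distances.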
Head-mounted displays described herein may include one or more additional cameras, such as camera(s) 206. The camera(s) 206 may include, for example, one or more color cameras, video cameras, greyscale cameras, or the like. In some examples, head-mounted displays described herein may include one or more additional sensors including, but not limited to, one or more accelerometers, gyroscopes, positional sensors, temperature sensors, wavelength sensors, or combinations thereof.
Head-mounted displays described herein may include one or more displays, such as the display 218. The display 218 may project an image of information and/or objects onto the scene viewed by the wearer of the head-mounted display 202. In this manner, selections may be presented to the wearer of the head-mounted display 202 and/or surgical plan information and/or guidance may be viewed by the wearer of the head-mounted display 202.
Head-mounted displays described herein may receive input from a wearer of the head-mounted display, such as the head-mounted display 202. Input may be received, for example, by sensing a direction of gaze of the wearer. In some examples, input may be received by detecting a position of a portion of the wearer (e.g., a finger may be used to select a displayed selection and/or a pointer, such as the pointer 108 of
In examples described herein, the head-mounted displays may be used during and in preparation for and/or follow-up to surgical operations. The head-mounted displays may be used to select surgical operations, to gather tracking information, to formulate operative plans, and to provide guidance based on those plans. The operative plans may be monitored and/or adjusted during the surgical operation based on anatomical changes identified during the surgical operation. In this manner, it may not be necessary to utilize pre-operative images and/or to register pre-operative images to the anatomy viewed by the medical provider during the surgical operation. Rather, the information about the anatomy used to generate the operative plan, provide surgical guidance, and/or modify the operative plan may be gathered from views of the anatomy taken by the head-mounted display of the actual anatomy. Note also that the field of view can be selected and adjusted by the medical provider wearing the headset to obtain accurate anatomical information.
Accordingly, head-mounted displays described herein may be provided with hardware, firmware, and/or software for identifying anatomy, tracking markers, fiducials, and/or pointers or other objects, developing and/or modifying intra-operative plans, and/or providing surgical guidance.
In the example of
Examples of head-mounted displays described herein may be utilized to request anatomical identifications. For example, the head-mounted display 202 of
The medical provider may then identify the anatomy as prompted. For example, the medical provider 116 of
Examples of anatomical features which may be requested by the head-mounted display 202 and/or identified by a medical provider include, but are not limited to, hip center (e.g., collect hip center via hip rotation), knee center (e.g., collect femoral knee center, such as the center of the femoral canal), ankle center (e.g., collect medial and lateral malleoli), and/or tibia landmarks (e.g., collect five tibial proximal landmarks, which may be in addition to malleoli, and visualize tibial axis independent of femur, such as not connected at femoral knee center). Other examples include identification of a sagittal plane (e.g., medical provider to outline and/or paint Whiteside's line to estimate sagittal plane) and/or identification of condyle extremes (e.g., medical provider to paint and/or outline distal condyle surfaces).
Examples of head-mounted displays described herein may locate and/or track objects, such as objects having fiducials. Accordingly, the head-mounted display 202 may include the executable instructions for tracking position 222. The executable instructions for tracking position 222 may cause the head-mounted display 202 to locate and/or track a position of one or more pointers and/or markers. For example, the location of pointer 108 may be tracked by the head-mounted display 202. The head-mounted display 202 may track the location of the pointer 108 relative to, for example, the marker 110. Accordingly, when the medical provider indicates a location of requested anatomy with the pointer 108, the head-mounted display 202 may identify the position of that anatomy, such as by identifying a position of that anatomy relative to the marker 110 and/or any other markers. In this manner, the head-mounted display 202 may identify a location of the requested anatomy in a camera coordinate system, e.g., a SLAM coordinate system, relative to the vision system. In some examples, the marker 110 itself may be positioned at a location of particular anatomy relevant for a surgical operation. Accordingly, the position of the marker 110 may itself be used to develop intra-operative plans. Position may be tracked using the depth camera 204, light emitted from the illuminator(s) 220, and the fiducials 118 and/or fiducials 126 on the pointer 108 and/or marker 110. The position and orientation of the pointer 108 and the marker 110 may be tracked using computer vision methods such as "perspective-n-point" (PnP) techniques. The head-mounted display 202 may create a relative coordinate system between the pointer 108 and the marker 110 and/or other markers used in the system 100. Thus, the head-mounted display 202 can accurately determine the locations of features of the anatomy of the patient.
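Once the fiducial positions are resolved, locating the stylus tip uses the known pointer geometry. The sketch below is a simplified stand-in for the PnP-based tracking described above: it assumes three fiducials whose positions are already known in the camera coordinate system and a tip offset calibrated in the fiducial-defined frame; all names and calibration values are hypothetical.

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _unit(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def tip_position(f0, f1, f2, tip_local):
    """Locate the stylus tip from three tracked fiducial positions.
    tip_local is the tip's known coordinates in the frame defined by the
    fiducials (origin f0, x toward f1, z normal to the fiducial plane).
    The calibration offset would come from the pointer's manufacturer."""
    x = _unit(_sub(f1, f0))
    z = _unit(_cross(x, _sub(f2, f0)))
    y = _cross(z, x)
    return tuple(f0[i] + tip_local[0]*x[i] + tip_local[1]*y[i] + tip_local[2]*z[i]
                 for i in range(3))
```

Because the tip offset is fixed relative to the fiducials, the tip can be located even when it is buried in tissue and not itself visible to the cameras.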
Accordingly, based on a selected and/or identified or otherwise known procedure, the head-mounted display 202 may prompt the medical provider 116 to touch one or more anatomical features with the tip 124 of the pointer 108. As the medical provider 116 touches the features, the head-mounted display 202 may register the feature in a model of the patient's anatomy. When the medical provider 116 has placed the tip 124 on, adjacent to, and/or proximate the prompted anatomical feature, the medical provider 116 may indicate to the head-mounted display 202 that the tip 124 is on the feature. The indication may be provided, for example, using a particular gesture, by holding the tip 124 in a single place for a particular time, by gazing in a particular direction, and/or by pressing a button, such as a button or other indicator on the stylus 122, or other tactile input. Other input mechanisms may be used, including auditory inputs, such as speaking a word to indicate the anatomy has been identified. The head-mounted display 202 may then record the feature in a relative coordinate plane between the pointer 108 and the one or more markers 110. In some examples, the feature may be a point on the anatomy, such as the lateral or medial condyle of the femur, etc. In some examples, the feature may be a line (e.g., Whiteside's line, a line that runs from the center of the intercondylar notch to the deepest point of the trochlear groove anteriorly). In some examples, the feature may be a surface or area of the anatomy. In the examples where the feature is a line, area, or surface, the medical provider 116 may move the tip 124 along the feature (e.g., paint the feature) and/or outline the feature, and the depth camera 204 may record the locations of the tip 124 to generate a model of the position and orientation of the feature.
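When a feature is painted as a line (e.g., Whiteside's line), the recorded tip locations can be reduced to a best-fit line. A sketch, assuming a centroid-plus-dominant-direction fit via power iteration on the point covariance (all names hypothetical):

```python
import math

def fit_line(points, iters=50):
    """Best-fit 3D line through a set of 'painted' tip positions: the
    centroid plus the dominant direction of the point cloud, found by
    power iteration on the 3x3 covariance matrix. Illustrative sketch."""
    n = len(points)
    c = tuple(sum(p[i] for p in points) / n for i in range(3))
    centered = [tuple(p[i] - c[i] for i in range(3)) for p in points]
    # 3x3 covariance matrix of the centered points
    cov = [[sum(q[i] * q[j] for q in centered) / n for j in range(3)]
           for i in range(3)]
    v = (1.0, 1.0, 1.0)                # arbitrary starting vector
    for _ in range(iters):             # power iteration -> dominant axis
        w = tuple(sum(cov[i][j] * v[j] for j in range(3)) for i in range(3))
        m = math.sqrt(sum(x * x for x in w)) or 1.0
        v = tuple(x / m for x in w)
    return c, v                        # a point on the line, its direction
```

Painted surfaces could analogously be reduced to a best-fit plane, which is what a sagittal-plane estimate from Whiteside's line ultimately feeds into.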
In some examples, the head-mounted display 202 may provide marker visibility verification. For example, the head-mounted display 202 may present a visualization of the tracked marker, such as the marker 110, while a medical provider is pointing to requested anatomy. A visualization of the tracked pointer, such as the pointer 108, may also be provided. The medical provider may be prompted by the head-mounted display 202 to verify that the one or more markers, such as the marker 110, remain visible while the pointer 108 is being used to identify anatomy.
Examples of head-mounted displays described herein may develop intra-operative plans. Accordingly, the head-mounted display 202 may include the executable instructions for developing intra-operative plan 214. The executable instructions for developing intra-operative plan 214 may cause the head-mounted display 202 to make certain calculations for a particular surgical procedure based on the identified locations of requested anatomy. For example, one or more planes, lines, volumes, or other relevant structures may be calculated based on the location of the requested anatomy. The intra-operative plan may include one or more positions for surgical instrument(s), cut lines, locations for positioning instruments or guides, or other planning actions.
Examples of calculations that the head-mounted display 202 may make in accordance with the executable instructions for developing intra-operative plan 214 include calculation of femoral implant placement (e.g., to adjust translation/rotation of the femoral component while visualizing resection planes), femur/tibia implant metrics (e.g., metrics for each individual implant, such as angles/distances relative to landmarks), gap metrics (e.g., visualize flexion/extension gap or implant articulation surface gap), distal femur resection (e.g., visual guidance of a resection guide to a planned distal resection), flexion angle (e.g., a measure of how much the knee is flexed, which should start at 0 degrees in full extension), and metrics beyond just distal resection depth, e.g., rotation angles, etc., during femoral implant manipulation. Other calculations and plans may be used in other examples.
In some examples, following a medical provider's identification of tibial proximal landmarks, which may be in addition to malleoli, the head-mounted display 202 may calculate and display through the display 218 a tibial axis which may be independent of the femur, and may not be connected at the femoral knee center.
In some examples, metrics for implants, such as femur and/or tibia implants, may be calculated and/or displayed, such as various angles and/or distances to place the implant relative to one or more identified anatomical features (e.g., landmarks).
In some examples, gap metrics may be calculated and/or visualized through the display 218. For example, a flexion and/or extension gap may be calculated and/or visualized, and/or an implant articulation surface gap.
Examples of head-mounted displays described herein may provide surgical guidance based on intra-operative plans. For example, the head-mounted display 202 may include the executable instructions for guiding surgical operations 216. The executable instructions for guiding surgical operations 216 may cause the head-mounted display 202 to display guidance for a medical professional conducting a surgical operation. For example, a location of a cut line and/or a location to place a guidance device or surgical instrument may be displayed by the display 218 overlaid on the surgical scene. As a surgeon places surgical instruments, guidance devices or other tools, the display 218 may display any of a variety of information which may aid in the accurate placement of such tools and devices. For example, the guidance devices and/or other tools may also be tracked (e.g., may be coupled to one or more fiducials which allow their position to be determined by the head-mounted display 202). A distance between the current position of the tool and a desired position of the tool may be displayed, and guidance as to in which direction or how to move the tool may be displayed by the display 218. Additionally, the executable instructions for guiding surgical operations 216 may further cause the head-mounted display 202 to provide the guidance for the medical professional conducting the surgical operation using the speaker (not shown).
In some examples, 4-in-1 resection guidance may be provided, such as by using the display 218 to display planar rotation and/or translation error within a distal resection plane.
In some examples, tibial resection guidance may be provided, such as by using the display 218 to display resection plane angular and depth error, analogous in some examples to distal femoral resection.
Examples of head-mounted displays described herein may modify intra-operative plans based on actions occurring during a surgical procedure. For example, after a particular step in a surgical operation (e.g., a cut), the executable instructions for prompting anatomical identification 212 may prompt a medical provider to identify additional anatomical features (e.g., an end of the cut, a plane exposed by the cut, etc.). The new anatomical feature position information may be used by the executable instructions for developing intra-operative plan 214 to modify and/or develop an additional portion of the intra-operative plan based on the newly located anatomical feature.
In some examples, the head-mounted display 202 may support a femoral workflow. In an example femoral workflow, the femoral workflow may optionally be selected from among a plurality of supported workflows—e.g., by the medical provider 116 using the head-mounted display 202. The executable instructions for prompting anatomical identification 212 may prompt the medical provider 116 to use the pointer 108 to identify features such as medial/lateral epicondyles, Whiteside's line, anterior cortex (e.g., for notch checking), posterior or distal medial/lateral condyle surfaces, etc. The executable instructions for developing intra-operative plan 214 may calculate signed medial/lateral distal condyle distances, signed medial/lateral posterior condyle distances, signed anterior cortex distance, varus/valgus alignment, flexion alignment, axial rotation, and/or axial plane translation. These metrics may be visualized and/or used to guide surgical operations regarding the femur.
In some examples, head-mounted display 202 may support a tibia workflow. In an example tibia workflow, the tibia workflow may optionally be selected from among a plurality of supported workflows—e.g., by the medical provider 116 using the head-mounted display 202. The executable instructions for prompting anatomical identification 212 may prompt the medical provider 116 to use the pointer 108 to identify the medial/lateral plateau base (e.g., through outlining and/or painting), the canal center, anterior tubercle, and/or PCL attachment center. Based on the identified anatomy, the executable instructions for developing intra-operative plan 214 may calculate varus/valgus alignment, posterior slope, medial/lateral plateau depth, axial rotation, and/or axial plane translation.
In the example of
In this manner, the headset 308 may determine a location of Whiteside's line and/or an estimate of the sagittal plane in a coordinate system tracked by the headset 308. The location may be relative to locations of the marker 322 and/or marker 330. Generally, systems described herein may utilize at least two markers in order to establish a relative coordinate system defined by the markers. Positions and/or orientations of pointers, fiducials, or other objects may be determined by the headset or other systems described herein in the relative coordinate system defined by the markers. The positions and orientations of the markers in each frame are used to compute a transformation between the relative coordinate system and an environmental coordinate system at the time associated with the captured frame.
The following is a non-limiting example of anatomical features or clinical landmarks that may be identified, along with example reasons for the identification, in a total knee replacement procedure. In other procedures on the knee or other parts of the body, other landmarks relevant to the surgical intervention may be identified. An advantage of the disclosed methods and systems is to capture landmarks easily and sufficiently accurately for their use in providing surgical guidance. Based on the location of the landmarks, surgical guidance may be provided and intra-operative plans developed and/or modified during a surgical procedure. There may not be a need, accordingly, to register or utilize pre-operative imaging for surgical guidance or localization of any features.
In the example of a total knee replacement, a medical provider may be prompted to provide femoral landmarking. Femoral landmarking may be used to gather information regarding the location and/or orientation of features that may be used in guiding the total knee replacement procedure. During a femoral landmarking procedure, a medical provider may be prompted (e.g., using a display of the headset 102 of
Tibial landmarking may also be performed during total knee replacement procedures described herein. During a tibial landmarking procedure, a medical provider may be prompted (e.g., using a display of the headset 102 of
Accordingly, systems described herein (e.g., using the headset 102 of
Systems described herein may perform any of a variety of operations and/or guidance based on the identity and location of the collected features. In the example of a total knee operation, the identified features may be used to size the implant construct, such as to size the femoral component, size the tibial component, and/or size the tibial insert thickness. In this manner, systems described herein may provide for intra-operative computing of the implant size based on location information collected during the procedure.
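The intra-operative sizing described above can be illustrated as a nearest-neighbor lookup from a measured dimension into a size table. Both the function name and the size table below are hypothetical, not taken from any actual implant system.

```python
def pick_implant_size(measured_mm, sizes=((1, 52.0), (2, 56.0), (3, 60.0), (4, 64.0))):
    """Pick the implant size whose nominal dimension (mm) is closest to the
    dimension measured from intra-operatively collected landmarks.

    sizes: (size_id, nominal_dimension_mm) pairs; the values are illustrative.
    """
    return min(sizes, key=lambda s: abs(s[1] - measured_mm))[0]
```

For example, a femoral anterior/posterior dimension computed from the anterior cortex and posterior condyle landmarks could be fed directly into such a lookup.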
Another example in the context of a total knee replacement is that systems described herein may identify and/or correct limb deformity. For example, the system may identify a center of the femoral head, identify the mechanical axis of the limb, and/or identify the deformity using location information for anatomical features indicated by the medical provider. In this manner, systems described herein may provide guidance to the medical provider to correct the deformity back to the mechanical (or other desired) axis.
Another example in the context of a total knee replacement procedure is that systems described herein may define femoral and/or tibial resection angles. Systems may define these angles, for example, based on the inputs provided in the above-described identification and/or correction of limb deformity (e.g., center of the femoral head, mechanical axis of the limb). Systems (e.g., the headset 102 of
Another example in the context of a total knee replacement procedure is that femoral and/or tibial resection depths may be defined using systems described herein. For example, systems may define a femoral component thickness, tibial component thickness, and/or tibial insert thickness. Desired tibial component position and rotation may also be defined.
In this manner, medical providers may provide location information to systems described herein by indicating particular anatomical features using a tracked pointer. The systems may use the location information to calculate various parameters relevant to a procedure—e.g., limb deformity, correction, angles, and/or depths in the example of total knee replacement. Systems may then allow for the medical provider to make real-time adjustments to resections (e.g., distal femoral, proximal tibial and posterior condylar resections) to accommodate for soft tissue (e.g., collateral ligament) behavior. Systems may then provide guidance to position a final implant construct using the location information and/or calculated parameters.
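The alignment calculation mentioned above can be illustrated as the angle between the femoral mechanical axis (hip center to knee center) and the tibial mechanical axis (knee center to ankle center). This is a hedged sketch: the function name and the use of a full 3D angle, rather than an angle projected into the coronal plane, are simplifying assumptions.

```python
import numpy as np

def axis_angle_deg(hip_center, knee_center, ankle_center):
    """Angle in degrees between the femoral mechanical axis (hip -> knee)
    and the tibial mechanical axis (knee -> ankle); 0 means collinear."""
    femoral = np.asarray(knee_center, float) - np.asarray(hip_center, float)
    tibial = np.asarray(ankle_center, float) - np.asarray(knee_center, float)
    cosang = np.dot(femoral, tibial) / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

A clinical implementation would additionally resolve the sign (varus vs. valgus) by projecting into the coronal plane established from the identified landmarks.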
Accordingly, the headset 308 may display the resection plane 414 aligned with the patient in the coordinate system used by the headset 308. The headset 308 may display errors in position and/or rotation of the resection guide 502 from a calculated intended position of the resection plane 414. For example, the resection guide 502 may have fiducials, such as fiducial 504, fiducial 506, fiducial 508, and fiducial 510. The fiducials may be positioned in a predetermined pattern relative to the resection guide 502, and in particular, relative to a resection plane that will be identified by a placement of the resection guide 502. In this manner, the headset 308 can compare a current position of the resection guide 502, and resulting projected resection plane, with the resection plane 414 calculated in the intra-operative plan, and suggest modifications.
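The comparison between the guide's projected resection plane and the planned resection plane can be sketched as an angular and depth error computation. The function name and error definitions below are assumptions for illustration, not the disclosure's implementation.

```python
import numpy as np

def plane_errors(planned_normal, planned_point, guide_normal, guide_point):
    """Return (angular error in degrees, signed depth error) between a planned
    resection plane and the plane projected from the tracked guide's fiducials."""
    n_p = np.array(planned_normal, dtype=float)
    n_p /= np.linalg.norm(n_p)
    n_g = np.array(guide_normal, dtype=float)
    n_g /= np.linalg.norm(n_g)
    # Use abs() so the error is insensitive to the normals' sign convention.
    angle = float(np.degrees(np.arccos(np.clip(abs(np.dot(n_p, n_g)), 0.0, 1.0))))
    offset = np.array(guide_point, dtype=float) - np.array(planned_point, dtype=float)
    depth = float(np.dot(offset, n_p))  # signed offset along the planned normal
    return angle, depth
```

Displaying these two numbers, updated as the guide moves, is one plausible way to "suggest modifications" toward the planned resection plane 414.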
A pre-operative plan may include the position and/or rotation of one or more implants (and their associated resection planes) and a set of landmarks identified on pre-operative imagery (CT, MRI, etc.). These landmarks can then be spatially registered with landmarks identified intra-operatively, thus defining the intended position and rotation of the implants.
As an example, after a resection is made, the surgeon can identify the cut plane, and the executable instructions can update the displayed metrics to reflect the actual cut, rather than the planned one. Based on those metrics, the surgeon may decide to alter the placement of subsequent resections. One example of the dynamic change of the plan is use of the measurements to determine an optimal approach for the procedure (e.g., placement of the implant, proper balancing of the position of the implant not just based on the relationship of the hard tissue that is measured intra-operatively, but also on the impact of the surrounding soft tissues bearing on the location and movement of the hard tissue). This highlights a disadvantage of pre-operative imaging, which tends to support planning based only on tissue of a certain density (e.g., hard bone but not ligaments). Accordingly, a pre-operative plan determined and registered by existing methods does not or cannot take into account the surrounding tissues that impact placement or planning of the targeted hard tissue.
In some examples, a transform 702 may be used to transform landmark coordinates 708 of content to fiducial coordinates 710 of a marker coordinate system. In this manner, the system may identify the appropriate location for the content relative to markers present on the patient. Another transform 704 may be used to transform from fiducial coordinates 710 of the marker coordinate system to camera coordinates 712 of the camera coordinate system. The data including the content in the camera coordinate system may be provided to the headset and/or used by the headset to further transform the data to world coordinates 714 in the environmental coordinate system using another transform 706. The headset may provide information (e.g., using localization, such as simultaneous localization and mapping (SLAM) techniques) about the location of the headset tracked in the environmental coordinate system. This information may be used together with the data of the content in the camera coordinate system to provide the content in the environmental coordinate system. In this manner, as the headset moves, the content may continue to be rendered in an appropriate position relative to the markers.
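The transform chain described above can be sketched as a composition of homogeneous transforms. For brevity the example uses pure translations; the function names (e.g., to_world) are illustrative assumptions corresponding to transforms 702, 704, and 706.

```python
import numpy as np

def translation(x, y, z):
    """Pure-translation 4x4 homogeneous transform (rotation omitted for brevity)."""
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

def to_world(p_landmark, T_marker_landmark, T_camera_marker, T_world_camera):
    """Chain the landmark->marker (702), marker->camera (704), and
    camera->world (706) transforms to express a landmark point in the
    environmental (world) coordinate system."""
    p = np.append(np.asarray(p_landmark, float), 1.0)  # homogeneous point
    return (T_world_camera @ T_camera_marker @ T_marker_landmark @ p)[:3]
```

Because the camera-to-world transform is refreshed by the headset's localization (e.g., SLAM), recomputing this chain each frame keeps the content rendered in the correct position relative to the markers as the headset moves.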
The transforms shown and described with reference to
For example, during landmarking of a femoral canal entry, the system may access the environmental coordinate system and a femoral coordinate system that includes the landmark coordinates 708 of femur. During landmarking of femoral points, the system may access the femoral coordinate system and the marker coordinate system that includes the fiducial coordinates 710. The marker coordinate system is based on a pointer marker. When a medical provider (e.g., a medical provider 116) has placed a tip of the pointer marker (e.g., the tip 124 of the pointer 108) at the landmark, the system may capture the landmark upon detecting that the tip of the pointer marker has been in contact with the landmark for a predetermined period. Once the landmark is captured, the medical provider may provide either an indication of acceptance of the landmark or an instruction to recapture the landmark. When the indication of acceptance has been received by the headset or computing system in communication with the headset, the femoral coordinate system and the marker coordinate system may be associated with one another based on a location of the tip of the pointer marker on a landmark in an image provided by the system. Thus, the location of the landmark may be recorded in the femoral coordinate system.
During landmarking of tibia, the system may access the marker coordinate system based on a pointer marker and a tibial coordinate system that includes the landmark coordinates 708 of tibia. During an assessment procedure (e.g., an assessment procedure with reference to
Accordingly, localization information obtained and/or maintained by headsets described herein may be used to ensure display of content in appropriate locations in the view of a medical provider. Localization information may advantageously be utilized in other ways in examples described herein. For example, localization information may be used when calculating parameters and/or identifying locations of anatomy when only one marker is present. Recall that a medical provider may identify anatomy in examples described herein using a pointer, and location may be determined in a marker coordinate system that may be defined by at least two markers. However, some measurements may be made using only a single marker. The femoral head center determination is an example of such a case. A single marker may be present and a joint may be rotated. To identify a center of rotation, location information provided by the headset may be used. In some examples, localization information may be used to flag erroneous anatomical identifications. For example, the localization information may provide an indication of real-world direction (e.g., up/down, left/right). If a medical provider identifies anatomy at a location that has an incorrect up/down and/or right/left relationship with other known anatomy, the system may display or otherwise record an error. For example, if the medical provider was prompted to identify the medial and lateral condyle surfaces of a right knee, the system may display or otherwise record an error if the received position information indicates that the medial and lateral condyle surfaces reported by the medical provider were reported on the wrong sides for the right knee.
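The left/right plausibility check described above can be sketched as a projection of the identified points onto a world left-right direction supplied by the headset's localization. The function name and the sign convention (the axis pointing toward the patient's left) are assumptions for illustration.

```python
def sides_plausible(medial, lateral, left_axis, knee_side):
    """Flag implausible medial/lateral identifications using a world
    left-right direction from the headset's localization.

    left_axis: unit vector assumed to point toward the patient's left.
    knee_side: "right" or "left".
    """
    m = sum(a * b for a, b in zip(medial, left_axis))
    l = sum(a * b for a, b in zip(lateral, left_axis))
    # On a right knee, the medial condyle lies toward the patient's midline,
    # i.e., further along +left_axis than the lateral condyle (assumed convention).
    return (m > l) if knee_side == "right" else (m < l)
```

A system could run such a check immediately after each prompted identification and display or record an error when it returns False.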
Another example of the use of a headset 102 to identify and measure anatomical landscapes is the ability to utilize the information to inform or adjust pre-operative plans or registrations made for current surgical navigation systems. The headset 102 can be used with a common set of reference fiducials shared by a surgical navigation system. The headset 102 can be used to plan intra-operatively, separately and independent of any pre-operative or additional surgical navigation system. The results of the planning and anatomical location data can be compared to the results of the pretreatment planning process. The intra-operative data can be used to correct or adjust the pretreatment plan during the procedure. This would enable static pretreatment imaging to be used for referencing anatomical locations that cannot be seen or reached by intra-operative anatomical landmark or identification. For surgical navigation systems that guide robotic interventions, the intra-operative planning system of the present disclosure can be used to independently plan and relay the planning information to the robotic system for confirmation or correction of the robotic positioning prior to the surgical intervention. In another example, the intra-operative measurement and planning systems of the present disclosure can be used as a means to track the surgical correction of the deformity post-operatively. The data acquired intra-operatively from the live anatomy and used to plan the placement of the surgical instrument can also be stored and compared to similar live post-operative data collection. For example, ankle center measurements can be repeated post-operatively and used to compare the axial alignment over time. The same type of measurements can be done pre-operatively, so that a time-based tracking of data of certain anatomical aspects can be formed and compared. 
The data can be used to inform progress of healing, feedback for physical therapy, or effectiveness of the procedure both short and long term. The measurements of the live anatomy pre-operatively and post-operatively can help inform and engage the patient in the recovery process by better informing them of the corrections performed intra-operatively comparatively. Current surgical navigation systems typically include a computer workstation, fixed position stereotactic cameras, markers or sensors, and an input 3D image of the patient reconstructed from an imaging source like CT, MR, or X-ray.
The objective of existing navigation systems is generally to create a plan using the 3D reconstructed (e.g., pre-operative) image of the anatomy as a guide for surgery. The image is then registered to the patient's actual anatomy using fixed position stereotactic cameras, and the marker or sensor position in the coordinate system referenced to the fixed position camera. The purpose of the navigation system is to register the image to the actual anatomy so that the coordinate system can be used to guide instrumentation to the coordinates of the registered plan from the reconstructed image.
The systems and methods of the present disclosure are distinct from these systems in a number of ways. The first is that the computer workstation and cameras are integrated into a single headset. The cameras can be monocular, time-of-flight, or stereotactic. In a preferred embodiment the depth camera 306 is a continuous wave time-of-flight camera. The advantage over RGB cameras is higher reliability of marker identification. RGB cameras are typically used for lower power consumption and heat generation. RGB cameras typically measure depth by measuring a size of an image (e.g., number of pixels) of a fiducial (e.g., a planar fiducial) with a known dimension and estimating a distance from the fiducial to the camera. Such systems have the disadvantages of requiring specialized fiducials and tending to lose tracking when the fiducials are viewed at extreme angles off-normal from the planar surface of the fiducial. Another advantage of the systems of the present disclosure over existing systems is that no input 3D (e.g., pre-operative) image of the patient reconstructed from an imaging source like CT, MRI, or X-ray is needed in some examples.
In the preferred embodiment, a monocular time-of-flight camera 204 gives both depth data and infrared images used to identify the position of a particular anatomical area of interest. In a single location, the monocular camera can track poses of the markers. In a single shot, the time-of-flight camera can capture the pose of the bone marker and the pointer marker. The depth information from the time-of-flight camera is used as an initial estimate of the location of the marker. That initial estimate is then used with greyscale image data of the marker to more accurately resolve the pose of the marker (e.g., using PnP methods). The pose of the pointer may be similarly determined. The relative positions of the pointer and marker may be determined with respect to one another as previously discussed. The combination of steps, initial pose estimate and final pose calculation, allows for use of a monocular time-of-flight camera and limited computing power in the headset to accurately resolve the location of anatomical points of interest. The location of the camera in the headset can then be dynamically moved so that a new point of interest can be acquired. This is a benefit over existing systems with fixed or bulky cameras that cannot easily be moved to a new point of view and do not image the surgical field from the point of view of the surgeon. Additional captures or poses add additional reference points in the coordinate system. The pose of a single anatomical location is reconciled to other poses of anatomical interest to create a relative coordinate system, i.e., the known location of each anatomical area of interest relative to the others. The ability to move the cameras while acquiring points of interest allows for expanded access to anatomical locations over fixed mounted camera systems. Furthermore, systems of the present disclosure may have a cost benefit over existing systems.
For example, many existing navigation systems rely on expensive, bulky fixed cameras that may cost several hundred thousand dollars. The systems of the present disclosure may cost significantly less with comparable or better performance than existing systems.
The relative coordinate system developed and utilized in the systems of the present disclosure is differentiated over existing augmented reality headsets by the way it dynamically acquires locations of anatomical interest without the need to register the locations to a prior reconstructed image from an image source like CT, MR, or X-ray. In the preferred embodiment the systems of the present disclosure use a monocular time-of-flight camera to acquire both depth and infrared images of the pose of the markers. The markers used are IR markers for greater assurance that the markers can be seen in ambient light compared to existing QR code markers. Once enough data points are gathered and the user is satisfied, the relative coordinate system can be used to guide instruments to the anatomical point of interest specific to that instrument or procedure. The ability of the user to move positions of the headset for better access or visibility of anatomical areas of interest unlocks the capability of the system as a whole to not rely on prior imaging sources, and instead to gather as many actual anatomical locations as needed to accurately reflect the anatomy of interest.
The menu shown in
The landmarks displayed on the augmented reality display may be navigated through in any of a variety of ways. In some examples, a medical provider, or another person, may state the name of the landmark that is to be identified. After speaking the name, e.g., “femoral head center,” the medical provider or other person may identify the landmark using a trackable pointer, such as the pointer 108 of
During the landmarking process, a medical provider may move a tip of the pointer 906 to touch each named landmark. When the medical provider has positioned the tip of the pointer 906 at the landmark, the medical provider may provide an indication that the tip is at the landmark. The indication may be, for example, keeping the tip stationary for a threshold amount of time (e.g., 1 second, 2 seconds, 3 seconds, 4 seconds). Other indications may be a spoken indication, or other type of user interface indication, such as by pressing a button in communication with the headset and/or performing a gesture. When the indication has been received by the headset or computing system in communication with the headset, an indicator 908 may be displayed in augmented reality overlaid on the anatomy. For example, the indicator 908 may appear at a tip of the pointer 906 when the landmark is indicated by the pointer 906. In some examples, an indicator may change in appearance from when it first appears to when it is established and fixed in the scene. For example, an indicator may appear in one color (e.g., blue) when the pointer tip has been in a position for one threshold amount of time. This may provide an indication that the computing system believes the medical provider to be indicating this location for the landmark. If the pointer tip remains in that position for another threshold amount of time, the indicator may appear in another color (e.g., green) to indicate the association between the landmark and that location has been stored. The indicators may be fixed to their locations on the anatomy in the view of the medical provider, such that as the medical provider changes their field of view, the indicators remain on the landmark locations of the anatomy as viewed by the medical provider through the headset.
As the landmarks are identified, additional calculations or features that utilize the identified landmarks may be made (e.g., by the headset or a computing system in communication with the headset). When sufficient landmarks are identified to position a particular feature of interest, it may be displayed in the headset view. Furthermore, axes and planes for defining a virtual surgical space may be computed based on the identified landmarks. The anatomical landmarks, the axes and/or the planes may be used by the system for surgical planning and balancing. For example, in
Using the landmarking process described referring to
To obtain the femoral landmarks, a pointer 1006, such as the pointer 906, and a femoral marker 1004 on a femur 1002 in a view of a medical provider through a headset, such as the headset 102 of
The medial and lateral epicondyles are used to determine the epicondylar axis, which is used for the femoral rotational alignment. The medial and lateral sizing of the femoral component is suggested based on these digitized points. For example, as shown in
Posterior femoral condyles are used to determine the posterior condylar axis (PCA), which is used for the femoral rotational alignment. The knee should be flexed 90 degrees before acquisition of the points. For example, as shown in
Groove points of anterior and posterior trochlea may be used to determine the Anterior/Posterior (A/P) axis, which is used for the femoral rotational alignment. For example, as shown in
Distal femur lateral and medial are used to compute a level of distal resection. For example, as shown in
For example, as shown in
For example, as shown in
Tibial sulcus medial and lateral may be used as medial and lateral plateau resection references to compute a level of resection. For example, as shown in
To compute the mechanical axis of the tibia 1028, medial and lateral malleoli may be used. For example, as shown in
As described above, landmarks on the femur 1002 and the tibia 1028 may be obtained by using the pointer 1006 to address the landmarks, while the pointer 1006 and either the femoral marker 1004 or the tibial marker 1026 may be in the view of the headset. Once the femoral and tibial landmarks have been obtained, the medical provider may proceed to the assessment mode.
In some examples, a guidance visualization 1204 may be provided to guide a medical provider in adjusting the position of the tibial and/or femoral components. The guidance visualization 1204 shown in
In some embodiments, the insertion plane 1312a, which is perpendicular to the depth of the rigid portion, and a fiducial plane 1310a including the fiducials 1306 may be configured to be parallel.
Similarly, the insertion marker 1302b of
In some examples, as previously described with regard to
A medical provider may plan the location of the resection plane 1316 by adjusting each of a pair of angles and a depth of the resection plane 1316 through an image of a view from an augmented reality headset. For example, during planning of distal resection, the system may access a marker coordinate system that includes fiducial coordinates of the resection marker 1302a and a femoral coordinate system that includes landmark coordinates (e.g., the landmark coordinates 708) of a femur; thus the system recognizes the location of the resection plane 1316 relative to the femoral marker. During planning of resections (e.g., 4-in-1) using any one of or combination of axes including posterior condylar axis (PCA), transepicondylar axis (TEA), and/or A/P axis (e.g., Whiteside's Line), the system may access a marker coordinate system that includes fiducial coordinates of the resection marker 1302a and a femoral coordinate system that includes landmark coordinates (e.g., the landmark coordinates 708) of a femur; thus the system recognizes the location of the resection plane 1316 relative to the femoral marker. During planning of proximal resection, the system may access a marker coordinate system that includes fiducial coordinates of the resection marker 1302a and a tibial coordinate system that includes landmark coordinates (e.g., the landmark coordinates 708) of tibia; thus the system recognizes the location of the resection plane 1316 relative to the tibial marker.
Once the resection planning is complete, the medical provider may be navigated to position the resection guide (e.g., the resection guide 1308a, 1308b, or 1308c) to match the planned resection plane 1316. After the resection guide is placed and the resection marker 1302a is removed, the resection may be performed along the slot 1314.
Accordingly, localization information obtained and/or maintained by headsets described herein, based on the resection guide 1308a attached to a body part, the resection marker 1302a attached to the resection guide 1308a by insertion of the insertion member 1304a into the slot 1314, and the landmarks of the body part (e.g., landmark coordinates of a femur or tibia), may be used to display the resection plane 1316 in appropriate locations relative to the body part in the view of a medical provider, without the fiducials 1306 obstructing the view once the resection marker 1302a has been removed.
In order to identify an arrangement (e.g., a plane) including fiducials of a marker, locations of fiducials of the marker may be detected.
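Identifying an arrangement such as a plane from detected fiducial locations can be illustrated as a least-squares plane fit. This sketch assumes the fiducial locations are available as 3-D points in a common coordinate system; the function name and the SVD-based approach are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def fit_fiducial_plane(points):
    """Least-squares plane through a set of 3-D fiducial locations.

    points: array-like of shape (N, 3), N >= 3 and not collinear.
    Returns (centroid, unit_normal) defining the best-fit plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector associated with the smallest singular
    # value is the direction of least variance, i.e., the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal
```

With four or more fiducials, the residual along the normal also gives a simple check on how nearly coplanar the detected fiducials are.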
For example,
A resection guide may be provided with a marker, such as a resection marker 1302a having fiducials 1306. For example, the insertion member 1304a of the resection marker 1302a of
In another example,
A resection guide may be provided with a marker, such as a resection marker 1302a having fiducials 1306. For example, the insertion member 1304a of the resection marker 1302a of
After the distal femoral or tibial resection, a chamfer cut may be made around the resection surface of the femur or tibia to smooth edges. To perform the chamfer cut, a 4-in-1 resection may be performed. For example,
In some embodiments, a resection guide may be specific to a knee implant system to be used. The resection guide may be provided with a marker, such as a resection marker 1302a having fiducials 1306. For example, the insertion member 1304a of the resection marker 1302a of
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made while remaining within the scope of the claimed technology.
Examples described herein may refer to various components as "coupled" or signals as being "provided to" or "received from" certain components. It is to be understood that in some examples the components are directly coupled one to another, while in other examples the components are coupled with intervening components disposed between them. Similarly, signals may be provided directly to and/or received directly from the recited components without intervening components, but may also be provided to and/or received from those components through intervening components.
This application claims the benefit under 35 USC 119(e) of the earlier filing dates of U.S. Provisional Application 63/303,370, filed Jan. 26, 2022, U.S. Provisional Application 63/323,444, filed Mar. 24, 2022, and U.S. Provisional Application 63/476,854, filed Dec. 22, 2022. The aforementioned applications are all incorporated herein by reference in their entirety for any purpose.
Number | Date | Country
---|---|---
63476854 | Dec 2022 | US
63323444 | Mar 2022 | US
63303370 | Jan 2022 | US