One exemplary aspect of the present disclosure relates to a guidance system, a method and a device for dynamically guiding a surgical needle catheter or other medical device or instrument onto an organ or other body part of a patient for a surgical operation. In particular, one aspect of the disclosure relates to a guidance system, method and device to aid percutaneous kidney puncture.
Document US2014/0051985 A1 discloses a target-finding system that identifies a surgical target such as a kidney stone by disposing an emitter such as a magnetic source behind or adjacent to the surgical target and employing a circuit to identify an axis to the emitter, thus defining an axis or path to the surgical target. Document WO 03/103492 A1 relates to a device for localizing an instrument or device, comprising at least one rotatable magnet producing a magnetic moment perpendicular to the axis of the device, independently from said instrument or device.
Document WO 2017/120434 A1 relates to devices for guiding an instrument into a body of a patient at a targeted point of entry and along an insertion path at a targeted insertion angle, such as a guide for an access needle in a PCNL procedure for accessing the kidney to remove kidney stones, the devices comprising a base component, a guide assembly, and optionally an insertion mechanism. Document EP 2967411 A1 relates to a surgical locator circuit that identifies a surgical target such as a kidney stone by disposing an emitter such as a magnetic source behind or adjacent to the surgical target and employing the circuit to identify an axis to the emitter, thus defining an axis or path to the surgical target.
These facts are disclosed in order to illustrate the technical problem addressed by the present disclosure. All references are herewith incorporated by reference herein in their entirety.
One aspect of the present disclosure relates to a guidance system for dynamically guiding a medical device or instrument, such as a surgical needle catheter, onto an organ of a patient that is to be surgically operated on, comprising:
One aspect of the present disclosure also relates to a guidance system, method and device to aid percutaneous kidney puncture.
In an embodiment, the guidance system may further comprise an inner ring corresponding to a predetermined surgically acceptable region for the catheter.
In an embodiment, the guidance system may further comprise an intermediate ring corresponding to a predetermined surgically unacceptable region for the catheter.
Another aspect of the present disclosure relates to a method for implementing the guidance system of the present application.
Another aspect of the present disclosure relates to a medical device comprising the guidance system of the present application.
Another aspect of the present disclosure relates to a method and device to aid percutaneous kidney puncture.
One aspect of the present disclosure also relates to a guidance system for dynamically guiding a surgical needle catheter or other medical device or instrument onto an organ of a patient that is to be surgically operated on, comprising:
In an embodiment, said concentric rings comprise an inner ring corresponding to a predetermined surgically acceptable region for the surgical needle catheter.
In an embodiment, said inner ring is centered around the tip of the EM catheter and has a radius of less than or equal to 1 mm, of less than or equal to 2 mm, of less than or equal to 3 mm, of less than or equal to 5 mm, of less than or equal to 8 mm, or of less than or equal to 10 mm.
In an embodiment, said concentric rings comprise an intermediate ring corresponding to a predetermined surgically unacceptable region for the surgical needle catheter.
In an embodiment, said intermediate ring is centered around the tip of the EM catheter and has a radius of less than or equal to 25 mm, of less than or equal to 30 mm, of less than or equal to 35 mm, of less than or equal to 40 mm, of less than or equal to 45 mm, or of less than or equal to 50 mm.
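As a non-limiting illustration, the region classification implied by the inner and intermediate rings can be sketched as follows; the specific radii (3 mm and 30 mm) are hypothetical choices within the ranges given above, and the function name is illustrative:

```python
import math

# Hypothetical radii chosen from the ranges disclosed above (millimetres).
INNER_RADIUS_MM = 3.0          # surgically acceptable region
INTERMEDIATE_RADIUS_MM = 30.0  # outer limit of the surgically unacceptable region

def classify_tip(needle_tip, em_catheter_tip):
    """Classify the needle tip position relative to the EM catheter tip.

    Both arguments are (x, y, z) positions in millimetres.
    Returns 'acceptable', 'unacceptable', or 'outside'.
    """
    distance = math.dist(needle_tip, em_catheter_tip)
    if distance <= INNER_RADIUS_MM:
        return "acceptable"
    if distance <= INTERMEDIATE_RADIUS_MM:
        return "unacceptable"
    return "outside"
```

For example, a tip 2 mm from the EM catheter tip would be classified as lying inside the inner (acceptable) ring.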
In an embodiment, said concentric rings comprise a further ring having a variable diameter depending on the spatial difference between the surgical needle catheter tip and the target.
In an embodiment, said further ring has a visual characteristic which is changed when the surgical needle catheter tip deviates from the target, in particular the visual characteristic being changed when the surgical needle catheter tip goes beyond the target.
In an embodiment, said visual characteristic is a colour of the further ring, a thickness of the further ring, a fill-in of the further ring, or combinations thereof.
An embodiment comprises a 3D sound interface arranged for providing spatialized sounds according to the spatial difference between the surgical needle catheter tip and the target.
An embodiment comprises a vibration interface arranged for providing spatialized vibration feedback using vibration motors, according to the spatial difference between the surgical needle catheter tip and the target.
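One way to spatialize such vibration feedback is to select, from a ring of motors worn around the head, the motor closest to the direction of the needle-to-target offset. The motor count, the motor-zero orientation and the function name below are hypothetical choices for the sketch:

```python
import math

def select_motor(needle_tip, target, n_motors=8):
    """Pick the headband vibration motor pointing towards the target.

    Positions are (x, y) coordinates in the horizontal plane around the
    wearer's head. Motor 0 is assumed to face the +x direction, with the
    motors assumed evenly spaced counter-clockwise around the headband.
    """
    dx = target[0] - needle_tip[0]
    dy = target[1] - needle_tip[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_motors
    # Snap the offset direction to the nearest motor sector.
    return int((angle + sector / 2) // sector) % n_motors
```

Driving only the selected motor (or the selected motor plus its neighbours, at lower intensity) gives the surgeon a directional cue without requiring sight of the display.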
In an embodiment, the plurality of concentric rings are displayed around the target on the plane intersecting a tip of the EM catheter and being perpendicular to the orientation of the surgical needle catheter.
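A minimal sketch of how the ring display plane can be derived: the rings lie on the plane through the EM catheter tip whose normal is the needle orientation, so the displayed deviation is the offset vector projected onto that plane. The function name is illustrative; this is one possible realization, not the only one:

```python
import numpy as np

def in_plane_offset(needle_tip, needle_dir, em_tip):
    """Project the needle-tip-to-catheter-tip vector onto the display plane.

    The plane passes through the EM catheter tip and is perpendicular to
    the needle orientation `needle_dir` (any non-zero 3-vector). The
    returned 3-vector is the component of the offset lying in that plane.
    """
    d = np.asarray(needle_dir, dtype=float)
    d = d / np.linalg.norm(d)
    offset = np.asarray(em_tip, dtype=float) - np.asarray(needle_tip, dtype=float)
    # Remove the component along the needle axis; what remains is in-plane.
    return offset - np.dot(offset, d) * d
```

The norm of the returned vector can then drive the ring classification, while its direction positions the target marker among the concentric rings.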
An embodiment comprises the surgical needle catheter arranged for surgical operation of the organ.
In an embodiment, said organ is the kidney and the dynamically guiding of the surgical needle catheter is for percutaneous renal access.
Also described is a medical device comprising the guidance system of any of the disclosed embodiments.
Also disclosed is a method for implementing a guidance system for dynamically guiding a surgical needle catheter onto an organ of a patient that is to be surgically operated on,
Also disclosed are non-transitory storage media including program instructions for implementing a guidance system for dynamically guiding a surgical needle catheter onto an organ of a patient that is to be surgically operated on, the program instructions including instructions executable by a data processor to carry out any of the disclosed methods.
One of the main functions of the surgical needle catheter is to be inserted into the body, in particular a body cavity, for surgical purposes. Additionally, a surgical needle can be used to create a percutaneous path towards the target anatomical structure to be manipulated during surgery. Alternatively, another device having a tracker EM sensor and having the function of being inserted into the body for surgical purposes can also be used.
The guidance system, method and device of the present disclosure are able to easily and safely guide the percutaneous renal access (PRA), guaranteeing that an anatomical structure, in particular an organ, is not accidentally perforated, and responding to current surgeons' demands. The system, method and device also improve and/or optimize the puncture planning, increasing the certainty of reaching a specific target inside the kidney.
One of the exemplary aspects of the present disclosure is to provide a new system and method to aid percutaneous kidney puncture. The guidance system of one aspect of the present disclosure is able to easily and safely guide PRA (percutaneous renal access), guaranteeing that no organ is accidentally perforated and responding to current surgeons' demands. The disclosure is also able to optimize the puncture planning, increasing the certainty of reaching a specific target inside the kidney.
Easily and safely guiding percutaneous renal access can be further specified as a set of specific phases:
Since the need for PRA has increased in recent years, the improvement in patient care and simplification of this surgical step, through the proposed phases, may lead to several exemplary non-limiting advantages:
An overview of the particular ways in which the state of the art regarding PRA is improved by the present disclosure can be made according to the following points:
The following figures provide exemplary embodiments for illustrating the disclosure and should not be seen as limiting the scope of the invention.
One exemplary aspect of the present disclosure relates to a guidance system, a method and a device for dynamically guiding a surgical needle catheter or other medical instrument onto an organ of a patient. In particular, the present disclosure relates to a guidance system, method and device to aid in the percutaneous kidney puncture.
The following pertains to motion tracking for surgical navigation, in particular electromagnetic tracking for puncture guidance, further in particular the Aurora system (NDI, Waterloo, Canada). EMT can potentially be used to guide PRA interventional procedures, since it can provide accurate tracking without line-of-sight requirements. In order to track medical devices such as needles and catheters, this system comprises the following hardware devices: a field generator, sensor interface units (SIU), a system control unit (SCU) and electromagnetic sensors. The Aurora™ system is but one example of an EMT system used according to the present disclosure; it is to be appreciated that similar systems will work equally well with the disclosed technology.
In accordance with one exemplary embodiment, the SCU includes one or more processors, a graphics processing unit (GPU), memory, an audio interface for the headphones discussed hereinafter, an audio card or equivalent audio processing system and a controller for actuating any tactile feedback.
The SCU can interface with the SIUs using any known communication protocol and/or interface(s), including wired or wireless interfaces. The SCU and KidneyNav can also be connected to one or more display units and configured to display the visual guidance as discussed herein. Communication between the SCU, KidneyNav and the surgeon's feedback mechanisms can be wired or wireless, such as using Bluetooth®, to send signals to the headphones and/or tactile feedback device(s) such as the vibrotactile headband. It is further to be appreciated that the SCU and KidneyNav could be combined with virtual reality or augmented reality goggles such that the information displayed on the display could be presented in these goggles in 2D or 3D.
The planar field generator emits a low-intensity, varying electromagnetic field that establishes a working volume. When the electromagnetic sensors are placed inside this working volume, small voltages are induced in the sensors. These induced voltages are measured by the SCU, which calculates each sensor's position and orientation. The SCU also transmits the positional data to a host computer, using a serial port connector or other communication port/protocol, for subsequent processing and navigation.
The electromagnetic sensors are connected via an SIU to the SCU. The SIU works as an analog-to-digital converter and amplifier of the electrical signals from the sensors to the SCU, decreasing the possibility of electromagnetic interference in the operating room.
The low electromagnetic field strength can safely pass through human tissue, making it an ideal system to track surgical instruments inserted inside the human body through natural orifices or small incisions.
Finally, the electromagnetic sensors can be embedded into the working surgical tools or devices. For this embodiment, one may preferably acquire two modified surgical instruments: one 18 G/180 mm Chiba needle and one ureteral catheter with a 1.1 mm diameter and 2 m length. Both incorporate an electromagnetic sensor with 5 DOF at their tips, and are therefore not able to report the orientation about their long axis (roll axis).
The following pertains to methods. In particular, the following pertains to EMT navigation. The introduction of EMT navigation, for example the Aurora system, into the PRA workflow will modify the first two PCNL surgical stages: (a) the trans-urethral catheter placement and (b) the percutaneous puncture.
The following pertains to trans-urethral catheter placement. In the first surgical step, an Uretero-Reno-Fiberscope Flex-X™ from Karl Storz, or a comparable device, is placed trans-urethrally towards the desired renal calyx.
In contrast to the currently used technology, the catheter is guided towards the anatomic target using the Flex-X™ camera, without requiring other medical imaging modalities. Furthermore, since the Flex-X™ has a working channel of 1.2 mm, it allows the integration at its tip of a position and orientation electromagnetic sensor with six DOF (Degrees of Freedom) from the Aurora motion tracking system (NDI, Waterloo, Canada). Here the electromagnetic sensor, located at the Flex-X™ tip, acts as an anatomic target locator, operating like a GPS (global positioning system) for the puncture site.
According to the Flex-X™ tip orientation, it is possible to place the sensor in the desired calyx, where the calculi target is located, allowing the surgeon to choose the best virtual trajectory for the percutaneous puncture.
The following pertains to the puncture stage. In the second surgical step, a virtual trajectory is determined from the relative orientation and position differences retrieved in real time by both the needle and catheter EMT sensors.
This virtual trajectory display is used to confirm that the catheter and needle are aligned in parallel. If necessary, the surgeon can redefine the catheter orientation, and the virtual trajectory is updated in real time. This procedure provides constant real-time positioning feedback (beep sound and/or 3D representation) to the surgeon, allowing the surgeon to maintain a correct orientation of the needle at all times, even in the presence of anatomical changes such as tract dilatation, respiratory movements and needle deflections, among others.
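The parallel-alignment confirmation can be sketched as an angle check between the two orientation vectors reported by the EMT sensors. The tolerance value and function name below are hypothetical; sign is ignored because the 5-DOF sensors report the long axis without roll:

```python
import math

def is_aligned(needle_dir, catheter_dir, tolerance_deg=5.0):
    """Return True when the needle and catheter orientations are parallel
    within `tolerance_deg` degrees.

    Both directions are non-zero 3-vectors. The absolute value of the dot
    product is used so that antiparallel axes also count as aligned.
    """
    dot = sum(a * b for a, b in zip(needle_dir, catheter_dir))
    na = math.sqrt(sum(a * a for a in needle_dir))
    nb = math.sqrt(sum(b * b for b in catheter_dir))
    cos_angle = min(1.0, abs(dot) / (na * nb))  # clamp rounding noise
    return math.degrees(math.acos(cos_angle)) <= tolerance_deg
```

Re-evaluating this check on every sensor update gives the continuous confirmation described above.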
The beep sound was generated by asynchronously and repetitively playing an MP3 or comparable sound file with a duration of 0.15 seconds. This sound was played with a frequency calculated with the common linear equation y=mx+b. The slope m of this equation was given by the distance between the catheter and needle EMT sensors. In this embodiment, the frequency should increase as the needle tip approaches the catheter tip. The values of x and b were experimentally determined (x=17 and b=60) by moving one sensor towards and away from the other and qualitatively evaluating the sound feedback, although other values are possible.
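A literal sketch of the equation above, with the experimentally found constants, follows. Note that the text does not fully specify whether y drives the beep rate or the inter-beep interval; reading y as the interval (larger when the tips are farther apart) would match the stated behaviour that the feedback quickens as the needle tip approaches the catheter tip. The function name is illustrative:

```python
def beep_value(distance_mm, x=17.0, b=60.0):
    """Evaluate y = m*x + b, where the slope m is the distance (mm)
    between the catheter and needle EMT sensors and x, b are the
    experimentally determined constants from the text."""
    return distance_mm * x + b
```

For example, at zero separation the equation yields the baseline value b, and the value grows linearly as the sensors move apart.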
Concerning this new tracking disclosure,
The following pertains to Animal Preparation. This EMT approach was tested using several female pigs (Sus scrofa domesticus) of various weights (25-35 kg). Before surgery, the animals were fed liquids for 3 days and then deprived of food (24 hours) and water (6 hours) before the surgical tests.
All procedures were carried out with the pigs under general anesthesia, with 5.0 mm endotracheal intubation and mechanical ventilation. Pre-anesthesia medication consisted of an intramuscular injection of 32 mg/mL azaperone, reconstituted with 1 mg/mL midazolam with a dose range of 0.15-0.20 mL/kg.
Venous access was obtained through an intravenous line placed at the marginal ear vein. Anesthesia was induced with 3 μg/kg fentanyl, 10 mg/kg thiopental sodium, and 1 mg/kg vecuronium. It was maintained with 1.5% to 2.0% sevoflurane and a perfusion of 1 mg/kg per hour of vecuronium. All pigs received an intramuscular injection of 1 g ceftriaxone before the beginning of the tests.
The following pertains to Experiments. In vitro and in vivo experiments were performed to evaluate the accuracy and performance of the EMT framework for PRA. The laboratory tests aimed to understand and quantify the Aurora system's technical characteristics, such as precision, accuracy and critical system problems, e.g. electromagnetic interference. On the other hand, the in vivo animal trials, with more dynamic characteristics, aimed to define and evaluate the surgical setup, the planning and puncture times, and the system's reliability and efficiency for PRA.
The following pertains to laboratory trials, in particular electromagnetic interferences. The purpose of this test was to investigate error sources that might compromise the EMT accuracy during surgical navigation.
This test starts by putting the field generator on a positioning arm, which offers flexible setup options around an object of interest, e.g. an abdominal phantom placed in an electromagnetically free environment. Then, the needle and catheter sensors, fixed adjacent to each other, were moved randomly throughout the working volume. The positional difference between the two sensors was transmitted to and stored on a host computer. In the absence of electromagnetic interference, the difference between the two sensors should be constant, with negligible variance.
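The constancy check described above can be sketched as follows: with the two sensors fixed together, the inter-sensor distance should be constant, so a spread above a small threshold flags possible field distortion. The threshold and function name are hypothetical choices for illustration:

```python
import math

def interference_suspected(samples_a, samples_b, max_spread_mm=0.2):
    """Flag possible electromagnetic interference.

    `samples_a` and `samples_b` are paired lists of (x, y, z) positions
    (in mm) from the two adjacently fixed sensors. In an interference-free
    working volume the distance between them is constant; a spread larger
    than `max_spread_mm` suggests field distortion.
    """
    distances = [math.dist(a, b) for a, b in zip(samples_a, samples_b)]
    return (max(distances) - min(distances)) > max_spread_mm
```

Logging the spread per material, as in the tests below, allows a direct comparison against the interference-free baseline.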
Because one aims to quantify the mean accuracy within this navigation volume, different surgical tools made of different materials were placed inside the navigation volume and in the vicinity of the electromagnetic sensors. For each material, one compared the sensors' positional difference with the values acquired in the absence of any electromagnetic interference. The following materials were evaluated due to their usage during PCNL:
The following pertains to Animal Trials. The animal experimental studies were approved by the ethical review boards of Minho University, Braga (Portugal). The animals were monitored by a veterinary anesthesiologist throughout the study.
This experiment starts by placing the pig in the supine position. This first stage is used to identify the ureteral orifices of both kidneys using a rigid cystoscope. Then, ureterorenoscopies were performed bilaterally. A ureterorenoscope with a 1.2 mm working channel allowed the ureteral catheter to be placed at the desired puncture site.
After positioning the ureterorenoscope at the puncture site using a direct video view, the surgeon inserted the needle into the calyceal fornix by the following actions:
The percutaneous punctures were performed, for each pig, at the ureter half-way between the kidney and the urinary bladder and in renal calyces, in order to evaluate the influence of puncture location.
The following pertains to Outcome Measurements. The following surgical parameters were evaluated in order to ascertain if the proposed tracking solution confers any advantage to the surgeon performing PCNL:
The experiments were performed by an expert surgeon and a resident in order to avoid possible bias related to surgeon ability. Furthermore, the puncture location was also analyzed as a variable influencing the above outcomes.
The following pertains to Results, in particular of Laboratory Trials, in particular of Electromagnetic Interferences. The needle and catheter were placed adjacently, with a 10 millimeter distance between them. No relevant interferences were found when using stainless steel, titanium or tungsten carbide. In these cases, the maximum error was not significant (<0.2 mm). The catheter can be placed inside the ureterorenoscope or cystoscope working channel without losing tracking accuracy.
When using mild steel or aluminum instruments, one found that the error varies with the distance between the sensors and the ferromagnetic material. Maximum errors of 8 and 15 mm were found when the EMT sensors were maneuvered at the periphery of the working volume and a mild steel or aluminum object, respectively, was placed in the middle of the working volume. When aluminum tools, e.g. forceps or scissors, were manipulated ˜7 cm away from the electromagnetic sensors, the maximum error was less than 1 mm.
The following pertains to Animal Trials. Overall, 24 punctures were successfully performed without any complications: 12 in the middle ureter and 12 in a kidney calyx (lower, middle or upper).
Table 1 summarizes the measured outcomes for the whole procedures. Planning time was longer for the ureter case than for the kidney (median 15 versus 13 seconds, range 14-18 versus 11-16; p=0.1).
Likewise, the time to achieve ureteral puncture was significantly longer than for kidney puncture, requiring 51 (range 45-67) and 19 (range 14-45) seconds (p<0.01), respectively. Two attempts were needed to carry out the ureteral puncture, contrasting with a single attempt for the kidney (p<0.05). When comparing the puncture time, planning time, number of attempts and final distance (Table 2) for percutaneous renal access to the upper, middle and lower calyx, no significant differences were found (p>0.05).
When results from experts and residents are analyzed independently (Table 3), one verifies that, despite non-significant statistical differences (p>0.05), there was a slight tendency towards higher puncture and planning times, as well as a greater number of attempts, for residents.
Computer navigation systems based on EMT technologies are an attractive research area and have been suggested for different surgical procedures [9]. From the PRA point of view, several in vitro, ex vivo and in vivo experiments were performed to evaluate the efficiency of the KidneyNav framework working together with the EMT Aurora system.
The following pertains to the Aurora System. The great advantage of Aurora over optical systems, such as Polaris, is the ability to track small EMT sensors inside the human body without any line-of-sight requirements [1].
The disclosure preferably requires endoscopic imaging for real-time monitoring of the puncture target and two EMT sensors. The ureteral catheter and needle, each integrating an Aurora EMT sensor at its tip, are able to report their position and orientation in real time. The catheter remained associated with the puncture target (working as a 3D real-time locator) and was permanently monitored by the EMT sensor and the ureterorenoscope video camera. Therefore, it followed in real time all anatomic tissue deformations and movements, whether originating from the respiratory cycle or induced in the patient. The surgeon inserted the needle guided by the virtual puncture path displayed in the KidneyNav interface.
An important proof-of-concept step was also achieved by succeeding in performing a direct ureteral puncture, even though the procedure took significantly more time due to the ureteral movements and the ureter's small diameter and soft consistency, which made the needle glide on its surface. Even though these preliminary results suggest prospective paths for other applications (e.g. percutaneous ureteral lithotripsy), the main objective was to further corroborate the efficiency of the proposed puncture method in a small target cavity.
Interestingly, no difference related to operator skill was found in performing the puncture. It is therefore reasonable to speculate that this tracking solution may reduce the number of cases needed to perform an appropriate collecting system access, and make the access easier. The specific literature reports that completion of the learning curve for PCNL surgical competence occurs at around 60 cases. Considering kidney access one of the most challenging phases, in this study a resident achieved the same skill level as an expert surgeon with only twelve cases.
The safety and efficacy of different surgical positions for accessing the collecting system have been a controversial issue, with currently no established best-practice consensus. The use of the real-time 3D trajectory proposed in this work may broaden the use of the supine position for the whole PCNL procedure. In this case, the surgeon does not need to reposition the patient (decreasing surgery time by about 30-40 minutes), which may improve levels of comfort for both patient and surgeon, as described in the literature. On the other hand, even when the patient is repositioned, there is a reduced risk of access dislodging, since the catheter remains permanently monitored by the EMT sensor and the ureterorenoscope camera.
Medical imaging assistance for puncture commonly requires approximately 10 minutes, often guided by X-ray based imaging and under in vitro conditions. Comparing the related results, puncture time improvements of between 75 and 85% were achieved without any need for X-rays.
Due to its advantages, the Aurora system has been extensively tested in recent years in different clinical and nonclinical environments [1]. Different works have reported errors of 0.71±0.43 mm, with a maximum 3D root mean square positional accuracy of 2.96 mm. Although these values are higher than the Polaris accuracy, the system remains highly suitable for PRA purposes [11].
Although Yaniv et al. [1] reported that electromagnetic systems may be susceptible to environmental interference in the operating room, the techniques disclosed herein did not experience any kind of interference that could tamper with the tracking information. By evaluating the impact of surgical instruments composed of different metals (aluminum, stainless steel, titanium, tungsten carbide and mild steel), one creates a more comprehensive list of requirements for using an EMT system in the operating room. Results show that only mild steel or aluminum can influence the error-proneness of EMT sensors. However, this may not represent a problem, since mild steel or aluminum instruments have been replaced by stainless steel ones [1]. Nevertheless, in order to guarantee that the system accuracy is not degraded, aluminum or mild steel should preferably not be used during the surgical procedure; at the least, they should not be placed inside the working volume while the puncture is being performed.
Another important evaluation criterion was the intrusiveness of such a modality. In contrast to other navigation frameworks, it will not increase the procedural time, because the proposed system does not have a large setup and does not require additional steps such as immobilization of the patient, preparation of the hardware, registration setup, or initialization of navigation components [1].
The motion tracking field generator should be placed on the surgical stretcher as near as possible to the kidney abdominal area (or other area being operated upon), and with an appropriate orientation, to minimize the probability of interference distortions. All other possible sources of electromagnetic disturbance, such as cellphones and usual operating room equipment, should be kept at least 1.5 meters away from the working volume. The KidneyNav interface will advise when a possible interference exists or when the instruments are being maneuvered at the limits of the working volume.
When compared to the Polaris, the Aurora system uses wires to transmit the information between the EMT sensors and the control unit, as well as a field generator positioned close to the interventional area. However, these did not restrict access to the abdominal area and were not a limiting factor in any of the experiments. Since the sensors are placed inside the human body throughout the operation, there was no need for a registration and calibration procedure.
Hereupon, the proposed solution may be a simple and easy way to select and follow the correct puncture path, as well as to acquire the skill required to perform PCNL regardless of calculus site, large or multiple renal calculi, or an ectopic or malformed kidney.
The following pertains to the multi-sensorial guidance interface. Visual interfaces have been proposed over the years for biomedical applications covering the diagnosis, planning and guidance of several surgical procedures. Currently available visual interfaces aid surgeons throughout the entire surgical procedure, reducing the risks and possible unknowns [12].
Although new algorithms and registration techniques have been explored to link the virtual and real worlds, the interpretation of 2D images or 3D reconstructions is still a challenging task. For complex intraoperative procedures, visual interfaces may only provide good guidance capabilities for specific points of view. Often, the surgeon's skills and expertise strongly affect the surgical outcome.
In addition to the information received through sight, audio and tactile information are nowadays becoming commonplace ways of transmitting information. Hearing and touch are the second and third major human senses, respectively, and are two promising and unique alternative or complementary modalities to visual systems. Consequently, they allow the development of innovative hand- and eyes-free interfaces.
The aspect of using new forms of feedback for puncture guidance during PRA concerns the ability to achieve accurate and precise localization of the needle tip with respect to the anatomical target.
The following pertains to audio feedback systems. Audio feedback methodologies were first explored due to their ability to create, process and localize sounds from complex data in a 3D space. Since the computational requirements to generate audio are much smaller than for 3D graphics, these auditory interfaces were created in order to overcome technological limitations such as limited real-time refresh rates, poor image resolution and limited rendering capabilities.
Nowadays, audio feedback is an attractive area of exploration for a wide range of medical and nonmedical applications [13]. Due to the ability to surround the listener with sounds at specific locations, sound applications have emerged to create immersive environments for computer games [13], warning systems for civil aircraft [13], flight and military simulations, guidance interfaces for blind people, night vision systems, airplane cockpits, guidance for athletes, augmented reality systems, perceptual representation of biomedical data and heart rate monitors [12].
To the best of our knowledge, audio feedback for computer-aided surgery has only been explored by four groups [12]. Preliminary work was reported by Weber et al. [12], describing an audio system to guide a biopsy needle when perforating a gelatin phantom. Although they describe an application with great potential, no quantitative or qualitative results are reported.
Another audio feedback system, presented by Cho et al., guides the surgeon to a cochleostomy location. The authors generate warning tones when an optically tracked drill is close to the target (300 Hz tones) or reaching the target (900 Hz tones).
From the analyzed literature, the advantages of audio feedback include faster data processing, high temporal resolution, parallel data streams and an improved degree of focus on the task at hand. Moreover, it creates an effective way to overcome the visual overload from complex data and, due to its omnidirectionality, allows perception from any point in space without occlusions.
Low spatial resolution and perception, sound interferences, and user dependence are the main shortcomings [12] of the above techniques.
The following pertains to vibration feedback systems. Similar to audio feedback systems, the usage of vibrotactile feedback has also been described in the literature. For instance, vibrotactile feedback has been reported as useful in improving awareness of critical events such as driver responses, spatial guidance in pedestrian navigation, alert systems for blind or visually impaired persons, gesture guidance, human-computer interaction improving realism with tactile display interfaces, immersive sensations in computer games, body posture improvement (Janssen et al., 2010), and assistance in rehabilitation.
Hence, various innovative and disruptive devices have been proposed to be attached to different parts of the human body, e.g., the head [14], fingers, forearm, hand, upper body, tongue and foot.
The head has been the most preferred site for vibration wearables (e.g., headbands [14], headphones and glasses). They have been studied in various environments, because they do not restrict the user's maneuverability (e.g., devices used on the fingers, forearm and hand), verbal communication (e.g., devices used on the tongue) or touch sensation (e.g., gloves).
Even though it is a preferred place for receiving feedback, some authors have warned that head sensitivity is not the same throughout the whole area. Myles et al. and Weber et al. studied and evaluated the sensitivity of the head surface. Both studies stated that vibration sensitivity differs across head locations, with the crown of the scalp reported as the least sensitive to vibration stimuli relative to areas close to the temples, forehead, and back of the head (the most sensitive area).
Even ergonomics, positional efficiency and accuracy of a vibrotactile headband has already been studied [14], pattern codification for guidance, the preferred number of actuators and testing in real situations are still needed.
The following pertains to audio and vibration feedback for PRA. From the reported applications, it is clearly seen that audio or vibrotactile feedback can considerably highlight 3D orientation and guidance, decreasing the dependence on visual faculties and improving insight into virtual data. They often improve perception capabilities, which may be degraded by prolonged procedures, fatigue, inaccurate insight and decreased attention.
When compared to other medical technologies routinely used for guidance (e.g. motion tracking systems, robotic devices, improved surgical tools, imaging systems), audio or vibrotactile feedback is relatively unexplored. The wider acceptance of such modalities will depend mostly on the quality and quantity of transmitted information, but also on whether the user can effectively learn how to use it.
In addition to the 3D feedback provided by the KidneyNav interface, this work explores the potential of new guidance interfaces, including an improved 2D visual interface (hereinafter referred to as NeedleView), a vibrotactile headband and 7.1 headphones capable of generating 3D directional sounds.
The following pertains to methods, in particular to an overview. In this section, we propose the application of a multisensorial feedback platform combined with the previously described 3D view of the KidneyNav to develop a more intuitive guidance system. Whenever the needle tip deviates from the path to the target, a set of audio and/or vibration signals is generated and transmitted to the surgeon.
Such information will be perceived through a vibrotactile headband, 7.1 headphones and the NeedleView, which will alert and guide the surgeon towards a preferred path (FIG. 3).
The following pertains to the NeedleView. Sight devices used to support instrument alignment (e.g. on weapons, airplanes, telescopes, etc.) were prior art to the disclosure of the NeedleView.
Once the needle is aligned in the right position, a green label showing the distance to the target is displayed to the user. If any deviation occurs, a red label is shown. As with any sight, the needle tip will accurately reach the anatomical target if the needle is inserted along the path without any deviation. Different color disks (white, green, yellow and red in
The blue sphere is drawn according to a method that projects a ray in a 3D plane (
The NeedleView graphically represents the intersection of the ray {right arrow over (RN)} with P at a projection point p′N(x, y, z). When pT(x, y, z)=p′N(x, y, z), the user is following the correct trajectory.
Any point pRay(x, y, z) along {right arrow over (RN)} can be calculated according to (Equation 1):
pRay(x,y,z)=pN(x,y,z)+t×{right arrow over (v)}N(x,y,z) Equation 1
where t is a free parameter that gives different points away from pN(x, y, z). p′N(x, y, z) is calculated by solving for this parameter t (Equation 2).
{right arrow over (p)}N(x, y, z) is a vector defined between pN(x, y, z) and pT(x, y, z); θ is the angle between {right arrow over (p)}N(x, y, z) and {right arrow over (v)}T(x, y, z); and α is the angle between {right arrow over (v)}N(x, y, z) and {right arrow over (v)}T(x, y, z). Finally, by substituting Equation 2 into Equation 1, p′N(x, y, z) is given by Equation 3:
Note that if dot({right arrow over (v)}N(x, y, z), {right arrow over (v)}T(x, y, z))=0, the needle orientation and target plane are perpendicular. To avoid such cases, at the beginning of the puncture and after performing an initial orientation, {right arrow over (v)}T(x, y, z) is automatically set as {right arrow over (v)}N(x, y, z).
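The ray-plane projection of Equations 1-3 can be sketched in a few lines. The following function is illustrative only (the name and the tuple-based representation are assumptions, not part of the disclosure); it intersects the needle ray with the target plane and flags the degenerate perpendicular case described above.

```python
def project_needle_ray(p_n, v_n, p_t, v_t, eps=1e-9):
    """Intersect the needle ray p_N + t * v_N (Equation 1) with the
    plane P through the target point p_T with normal v_T.  Returns the
    projection point p'_N (Equation 3), or None in the degenerate case
    dot(v_N, v_T) = 0 (needle parallel to the plane), which the system
    avoids by resetting v_T to v_N."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(v_n, v_t)
    if abs(denom) < eps:
        return None  # needle orientation perpendicular to plane normal
    diff = tuple(a - b for a, b in zip(p_t, p_n))
    t = dot(diff, v_t) / denom                          # solving for t (Equation 2)
    return tuple(p + t * v for p, v in zip(p_n, v_n))   # Equation 3
```

When the returned point coincides with pT(x, y, z), the needle is on the correct trajectory.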
The following pertains to audio feedback. 3D audio feedback was obtained by creating positional sounds varying one or more of the following characteristics: pitch (sound frequency), loudness (sound intensity) and playing location (sound source with a particular 3D location).
The following pertains to audio channels. According to the number of sound sources, sound systems can be classified as mono (1 discrete audio channel), stereo (2 discrete audio channels) or surround (N audio channels).
A mono system produces sound from only a single source and is therefore unable to transmit surround information. Stereo systems can reproduce sound from two independent sound sources, placed to the left and right of the listener. By changing the gain of each channel, it is possible to perceive sound along the line between the left and right channels. Common methods to produce 3D sound on stereo systems are based on modifying the audio amplitude or on delaying the arrival of the sound at the listener. Lastly, true surround sound is created by placing sound sources anywhere in 3D space. The spatial sound resolution depends on the number of sound sources surrounding the listener.
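As a minimal sketch of the amplitude-modification approach mentioned above, a stereo pan position can be turned into left/right channel gains. The constant-power law and the function name below are illustrative choices, not taken from the disclosure.

```python
import math

def constant_power_pan(pan):
    """Left/right channel gains for a pan position in [-1, 1]
    (-1 = hard left, +1 = hard right), using the constant-power law
    so that the perceived overall loudness stays roughly uniform
    as the sound moves along the left-right line."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

A pan of 0 yields equal gains on both channels, so the sound appears centered between the two sources.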
This work makes use of 7.1 headphones for producing surround audio: 7 directional channels (left, center, right, left surround, right surround, left rear surround and right rear surround) and 1 subwoofer that enhances low frequencies. These headphones are able to produce spatialized sound in the listener's horizontal plane; the reproduction of elevation sounds, however, remains limited.
The following pertains to audio APIs. A 3D audio space can be created using an audio API. The best known are OpenAL and EAX (Environmental Audio Extensions) from Creative Technology, Ltd. and DirectSound3D produced by Microsoft. EAX works as an audio extension for OpenAL and DirectSound3D and implements different audio effects (e.g. echo, reverberation, distortion, occlusions, exclusions, obstructions). Therefore, it cannot be used, by itself, to create a 3D sound world. On the other hand, DirectSound3D and OpenAL include common functionalities [13]:
The following pertains to 3D audio world. The audio feedback preferably follows the same strategy as the NeedleView. The aspect of using audio to correct needle orientation is based on creating and positioning different audio sources in a 3D space around a centered listener (
Sound from different positions reaches the listener from different directions. By internally analyzing this direction, the listener is able to ascertain the corrections to be made. The errors in the horizontal plane P of the NeedleView are used to activate or deactivate the playing sources.
Since the reliability of audio spatialization is dependent on the number of sources, three different configurations were tested (
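One way such activation could be computed is sketched below, under the assumption that the N sources of a configuration are evenly spaced on a circle around the listener in the plane P; the function name and the 0° reference for index 0 are illustrative.

```python
import math

def active_source(err_x, err_y, n_sources):
    """Index (0..n_sources-1) of the sound source to activate, given
    the needle error in the horizontal plane P.  Sources are assumed
    evenly spaced on a circle around the centered listener, with
    source 0 at 0 degrees and indices increasing counterclockwise."""
    angle = math.degrees(math.atan2(err_y, err_x)) % 360.0
    step = 360.0 / n_sources            # angular spacing between sources
    return int(round(angle / step)) % n_sources
```

The same routine works for any of the tested configurations simply by changing `n_sources` (e.g. 8, 12 or 16).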
The following pertains to positional feedback. Errors from the preferred trajectory are used to set different audio buffers and to activate/deactivate the audio sources from which the sound is emitted.
Since changes in needle orientation are correlated with changes in the perceived sound, four different strategies were implemented to accurately alert and guide the listener during needle insertion. All sound strategies are based on varying the sound pitch, loudness and source location.
The following pertains to a first sound strategy, SS1. The audio loudness is calculated according to an error function with respect to the preferred trajectory. When the needle is following the correct path, i.e., the target can be reached with an accuracy better than Terr mm, no sound is produced (loudness is 0). As shown in
Derr is calculated as the Euclidean distance from the vertical and horizontal errors and is used to control the source loudness gain from 0 (no sound is heard) to 1 (maximum sound output). The loudness was controlled using a step function (
Each sound source played a sinusoidal signal with frequency Sf, phase Sp, duration Sd and a sleep interval Ssleep between tones.
Ssleep was proportional to the distance to the target. With maxSleep and minSleep denoting the maximum and minimum allowable sleep intervals, respectively, and maxdist and mindist the maximum and minimum affecting distances, respectively, Ssleep was calculated according to Equation 4.
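Since Equation 4 itself is not reproduced in this excerpt, the sketch below assumes a simple linear interpolation between the stated limits, which is consistent with the described proportionality to the target distance; the function and parameter names are illustrative.

```python
def sleep_interval(dist, min_sleep, max_sleep, min_dist, max_dist):
    """Inter-tone sleep interval S_sleep as a linear reading of
    Equation 4: the interval shrinks (tones repeat faster) as the
    needle tip approaches the target.  Distances outside the
    affecting range [min_dist, max_dist] are clamped."""
    d = min(max(dist, min_dist), max_dist)        # clamp distance
    frac = (d - min_dist) / (max_dist - min_dist) # 0 at closest, 1 at farthest
    return min_sleep + frac * (max_sleep - min_sleep)
```

With this form, the tone repetition rate alone already conveys how far the needle tip is from the target.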
The following pertains to a second sound strategy, SS2. This is the same strategy as SS1, but instead of playing no sound when the needle is following the correct path (with an accuracy better than Terr mm), a sound with a distinct frequency S2f is played by all sound sources.
The following pertains to a third sound strategy, SS3. From the literature, it is known that horizontal and vertical sounds are well discriminated, but that it is more difficult to differentiate between front and back sounds. Efficiencies of about 50% are commonly reported [13].
Due to these shortcomings, this third strategy tries to improve front-back resolution by assigning distinct frequency tones according to whether front or back sources are playing. Two frequencies were used: Sffront for sources placed from 0° to 180°; and Sfback for sources placed from 181° to 359°. The frequency gap is used to rapidly ascertain whether the needle tip should be moved to the front or to the back for orientation correction. With such differences, one expects to increase the localization performance relative to the first strategies (SS1 and SS2).
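The front/back frequency assignment of SS3 can be sketched as follows; the function name is illustrative, and the 690/490 Hz defaults anticipate the Sffront and Sfback values reported in the experiments below.

```python
def ss3_frequency(source_angle_deg, sf_front=690.0, sf_back=490.0):
    """SS3: pick the tone frequency for a sound source according to
    whether it sits in the front half (0..180 degrees) or the back
    half (181..359 degrees) of the listener plane."""
    angle = source_angle_deg % 360.0
    return sf_front if angle <= 180.0 else sf_back
```

Hearing a tone at sf_back immediately tells the user the deviation lies behind the listener plane, without having to resolve the source direction precisely.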
The following pertains to a fourth sound strategy, SS4. In order to further enhance the spatial sound resolution, SS3 was modified by introducing an intermittent sound at the four cardinal sources. These sound sources alternately play two sinusoidal waves: one with the parameters discussed for SS1 (Sf, Sp, Ssleep and Sd) and another with a distinct frequency S4f and duration S4d.
For all strategies, when the needle tip reaches the target, a message sound is generated (e.g. “Target Achieved”). At that point, the surgeon must analyze the ureterenoscope video to verify that the needle is near the target.
The following pertains to 3D vibrotactile feedback. In addition to the NeedleView and 3D sound, this work also explored vibrotactile sensation for needle guidance. Because the head is a desirable site for providing feedback [14], one can choose to manufacture a headband with multiple actuators that vibrate according to the spatial errors of the needle with respect to the punctured target.
Different actuators are commercially available, with particular technical specifications mainly in terms of vibration intensity and size. Miniature loudspeakers, electromagnetic alarm buzzers and coin motors are routinely used.
Compared to other solutions, coin motors offer low cost, small size, low voltage and reduced noise, and are found in everyday devices such as mobile phones. Therefore, 308-100 Pico Vibe coin vibration actuators were used to deliver this kind of feedback. Table 4 shows the relevant actuator manufacturer specifications.
Based on the literature [14], 8 motors were chosen and placed at the 8 cardinal points (equally spaced around the head).
Four control strategies were implemented and tested. Each strategy is similar to those already described for the 3D sound interface (SS1, SS2, SS3 and SS4), but with a coin motor vibrating instead of a sound source playing.
The sound loudness, Ssleep and the distance to the target are transmitted from the KidneyNav to the Arduino Uno platform wirelessly via a Bluetooth® connection. The Arduino, based on the Atmega328P microcontroller, is responsible for interpreting and processing the received information and for activating/deactivating the respective actuators.
The sound loudness (values from 0 to 1) is used to control the vibration intensity Vi, as a percentage of the maximum possible vibration. The vibration sensation is achieved by individually controlling the supply voltage using a square wave signal generated by pulse width modulation (PWM).
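A host-side sketch of this loudness-to-PWM mapping is shown below; the function name is illustrative, and on the Arduino itself the resulting 8-bit duty value would be written to the motor pin with analogWrite.

```python
def vibration_pwm(loudness):
    """Map the loudness gain (0..1) received from KidneyNav to an
    8-bit PWM duty value (0..255) controlling the coin motor supply
    voltage, and hence the vibration intensity Vi.  Out-of-range
    inputs are clamped to the valid gain range."""
    loudness = min(max(loudness, 0.0), 1.0)
    return int(round(loudness * 255))
```

A loudness of 1 therefore drives the motor at its maximum rated vibration, while 0 switches it off entirely.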
Each motor was connected to a 3.3 V pin of the Arduino. A diode was connected in reverse across each motor to protect the microcontroller against voltage spikes. Eight 2N2222 transistors were used to provide the high/low current outputs that activate/deactivate each motor.
The following pertains to experiments. Different experiments were performed to evaluate the accuracy and acuity of each interface, individually and in combination.
The following pertains to sound parameter settings. The sound parameters, such as frequency and duration, were found empirically by individually playing and manually adjusting the parameters of each source. To this end, 16 participants listened to sounds created from sinusoids whose frequency varied from 300 Hz to 1200 Hz. At the end, they indicated the frequency with which they were most comfortable. Based on these results, the preferred values for the first and second sound strategies (SS1 and SS2) were found to be:
When evaluating the SS3 strategy, it was found that front and back sounds can be well discriminated when Sffront=690 Hz and Sfback=490 Hz.
Finally, a sinusoidal sound with S4f=1200 Hz and S4d=25 ms was introduced when testing SS4. Moreover, the frequencies of the sources positioned at the far left, right, up and down were changed to 650 Hz, 650 Hz, 730 Hz and 460 Hz, respectively. The other sources kept the same Sffront and Sfback frequencies.
It was found that the target can be reached accurately when Terr=2 mm. SS2 was discarded from testing because all users preferred the SS1 approach.
It should be highlighted that the user can manually control the headphones volume.
The following pertains to the localization accuracy test. A set of localization experiments was first performed to determine whether a person is able to perceive the direction of random sound sources or vibration stimuli.
Audio stimuli were presented to the listener through Razer Tiamat 7.1 headphones. These headphones have 10 discrete drivers (5 per ear) with 30 mm diameter neodymium magnets and a frequency response between 20 Hz and 20,000 Hz. They were connected to a PC via a 7.1 surround-sound-enabled sound card.
The developed headband, in turn, was used to create the vibrotactile feedback.
Before any experiment, a brief demonstration was given to each participant, and they were allowed to test each sensorial feedback for up to 2 minutes so that they became comfortable and familiar with the system.
During both the audio and vibration experiments, each time the user was ready, a sound or a vibration actuator was activated randomly from a set of possible locations at the same distance from the listener.
The chosen strategy for starting the experiment was also determined randomly.
Each sound strategy (SS1, SS3 and SS4) was tested for all three spatial configurations. In contrast, the vibrotactile feedback was tested in a single distribution. Each exercise was repeated 16 times.
The users had up to 2 seconds to report the location of the perceived sound or vibration by marking the perceived location on a printed sheet.
In total, 31 volunteers participated: 26 medical students and 5 surgeons. During the experiments, they received no feedback about the correctness of their answers.
The aim of this test was to select the best audio spatial configuration and sound strategy, and finally to compare audio against vibration as possible feedback systems.
The following pertains to the phantom test. Beyond the accuracy of detecting single vibration or audio sources, it is important to test the efficiency of all the interfaces for needle guidance. To this end, a phantom study was performed to test and evaluate the most valuable feedback interface. The following configurations were tested in a phantom box:
The phantom box was made of wood, without any ferromagnetic material, to avoid interferences that might reduce the precision of the tracking device. A sponge material 8 cm thick was used to simulate the skin, offering some resistance, low deformability and an opaque texture when inserting the needle.
A small plastic arm was attached to one side of the phantom, 10-15 cm from the superficial sponge material. It was used to hold the catheter sensor at different locations inside the phantom box.
When using only vibration or audio, the participants were blindfolded throughout the whole planning and puncture procedure. An audio message (“target achieved”) and a vibration pattern (all motors vibrating at the same time) were used to alert the participants when they reached the target (puncture success).
Puncture success was achieved when the root mean square distance was less than 3 mm. The planning and puncture times were recorded. The needle position was then verified using a webcam showing the phantom interior. The degree of needle tip divergence was calculated during the whole insertion procedure by storing the root mean square distance between the real and the virtual path.
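The root mean square criterion can be sketched as follows; the function name is illustrative, and the real and planned paths are assumed to be equally sampled sequences of 3D points.

```python
import math

def rms_deviation(real_path, planned_path):
    """Root mean square distance between sampled needle positions and
    the corresponding points on the planned (virtual) path.  Used both
    as the puncture success criterion (< 3 mm) and as the needle tip
    divergence measure over the whole insertion."""
    assert len(real_path) == len(planned_path) and real_path
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(real_path, planned_path)]
    return math.sqrt(sum(sq) / len(sq))
```

A perfectly followed trajectory yields 0 mm, and any lateral drift raises the value proportionally.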
The main objective was to assess whether the guidance approach increases procedural efficiency in terms of duration, mean velocity, and mean and maximum error.
The testing order was randomized to reduce target position-related familiarization and learning effects. Otherwise, the last guidance approach would have had an advantage over the first technique due to the experience gained performing the puncture.
Medical professionals with varying degrees of expertise performed each strategy three times. These tests were performed by 2 different groups: naive (n=56, with no or less than 2 years of medical experience) and expert (n=15, with more than 2 years performing minimally invasive surgeries). At the beginning of each experiment the participant was allowed to practice by reorienting the needle in the air and by performing up to 2 puncture attempts. These data were not included in the statistical analysis.
At the end of each test, the participant filled in a questionnaire scoring each approach according to the following questions:
The following pertains to animal trials. The animal trials were performed as already described. In particular, the following pertains to results, first of the localization accuracy test. The total duration of each experiment was about 10 minutes.
When comparing the different sound strategies and audio spatial worlds, the best results were achieved when using 12 audio sources together with the SS4 strategy (SS4-12SC,
SS4 was the best sound strategy, with statistically significant differences (two-way ANOVA—see attachment 1 for more detail) when compared to SS1 (p<0.0001) and SS3 (p<0.05). SS4 was followed by SS3, which also showed statistically significant differences with SS1 (p<0.01). The worst results were obtained with SS1. Regarding the audio world configuration, 12SC was the best, with statistically significant differences when compared to 8SC (p<0.0001) and 16SC (p<0.0001). It was followed by 8SC, also with statistically significant differences with 16SC (p<0.0001). The worst results were obtained with 16SC.
When listening to the various sources configured with SS1, users could easily separate left from right. However, only 62% of the time were they able to identify whether the sound was created at a front or a back source, creating high standard deviations between users (
From the boxplot analysis (
By introducing Sffront and Sfback in the SS3 and SS4 strategies, users improved their ability to detect front or back sounds to 87% of the time.
By adding S4f and S4d in SS4, users were able to accurately and promptly distinguish the sources placed at the four cardinal points (0°, 90°, 180° and 270°) when compared to SS1 or SS3 (71% vs. 90.5%).
Finally, the vibration results were the best. The participants could easily and accurately identify vibration sources 91.1±3.6% of the time, with an average angulation error of 8.0°. Statistically significant differences (p<0.0001) were found when compared to the best audio configuration (SS4-12SC).
The following pertains to the phantom test. In the phantom test, every subject reached the correct target on their first attempt.
On the other hand, audio or vibration alone took the longest times.
Significant planning differences between the naive and expert groups were found only when using audio feedback (p<0.01). Significant puncture differences between the groups were found when inserting the needle under the 3D view (p<0.05). 3D, audio and vibration were the methodologies where participants showed the widest range of times, especially for planning. In the naive group, planning time was lowest when using the NeedleView alone (average of 4.5±1.5 s). The longest times occurred with the audio feedback (21.3±15.1 s). For the puncture step, the best results were achieved using “3D view+NeedleView+Vibration”, with an average of 14.7±8.5 s. Surprisingly, the longest puncture times occurred with the 3D view, with an average of 34.8±21.0 s.
In the expert group, planning time was lowest when using “3D view+NeedleView+Audio”, with an average of 2.7±0.6 s.
The longest times occurred with the vibration feedback (16.8±8.1 s). For the puncture step, the best results were achieved using “3D view+NeedleView+Audio”, with an average of 15.2±7.7 s. The longest times were needed with the audio feedback, with an average of 29.1±8.0 s.
The time needed for planning and puncturing is very promising, with no statistical differences between experts and naive participants for most of the guidance strategies.
Audio and vibration feedback seemed to inform the user precisely enough to keep the needle correctly aligned with the target.
All participants considered that audio or vibration feedback gives less confidence than the NeedleView, but suggested that it can improve the PRA procedure. Although impractical in a real situation, the results reveal that the target can be reached without spending any time looking at the screen.
The step function used to control the audio loudness was easy for all users to understand. Although an exponential function (Equation 5) was also tested, it was never preferred.
loudness=0.0326·e^(0.0799·err) Equation 5
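For reference, Equation 5 can be implemented directly; the clipping of the gain to [0, 1] is an assumption consistent with the loudness range used throughout, and the function name is illustrative.

```python
import math

def loudness_exponential(err):
    """Exponential loudness control of Equation 5, with the gain
    clipped to the valid range [0, 1] used by the audio sources."""
    return min(0.0326 * math.exp(0.0799 * err), 1.0)
```

Near the target the exponential form produces very quiet output, which may explain why users preferred the step function.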
The audio feedback combined with the NeedleView provided good directional feedback during all the procedures. This was the preferred and simplest interface, needing no training before usage. In the presence of the NeedleView, the 3D view was frequently ignored, and the audio or vibrotactile feedback was used more as a warning system than as a guiding one.
Table 5 shows the questionnaire for questions Q0, Q1, Q2 and Q3 (see experiments above).
The NeedleView, followed by “3D+NeedleView” and “3D+NeedleView+Audio”, were the three favorite methods according to Q0.
All users scored audio and vibration in Q1 with 10, because they were blindfolded when performing the procedure. In contrast, the visual modalities were scored with 1. Hybrid modalities that combine visual and audio/vibration feedback were scored with similar values.
Regarding question Q2, audio feedback, followed by the 3D screen, were the most difficult techniques. The easiest were the NeedleView alone or combined with any other feedback.
Finally, audio feedback was the most difficult technique, with the steepest learning curve. The NeedleView was the easiest and most straightforward, with no need for training.
The following pertains to animal trials. Eight surgeons participated in this experiment. In order not to sacrifice many animals, the surgeons started the experiment by choosing three guidance approaches: “NeedleView+3D view”, “Audio alone” and “NeedleView+3D view+Audio”. A surgical setup is represented in
Three pigs were used for this experiment. Six successful tracts to the middle of the ureter were accomplished. Each ureter was punctured up to 4 times in different regions.
Due to the animals' small anatomy, it was only possible to place the ureterenoscope into a kidney calyx in two pigs. Therefore, statistical analysis was only performed for the ureter experiments.
The results are in accordance with those already presented. 100% success rates were achieved, with only one attempt needed in almost all cases.
The time needed to see the needle tip in the ureter skin (
The results are in accordance with those described for the phantom test. Longer times were obtained when performing the experiment with audio alone. Statistically significant differences were found when comparing audio with either of the other two interfaces for surgical planning (p<0.0001) and puncturing (p<0.001).
No significant differences were found when comparing “NeedleView+3D view” and “NeedleView+3D view+Audio”. However, as already observed in the phantom test, the audio feedback helped the surgeon keep the correct path with minor deviations (
No major problems were found during the experiments. Only one attempt was needed by all surgeons.
The following pertains to further discussion. Although the KidneyNav framework already allowed surgeons to comprehend the volumetric data of the collecting system and to follow a 3D path, this work explores the possibility of adding further feedback by resorting to the hearing and touch senses.
Multiple interfaces were tested in which audio and vibrotactile senses were combined with visual information to provide and improve insight into complex PRA trajectories. These new forms of feedback are intended not to replace visual guidance, but to complement the surgeon's interaction with the needle, enabling anticipation of any movement even without looking at a monitor.
With this multi-sensorial interface, the surgeon must place the needle at the skin surface, align it with respect to the target, and finally puncture the kidney along the approved angle. When the needle is inserted at an incorrect angle, it will miss the anatomic target by an error dependent on the angulation error. This error (originating from possible needle deflection, soft tissue displacement and human tremor) can be tracked in real time using the 3D EMT sensors at the needle and catheter tips and used to generate a set of visual, audio or vibration signals.
The previous framework, based on a 3D view, requires some training. Results show that, using this view, it was not possible to precisely and easily follow a pre-computed trajectory with minimal deviations from the correct path. Regardless of user experience, the 3D information can easily be misunderstood, increasing the probability of large errors.
By using the NeedleView, the user was able to automatically and intuitively classify the error as safe or dangerous. No training was required for this interface, making it a ready-to-use approach. Experimental results show that the NeedleView worked reasonably well for all puncture orientations, being the fastest and the preferred guidance approach.
The headphones produce different spatialized sound signals relating the position of the needle to the target. Different sound frequencies were tested to create tones acceptable to all users. If the needle is correctly aligned, no sound is generated, avoiding possible annoyance and distraction.
Other works have reported different angulation errors around the head using headphones: 22.3°, 26°, 34.2° [13] and 22.2°. Compared to these works, the results show reduced angulation errors when using the SS4 sound strategy with 12 sound sources (75% of participants showed errors below 20°). The poorest scenario was SS1-16SC, due to the inability to clearly distinguish front from back sounds, as well as sources separated by small angles.
Although it provided limited spatial information, 3D audio feedback was enough to accurately guide the surgeons without ever looking at the screen (in the phantom test and animal trial). By using different audio intensities and pitches, it was possible to improve the feedback about the amount and direction of the actual deviation from the preferred trajectory.
As already reported, error angles are higher for headphones than for loudspeakers. Although surrounding loudspeakers would have better sound spatial accuracy, they are impractical to implement inside an operating room because of all the medical armamentarium. Moreover, since the listener's head must be centered in the sound system, 7.1 headphones presented a preferred solution to these problems.
When studying the effect of vibration for guidance, the results show that the source motors can be correctly identified more than 90% of the time with an 8-site configuration. Our results are in accordance with a recently described work [14] that tested the efficacy of providing vibration feedback using a headband holding 12 coin-type motors.
Although vibration was easier to understand and learn than audio, users often preferred the audio feedback combined with the NeedleView (Table 5).
When using audio or vibration feedback alone, the average time needed for planning or puncturing was significantly higher than with the NeedleView. But when combined, this feedback helped to follow the pre-planned trajectory with minor deviations (
The term “comprising” whenever used in this document is intended to indicate the presence of stated features, integers, steps, components, but not to preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Flow diagrams of particular embodiments of the presently disclosed methods are depicted in the figures. The flow diagrams illustrate the functional information one of ordinary skill in the art requires to perform said methods in accordance with the present disclosure.
It will be appreciated by those of ordinary skill in the art that, unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the disclosure. Thus, unless otherwise stated, the steps described are unordered, meaning that, when possible, the steps can be performed in any convenient or desirable order.
It is to be appreciated that certain embodiments of the disclosure as described herein may be incorporated as code (e.g., a software algorithm or program) residing in firmware and/or on computer useable medium having control logic for enabling execution on a computer system having a computer processor, such as any of the servers described herein. Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution. The code can be arranged as firmware or software, and can be organized as a set of modules, including the various modules and algorithms described herein, such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another to configure the machine in which it is executed to perform the associated functions, as described herein.
The disclosure should not be seen in any way restricted to the embodiments described and a person with ordinary skill in the art will foresee many possibilities to modifications thereof. The above described embodiments are combinable. The following claims further set out particular embodiments of the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
110686 | Apr 2018 | PT | national |
This application is a Continuation of International Patent Application No. PCT/IB2019/053083, filed 15 Apr. 2019, which claims priority to Portuguese Patent Application No. 110686, filed 13 Apr. 2018, each of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6171303 | Ben-Haim et al. | Jan 2001 | B1 |
6395016 | Oron et al. | May 2002 | B1 |
6436095 | Ben-Haim et al. | Aug 2002 | B1 |
6447504 | Ben-Haim et al. | Sep 2002 | B1 |
6542766 | Hall et al. | Apr 2003 | B2 |
6574492 | Ben-Haim et al. | Jun 2003 | B1 |
6600948 | Ben-Haim et al. | Jun 2003 | B2 |
6843793 | Brock et al. | Jan 2005 | B2 |
6860878 | Brock | Mar 2005 | B2 |
6915149 | Ben-Haim et al. | Jun 2005 | B2 |
6949106 | Brock et al. | Sep 2005 | B2 |
7051738 | Oron et al. | May 2006 | B2 |
7090683 | Brock et al. | Aug 2006 | B2 |
7114500 | Bonutti | Oct 2006 | B2 |
7169141 | Brock et al. | Jan 2007 | B2 |
7214230 | Brock et al. | May 2007 | B2 |
7297142 | Brock | Nov 2007 | B2 |
7371210 | Brock et al. | May 2008 | B2 |
7604642 | Brock | Oct 2009 | B2 |
7653427 | Essner et al. | Jan 2010 | B2 |
7708741 | Bonutti | May 2010 | B1 |
7713190 | Brock et al. | May 2010 | B2 |
7744622 | Brock et al. | Jun 2010 | B2 |
7750311 | Daghighian et al. | Jul 2010 | B2 |
7758569 | Brock | Jul 2010 | B2 |
7775972 | Brock et al. | Aug 2010 | B2 |
7789875 | Brock et al. | Sep 2010 | B2 |
7835784 | Mire et al. | Nov 2010 | B2 |
7867241 | Brock et al. | Jan 2011 | B2 |
7905828 | Brock et al. | Mar 2011 | B2 |
7918861 | Brock et al. | Apr 2011 | B2 |
7931586 | Brock et al. | Apr 2011 | B2 |
8050743 | Daghighian et al. | Nov 2011 | B2 |
8068896 | Daghighian et al. | Nov 2011 | B2 |
8086298 | Whitmore, III et al. | Dec 2011 | B2 |
8114097 | Brock et al. | Feb 2012 | B2 |
8190238 | Moll et al. | May 2012 | B2 |
8204575 | Stetz et al. | Jun 2012 | B2 |
8257303 | Moll et al. | Sep 2012 | B2 |
8303576 | Brock | Nov 2012 | B2 |
8311611 | Csavoy et al. | Nov 2012 | B2 |
8401617 | Whitmore, III et al. | Mar 2013 | B2 |
8414472 | Hagelauer | Apr 2013 | B2 |
8414598 | Brock et al. | Apr 2013 | B2 |
8467851 | Mire et al. | Jun 2013 | B2 |
8498691 | Moll et al. | Jul 2013 | B2 |
8504139 | Jacobsen et al. | Aug 2013 | B2 |
8617102 | Moll et al. | Dec 2013 | B2 |
8623030 | Bonutti | Jan 2014 | B2 |
8641726 | Bonutti | Feb 2014 | B2 |
8688196 | Whitmore, III et al. | Apr 2014 | B2 |
8709034 | Keast et al. | Apr 2014 | B2 |
8801661 | Moll et al. | Aug 2014 | B2 |
8814874 | Hunter et al. | Aug 2014 | B2 |
8834490 | Bonutti | Sep 2014 | B2 |
8840629 | Bonutti | Sep 2014 | B2 |
8858557 | Bonutti | Oct 2014 | B2 |
8880158 | Spector | Nov 2014 | B2 |
8932316 | Keast et al. | Jan 2015 | B2 |
9033893 | Spector | Mar 2015 | B2 |
9060797 | Bonutti | Jun 2015 | B2 |
9078685 | Smith et al. | Jul 2015 | B2 |
9226688 | Jacobsen et al. | Jan 2016 | B2 |
9226800 | Burg et al. | Jan 2016 | B2 |
9232985 | Jacobsen et al. | Jan 2016 | B2 |
9254093 | Spector | Feb 2016 | B2 |
9259290 | Jenkins et al. | Feb 2016 | B2 |
9282908 | Spector | Mar 2016 | B2 |
9345532 | Laufer | Mar 2016 | B2 |
9345875 | Appenrodt et al. | May 2016 | B2 |
9421070 | Keast et al. | Aug 2016 | B2 |
9439581 | Dinsmoor et al. | Sep 2016 | B2 |
9439653 | Avneri et al. | Sep 2016 | B2 |
9439735 | Guttman et al. | Sep 2016 | B2 |
9456766 | Cox et al. | Oct 2016 | B2 |
9457168 | Moll et al. | Oct 2016 | B2 |
9480415 | Wald et al. | Nov 2016 | B2 |
9486229 | Laufer | Nov 2016 | B2 |
9492623 | Kapadia et al. | Nov 2016 | B2 |
9498584 | Kapadia et al. | Nov 2016 | B2 |
9498585 | Kapadia et al. | Nov 2016 | B2 |
9521961 | Silverstein et al. | Dec 2016 | B2 |
9554716 | Burnside et al. | Jul 2017 | B2 |
9681919 | Glossop | Jun 2017 | B2 |
9693699 | Spector et al. | Jul 2017 | B2 |
9706935 | Spector | Jul 2017 | B2 |
9737232 | Fan | Aug 2017 | B2 |
9782229 | Crawford et al. | Oct 2017 | B2 |
9861836 | Schwartz | Jan 2018 | B2 |
9993306 | Keast et al. | Jun 2018 | B2 |
20020087048 | Brock et al. | Jul 2002 | A1 |
20020087148 | Brock et al. | Jul 2002 | A1 |
20020095175 | Brock et al. | Jul 2002 | A1 |
20020120252 | Brock et al. | Aug 2002 | A1 |
20020128661 | Brock et al. | Sep 2002 | A1 |
20020128662 | Brock et al. | Sep 2002 | A1 |
20020138082 | Brock et al. | Sep 2002 | A1 |
20050288576 | Fegert et al. | Dec 2005 | A1 |
20060176242 | Jaramaz | Aug 2006 | A1 |
20070232896 | Gilboa et al. | Oct 2007 | A1 |
20080119872 | Brock et al. | May 2008 | A1 |
20080125793 | Brock et al. | May 2008 | A1 |
20080132913 | Brock et al. | Jun 2008 | A1 |
20080177285 | Brock et al. | Jul 2008 | A1 |
20090114039 | Schultze et al. | May 2009 | A1 |
20100010343 | Daghighian et al. | Jan 2010 | A1 |
20100094116 | Silverstein | Apr 2010 | A1 |
20100312094 | Guttman et al. | Dec 2010 | A1 |
20110237935 | Kalpin | Sep 2011 | A1 |
20110238083 | Moll et al. | Sep 2011 | A1 |
20120253200 | Stolka et al. | Oct 2012 | A1 |
20120289815 | Keast et al. | Nov 2012 | A1 |
20130016185 | Stolka et al. | Jan 2013 | A1 |
20130253599 | Gorek et al. | Sep 2013 | A1 |
20130281787 | Avneri et al. | Oct 2013 | A1 |
20140031674 | Newman et al. | Jan 2014 | A1 |
20140046261 | Newman et al. | Feb 2014 | A1 |
20140051985 | Fan et al. | Feb 2014 | A1 |
20140107475 | Cox et al. | Apr 2014 | A1 |
20140200639 | De La Rama | Jul 2014 | A1 |
20140309560 | Bonutti | Oct 2014 | A1 |
20140378999 | Crawford et al. | Dec 2014 | A1 |
20150032164 | Crawford et al. | Jan 2015 | A1 |
20150209119 | Theodore et al. | Jul 2015 | A1 |
20150297114 | Cox et al. | Oct 2015 | A1 |
20150305650 | Hunter et al. | Oct 2015 | A1 |
20150335386 | Smith et al. | Nov 2015 | A1 |
20160120609 | Jacobsen et al. | May 2016 | A1 |
20160213916 | De La Rama | Jul 2016 | A1 |
20160220320 | Crawford et al. | Aug 2016 | A1 |
20160235493 | LeBoeuf, II et al. | Aug 2016 | A1 |
20160242849 | Crawford et al. | Aug 2016 | A9 |
20160242855 | Fichtinger et al. | Aug 2016 | A1 |
20160256225 | Crawford et al. | Sep 2016 | A1 |
20160256230 | Kowshik et al. | Sep 2016 | A1 |
20160331479 | Crawford et al. | Nov 2016 | A1 |
20170007334 | Crawford et al. | Jan 2017 | A1 |
20170020561 | Cox et al. | Jan 2017 | A1 |
20170020630 | Johnson et al. | Jan 2017 | A1 |
20170042621 | Wald et al. | Feb 2017 | A1 |
20170049991 | Avneri et al. | Feb 2017 | A1 |
20170071691 | Crawford et al. | Mar 2017 | A1 |
20170079681 | Burnside et al. | Mar 2017 | A1 |
20170079727 | Crawford et al. | Mar 2017 | A1 |
20170119339 | Johnson et al. | May 2017 | A1 |
20170172669 | Berkowitz et al. | Jun 2017 | A1 |
20170196590 | Sperry et al. | Jul 2017 | A1 |
20170231702 | Crawford et al. | Aug 2017 | A1 |
20170239002 | Crawford et al. | Aug 2017 | A1 |
20170239003 | Crawford et al. | Aug 2017 | A1 |
20170239006 | Crawford et al. | Aug 2017 | A1 |
20170239007 | Crawford et al. | Aug 2017 | A1 |
20170245944 | Crawford et al. | Aug 2017 | A1 |
20170245951 | Crawford et al. | Aug 2017 | A1 |
20170252112 | Crawford et al. | Sep 2017 | A1 |
20170258533 | Crawford et al. | Sep 2017 | A1 |
20170265949 | Crawford et al. | Sep 2017 | A1 |
20170281145 | Crawford et al. | Oct 2017 | A1 |
20170304013 | Crawford et al. | Oct 2017 | A1 |
20170311838 | Fan | Nov 2017 | A1 |
20170325896 | Donhowe et al. | Nov 2017 | A1 |
20170348061 | Joshi et al. | Dec 2017 | A1 |
20170360517 | Crawford et al. | Dec 2017 | A1 |
20170367603 | Spector | Dec 2017 | A1 |
20180000546 | Crawford et al. | Jan 2018 | A1 |
20180056040 | Fenech et al. | Mar 2018 | A1 |
20180062071 | Bolognia et al. | Mar 2018 | A1 |
20180200015 | Ng et al. | Jul 2018 | A1 |
Number | Date | Country |
---|---|---|
2006202149 | Mar 2009 | AU |
2017272200 | Jan 2018 | AU |
2942656 | Nov 2011 | CA |
105796177 | Jul 2016 | CN |
108524003 | Sep 2018 | CN |
1224918 | Jul 2002 | EP |
1224919 | Jul 2002 | EP |
1382293 | Jan 2004 | EP |
1428472 | Jun 2004 | EP |
2085026 | Aug 2009 | EP |
2258335 | Dec 2010 | EP |
2380550 | Oct 2011 | EP |
2912999 | Sep 2015 | EP |
2913000 | Sep 2015 | EP |
2967411 | Jan 2016 | EP |
3249427 | Nov 2017 | EP |
3278758 | Feb 2018 | EP |
3295887 | Mar 2018 | EP |
3306567 | Apr 2018 | EP |
3308824 | Apr 2018 | EP |
3318213 | May 2018 | EP |
3320874 | May 2018 | EP |
3332706 | Jun 2018 | EP |
3369394 | Sep 2018 | EP |
2009-045454 | Mar 2009 | JP |
5380410 | Jan 2014 | JP |
2016-512457 | Apr 2016 | JP |
2016-104192 | Jun 2016 | JP |
2017-077479 | Apr 2017 | JP |
2017-223657 | Dec 2017 | JP |
2018-011938 | Jan 2018 | JP |
2018-047353 | Mar 2018 | JP |
2018-051306 | Apr 2018 | JP |
2018-079304 | May 2018 | JP |
2018-094404 | Jun 2018 | JP |
2018-108344 | Jul 2018 | JP |
2018-110841 | Jul 2018 | JP |
2018-143766 | Sep 2018 | JP |
WO 9724981 | Jul 1997 | WO |
WO 9724983 | Jul 1997 | WO |
WO 9725101 | Jul 1997 | WO |
WO 9729679 | Aug 1997 | WO |
WO 9804321 | Feb 1998 | WO |
WO 9830144 | Jul 1998 | WO |
WO 9904706 | Feb 1999 | WO |
WO 0016684 | Mar 2000 | WO |
WO 0067640 | Nov 2000 | WO |
WO 0069353 | Nov 2000 | WO |
WO 0215973 | Feb 2002 | WO |
WO 02051329 | Jul 2002 | WO |
WO 02062265 | Aug 2002 | WO |
WO 03103492 | Dec 2003 | WO |
WO 2006027599 | Mar 2006 | WO |
WO 2007005976 | Jan 2007 | WO |
WO 2007038135 | Apr 2007 | WO |
WO 2007121139 | Apr 2007 | WO |
WO 2007048515 | May 2007 | WO |
WO 2007136784 | Nov 2007 | WO |
WO 2008028149 | Mar 2008 | WO |
WO 2009020764 | Feb 2009 | WO |
WO 2010036725 | Apr 2010 | WO |
WO 2010104850 | Sep 2010 | WO |
WO 2010144402 | Dec 2010 | WO |
WO 2010144405 | Dec 2010 | WO |
WO 2010144419 | Dec 2010 | WO |
WO 2011063266 | May 2011 | WO |
WO 2011137301 | Nov 2011 | WO |
WO 2011150358 | Dec 2011 | WO |
WO 2011150376 | Dec 2011 | WO |
WO 2012088535 | Jun 2012 | WO |
WO 2012109760 | Aug 2012 | WO |
WO 2012158500 | Nov 2012 | WO |
WO 2013142386 | Sep 2013 | WO |
WO 2013192598 | Dec 2013 | WO |
WO 2014066383 | May 2014 | WO |
WO 2014066389 | May 2014 | WO |
WO 2014066397 | May 2014 | WO |
WO 2014113577 | Jul 2014 | WO |
WO 2014113612 | Jul 2014 | WO |
WO 2014116853 | Jul 2014 | WO |
WO 2014116961 | Jul 2014 | WO |
WO 201414338 | Sep 2014 | WO |
WO 2014149183 | Sep 2014 | WO |
WO 2014176072 | Oct 2014 | WO |
WO 2015061674 | Apr 2015 | WO |
WO 2016007717 | Jan 2016 | WO |
WO 2016077419 | May 2016 | WO |
WO 2016118744 | Jul 2016 | WO |
WO 2017120434 | Jul 2017 | WO |
WO 2018009841 | Jan 2018 | WO |
WO 2018055433 | Mar 2018 | WO |
Entry |
---|
International Preliminary Report on Patentability for International Application No. PCT/IB2019/053083, dated Oct. 22, 2020. |
Banovac, Filip et al., “Needle Biopsy of Anatomically Unfavorable Liver Lesions with an Electromagnetic Navigation Assist Device in a Computed Tomography Environment” Journal of vascular and interventional radiology, vol. 17, pp. 1671-1675, 2006. |
Bosnjak, Antonio et al. “An Electromagnetic Tracking System for Surgical Navigation with Registration of Fiducial Markers Using the Iterative Closest Point Algorithm” 10th IEEE International Conference on, Nov. 2010, pp. 1-5. |
Dobrzynski, Michal Karol “Quantifying Information Transfer Through a Head-Attached Vibrotactile Display: Principles for Design and Control” IEEE Transactions on Biomedical Engineering, vol. 59, No. 7, Jul. 2012. |
Fischer, Gregor S et al. “Electromagnetic Tracker Measurement Error Simulation and Tool Design” Medical Image Computing and Computer-Assisted Intervention—MICCAI 2005, ed: Springer, 2005, pp. 73-80. |
Gardner, William G. “3-D Audio Using Loudspeakers” Massachusetts Institute of Technology; Sep. 1997. |
Huber, Johannes et al.: Abstract for “Navigated Renal Access Using Electromagnetic Tracking: An Initial Experience” Surgical Endoscopy and Other Interventional Techniques, vol. 25, pp. 1307-1312, Apr. 2011. |
Krücker, Jochen et al. “An Electro-Magnetically Tracked Laparoscopic Ultrasound for Multi-Modality Minimally Invasive Surgery” International Congress Series 1281, May 2005, pp. 746-751. |
Levy, Elliot et al.: Abstract for “Electromagnetic Tracking-Guided Percutaneous Intrahepatic Portosystemic Shunt Creation in a Swine Model” Journal of Vascular and Interventional Radiology, vol. 18, pp. 303-307, Feb. 2007. |
Meyer, Bernhard C et al. “Electromagnetic Field-Based Navigation for Percutaneous Punctures on C-arm CT: Experimental Evaluation and Clinical Application” European Radiology; vol. 18: 2855-2864; Jul. 2008. |
Nagel, Markus et al. “Electromagnetic Tracking System for Minimal Invasive Interventions Using a C-arm System with CT Option: First Clinical Results” Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling, Pts 1 and 2, vol. 6918, pp. G9180-G9180 423, 2008. |
Nixon, Mark A. et al. “The Effects of Metals and Interfering Fields on Electromagnetic Trackers” Presence: Teleoperators and Virtual Environments, vol. 7, No. 2, pp. 204-218, Apr. 1998. |
NDI “Aurora Features” available at http://www.ndigital.com/medical/products/aurora/; printed Sep. 24, 2020. |
Wegner, Kristen "Surgical Navigation System and Method Using Audio Feedback" ICAD '98, 1998. |
Yaniv, Ziv et al. “Electromagnetic Tracking in the Clinical Environment” Technical Report; CAIMR TR-2008-1; Jul. 2008. |
Wikipedia “Bombsight” retrieved from the Internet: https://en.wikipedia.org/w/index.php?title=Bombsight&oldid=826788111; Feb. 21, 2018. |
International Search Report for International Application No. PCT/IB2019/053083, dated Jul. 24, 2019. |
Written Opinion for International Application No. PCT/IB2019/053083, dated Jul. 24, 2019. |
Examination Report for corresponding Indian Patent Application No. 202017044026, dated Jul. 5, 2022. |
First Office Action (Including Translation) for corresponding Chinese Patent Application No. 201980024968.9, dated Dec. 9, 2023. |
Number | Date | Country | |
---|---|---|---|
20210007774 A1 | Jan 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/IB2019/053083 | Apr 2019 | US |
Child | 17032333 | US |