Insertion of catheters into blood vessels, whether veins or arteries, can be a difficult task for non-experts or in trauma applications because the vein or artery may be located deep within the body, may be difficult to access in a particular patient, or may be obscured by trauma in the region surrounding the vessel. Multiple attempts at penetration may result in extreme discomfort to the patient, loss of valuable time during emergency situations, or further trauma. Furthermore, central veins and arteries are often in close proximity to each other. While attempting to access the internal jugular vein, for example, the carotid artery may instead be punctured, resulting in severe complications or even death from blood loss driven by the high pressure of arterial flow. Associated nerve pathways may also be found in close proximity to a vessel, such as the femoral nerve located near the femoral artery, puncture of which may cause significant pain or loss of function for a patient.
To prevent complications during cannulation, ultrasonic instruments can be used to determine the location and direction of the vessel to be penetrated. One method for such ultrasound-guided cannulation involves a human expert who manually interprets ultrasound imagery and inserts a needle. Such a manual procedure works well only for experts who perform the procedure regularly enough to cannulate a vessel accurately.
Systems have been developed in an attempt to remove or mitigate the burden on the expert, such as robotic systems that use a robotic arm to insert a needle. These table-top systems and robotic arms are too large for portable use, such that they may not be deployed by medics at a point of injury. In addition, these systems are limited to peripheral venous access and may not be used to cannulate more challenging vessels.
Still other systems display an image overlay on the skin to indicate where a vessel may be located, or otherwise highlight where a peripheral vein lies just below the surface. However, as above, these systems are limited to peripheral veins and provide no depth information that a non-expert could use to guide cannulation; they also suffer failures and challenges associated with improper registration.
Cricothyrotomy and tracheotomy are two surgical procedures that allow a patient's airway to be accessed through the neck when a patient cannot breathe and endotracheal intubation (through the mouth or nose) is not possible or applicable. A cricothyrotomy is an emergency procedure in which a breathing tube is inserted into the trachea through the cricothyroid membrane (between the thyroid cartilage and the cricoid cartilage, which are key anatomical landmarks). A tracheotomy is typically performed in a hospital operating room (OR) or intensive care unit (ICU) by inserting a breathing tube into the trachea below the cricoid cartilage. Cricothyrotomy is a temporizing measure; patients who undergo cricothyrotomy typically need to be converted to tracheotomy to avoid long-term complications.
Overall, about 100,000 tracheotomies are performed in the U.S. each year, although the number increased during the COVID-19 pandemic. Emergency cricothyrotomy is not performed often but is a critical, lifesaving procedure. It currently suffers from a high failure rate because it requires repeated training to gain and maintain experience.
Several commercial products, such as the QuickTrach2, assist a user in performing a cricothyrotomy. These products are intended to simplify inserting the breathing tube. However, they do not address the primary cause of failed cricothyrotomies: the breathing tube being incorrectly inserted outside the trachea, either above it or to the side.
Many of these procedures could benefit from enhanced guidance. Therefore, there is a need for techniques for improved cannulation of airway passages that are less cumbersome, more accurate, and able to be deployed by a non-expert.
The present disclosure addresses the aforementioned drawbacks by providing new systems and methods for guided airway cannulation. The systems and methods apply image analysis to segment airway passages of interest from image data. The image analysis provides guidance for insertion of a cannulation system into a subject, and the procedure may be accomplished by a non-expert based upon the guidance provided. The guidance may include an indicator or a mechanical guide to guide a user when inserting the cannulation system into a subject to penetrate the airway of interest.
In one configuration, a system is provided for guiding an interventional device in an interventional procedure of a subject. The system includes an ultrasound probe and a guide system coupled to the ultrasound probe and configured to guide the interventional device into a field of view (FOV) of the ultrasound probe. The system also includes a non-transitory memory having instructions stored thereon and a processor configured to access the non-transitory memory and execute the instructions. The processor is configured to access image data acquired from the subject using the ultrasound probe; the image data include at least one image of an anatomical landmark structure of the subject. The processor is also configured to determine, from the image data and the anatomical landmark structure, a location of a target airway within the subject. The processor is further configured to determine an insertion point location for the interventional device based upon the location of the target airway, guide placement of the ultrasound probe to position the guide system at the insertion point location, and track the interventional device from the insertion point location to the target airway.
The foregoing and other aspects and advantages of the present disclosure will appear from the following description. In the description, reference is made to the accompanying drawings that form a part hereof, and in which there is shown by way of illustration a preferred embodiment. This embodiment does not necessarily represent the full scope of the invention, however, and reference is therefore made to the claims and herein for interpreting the scope of the invention. Like reference numerals will be used to refer to like parts from Figure to Figure in the following description.
Systems and methods are provided for guided airway cannulation. The systems and methods apply image analysis and machine learning to segment airway passages of interest from image data and to guide a cannulation procedure. The image analysis provides guidance for insertion of a cannulation system into a subject, and the procedure may be accomplished by a non-expert based upon the guidance provided. The guidance may include an indicator or a mechanical guide to guide a user when inserting the cannulation system into a subject to penetrate the airway of interest.
A machine learning, or artificial intelligence (AI), guided airway cannulation system, or “AI-GUIDE-Airway,” may be used to assist medical providers in performing surgical airway procedures more efficiently, more accurately, more safely, and with less exposure to potentially contagious aerosols. Surgical airway procedures may include cricothyrotomy, tracheotomy, and the like. In the case of efficiency, a single provider may be enabled to perform a tracheotomy using the systems and methods of the present disclosure, instead of the three or more providers needed for conventional procedures. In the case of accuracy and safety, one-third of emergency cricothyrotomies fail due to improper placement and inability to locate and access the airway. An AI-GUIDE-Airway procedure may significantly reduce that error.
For tracheotomy, percutaneous procedures, which start with a needle insertion through the skin, have become more common than the traditional, more complex open procedure, but they are not appropriate for all patients, such as those with morbid obesity or challenging neck anatomy, those on blood thinners, or those who need an emergency airway. These cases represent perhaps 20% of the patients who need a tracheotomy. The increased accuracy and safety provided by a guided airway cannulation procedure may improve patient outcomes and allow more of these patients to promptly receive a tracheotomy in the ICU rather than the OR, which also reduces cost. With automated neck ultrasound interpretation and guidance, a guided cannulation in accordance with the present disclosure may bridge the training and experience gap.
Additionally, surgical airway access generates aerosols. In the setting of emerging viral illnesses such as COVID-19, potential aerosol exposure to hospital personnel can result in altered treatment patterns, e.g., reduced number of procedures, to protect providers. A guided airway cannulation in accordance with the present disclosure may operate under an integrated protective barrier, which significantly reduces an operator's exposure to aerosols.
In some configurations, successful airway access may be provided by a machine learning or AI method that identifies key neck landmarks, such as thyroid cartilage, cricothyroid membrane (CTM), cricoid cartilage, thyroid gland, tracheal rings, and the like. Proper tube insertion location may be determined using automated image interpretation from neck ultrasound images acquired by an ultrasound system, such as a commercial ultrasound system. In some configurations, a handheld robotic module, integrated with a commercial ultrasound probe, may be used to perform a series of steps to insert a breathing tube. A machine learning or AI system, software, and/or embedded hardware sensor, may be used to confirm proper tube placement. Automated neck ultrasound sensing and interpretation may be used in procedures such as cricothyrotomy and tracheotomy, which are currently performed using manual palpation and manual incisions.
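As a hedged illustration of the landmark-recognition step above, the following minimal PyTorch sketch classifies a single B-mode frame into one of several neck-landmark classes. The class list, network architecture, and preprocessing are illustrative assumptions, not the specific model of this disclosure.

```python
# Minimal sketch: per-frame neck-landmark classification with a small CNN.
# Class names, architecture, and input size are illustrative assumptions.
import torch
import torch.nn as nn

LANDMARKS = ["thyroid_cartilage", "cricothyroid_membrane",
             "cricoid_cartilage", "thyroid_gland", "tracheal_ring", "other"]

class LandmarkNet(nn.Module):
    def __init__(self, n_classes=len(LANDMARKS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (B, 1, H, W) grayscale ultrasound
        f = self.features(x).flatten(1)
        return self.head(f)        # logits over landmark classes

model = LandmarkNet().eval()
frame = torch.rand(1, 1, 256, 256)           # placeholder B-mode frame
with torch.no_grad():
    label = LANDMARKS[model(frame).argmax(1).item()]
print("Detected landmark:", label)
```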
When energized by a transmitter 106, a given transducer element 104 produces a burst of ultrasonic energy. The ultrasonic energy reflected back to the transducer array 102 (e.g., an echo) from the object or subject under study is converted to an electrical signal (e.g., an echo signal) by each transducer element 104 and can be applied separately to a receiver 108 through a set of switches 110. The transmitter 106, receiver 108, and switches 110 are operated under the control of a controller 112, which may include one or more processors. As one example, the controller 112 can include a computer system.
The transmitter 106 can be programmed to transmit unfocused or focused ultrasound waves. In some configurations, the transmitter 106 can also be programmed to transmit diverged waves, spherical waves, cylindrical waves, plane waves, or combinations thereof. Furthermore, the transmitter 106 can be programmed to transmit spatially or temporally encoded pulses.
The receiver 108 can be programmed to implement a suitable detection sequence for the imaging task at hand. In some embodiments, the detection sequence can include one or more of line-by-line scanning, compounding plane wave imaging, synthetic aperture imaging, and compounding diverging beam imaging.
In some configurations, the transmitter 106 and the receiver 108 can be programmed to implement a high frame rate. For instance, a frame rate associated with an acquisition pulse repetition frequency (“PRF”) of at least 100 Hz can be implemented. In some configurations, the ultrasound system 100 can sample and store at least one hundred ensembles of echo signals in the temporal direction.
The controller 112 can be programmed to implement an imaging sequence using the techniques described in the present disclosure, or as otherwise known in the art. In some embodiments, the controller 112 receives user inputs defining various factors used in the design of the imaging sequence.
A scan can be performed by setting the switches 110 to their transmit position, thereby directing the transmitter 106 to be turned on momentarily to energize transducer elements 104 during a single transmission event according to the implemented imaging sequence. The switches 110 can then be set to their receive position and the subsequent echo signals produced by the transducer elements 104 in response to one or more detected echoes are measured and applied to the receiver 108. The separate echo signals from the transducer elements 104 can be combined in the receiver 108 to produce a single echo signal.
The echo signals are communicated to a processing unit 114, which may be implemented by a hardware processor and memory, to process echo signals or images generated from echo signals. As an example, the processing unit 114 can guide cannulation of a vessel of interest using the methods described in the present disclosure. Images produced from the echo signals by the processing unit 114 can be displayed on a display system 116.
In some configurations, a non-limiting example method may be deployed on an imaging system, such as a commercially available imaging system, to provide a portable ultrasound system with airway cannulation guidance. The systems and methods may locate an airway passage and may provide real-time guidance to the user to position the ultrasound probe and airway cannulation device at the optimal insertion point. The systems may determine a rotational angle for the ultrasound probe with respect to the subject. The probe may include one or more of a fixed needle guide device, an adjustable mechanical needle guide, a displayed-image needle guide, and the like. An adjustable guide may include an adjustable angle and/or depth. The system may guide or communicate placement of, or adjustments to, the guide for the interventional device, such as a needle. For example, a processor of the disclosed system may determine an angle for the interventional device from an insertion point location to a target airway. The system may also determine or regulate the needle insertion distance from the insertion point location to the target airway based upon the depth computed for the anatomical landmark structure. The user may then insert a needle or cannula through the mechanical guide attached to the probe, or through a displayed guide projected from the probe, in order to ensure proper insertion. During insertion, the system may proceed to track the target airway and the penetration device until the airway is penetrated while providing real-time feedback to a user based on tracking the penetration device. A graphical user interface may be used to allow the medic to specify the desired airway and to provide feedback to the medic throughout the process.
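As a sketch of the angle and distance computation described above, the following assumes a simple 2D geometry in the image plane, with the target expressed as a lateral offset and depth relative to the insertion point; coordinates and units are illustrative assumptions.

```python
# Hedged sketch: needle angle and insertion distance from an insertion point
# on the skin to a target in the ultrasound image plane.
import math

def insertion_geometry(target_lateral_mm: float, target_depth_mm: float):
    """Angle (from the skin surface) and path length to a target at a
    lateral offset and depth relative to the insertion point."""
    angle_rad = math.atan2(target_depth_mm, target_lateral_mm)
    distance_mm = math.hypot(target_lateral_mm, target_depth_mm)
    return math.degrees(angle_rad), distance_mm

angle_deg, dist_mm = insertion_geometry(target_lateral_mm=8.0,
                                        target_depth_mm=15.0)
print(f"Guide angle: {angle_deg:.1f} deg, distance: {dist_mm:.1f} mm")
```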
For the purposes of this disclosure and accompanying claims, the term “real-time,” and related terms, refer to and define real-time performance of a system, which is understood as performance subject to operational deadlines from a given event to the system's response to that event. For example, a real-time extraction and/or display of data based on acquired ultrasound data may be one that is triggered and/or executed simultaneously with, and without interruption of, a signal-acquisition procedure.
In some configurations, the system may automate all ultrasound image interpretation and insertion computations, while a medic or a user may implement steps that require dexterity, such as moving the probe and inserting the cannula. Division of labor in this manner may avoid using a dexterous robot arm and may result in a small system that incorporates any needed medical expertise.
Non-limiting example applications may include aiding a medic in performing additional emergency needle insertion procedures, such as needle decompression for tension pneumothorax (collapsed lung) and needle cricothyrotomy (to provide airway access). Portable ultrasound may be used to detect tension pneumothorax and needle insertion point (in an intercostal space, between ribs) or to detect the CTM and needle insertion point.
Anatomical landmarks 220, such as neck landmarks, may be identified along with a proper insertion location. In some configurations, a user may scan an airway identification and cannulation device along a supine patient's neck, starting from just below the chin and moving toward the collarbone. During the course of the scan, images may be processed, such as with a machine learning or AI routine, to automatically recognize the thyroid cartilage, then pass over the small, often less than 1 cm wide CTM, followed by the cricoid cartilage. The thyroid gland that lies to either side of the trachea below the cricoid cartilage may be recognized as a landmark to avoid, as may be significant blood vessels such as the anterior jugular vein. Tracheal rings may be identified as landmarks.
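One plausible way to track the chin-to-collarbone landmark sequence described above is a simple sequential check over per-frame labels; the label names and logic below are assumptions for illustration, not the disclosed recognition routine.

```python
# Illustrative sketch: tracking the expected landmark sequence during a
# chin-to-collarbone scan (thyroid cartilage -> CTM -> cricoid cartilage).
EXPECTED = ["thyroid_cartilage", "cricothyroid_membrane", "cricoid_cartilage"]

def scan_progress(frame_labels):
    """Return how far through the expected landmark sequence the scan is."""
    stage = 0
    for label in frame_labels:
        if stage < len(EXPECTED) and label == EXPECTED[stage]:
            stage += 1
    return stage  # 3 => the full sequence has been observed in order

labels = ["other", "thyroid_cartilage", "thyroid_cartilage",
          "cricothyroid_membrane", "cricoid_cartilage"]
print("Landmarks confirmed:", scan_progress(labels), "of", len(EXPECTED))
```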
Based on the ultrasound images collected during the scan, a user may be guided via a display with directional arrows back to a proper insertion point for either a cricothyrotomy or tracheotomy. In some configurations, the guidance may be individualized to a particular patient's anatomy. The guidance may be configured to overcome challenges in patient variation, such as the variability in neck anatomy ranging from a long neck and prominent thyroid cartilage, to short muscular necks; the variability in ultrasound images of the trachea, which are air-filled but may contain significant fluid in an injured patient; the difficulty in keeping the ultrasound probe centered on the trachea due to protruding cartilage; the difficulty in detecting the small CTM for cricothyrotomy insertion, and the need to avoid the thyroid gland and critical blood vessels.
In some configurations, a handheld robotic module may be configured to take up less space so that it fits within the limited space under the chin. Mechanical neck guides may be configured to fit varying neck sizes in order to guide the ultrasound scan and to stabilize the neck and trachea while an intubation tube is inserted. A handheld robotic module may be used to perform a sophisticated sequence to insert the breathing tube, starting by inserting a needle and incising the skin, followed by a dilation sheath that is inserted along the needle shaft to create an opening sufficiently large for the breathing tube. The dilator is then retracted, leaving in place a track over which the breathing tube is inserted. In some configurations, the dilator may be configured as the breathing tube. A handheld robotic module may allow for one-person operation, in contrast to the three or more medical care providers currently needed to perform a tracheotomy.
Using the imaging data and the identified anatomical landmarks, an airway of interest may be determined at step 320. In a non-limiting example, a processor is configured to assess the plurality of images of the anatomical landmark structure and the plurality of views of the target airway to identify a location on the subject. The location may be determined by segmenting the airway of interest in the imaging data or by using anatomical landmarks to localize the airway. An insertion point may then be determined at step 330 for an airway cannulation system. Determining the insertion point may be based upon the determined location for the airway of interest and upon calculating a depth and a pathway for the cannulation system from the surface of a subject to the airway of interest, without the cannulation system penetrating other critical structures or organs of interest, such as a nerve.
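A minimal sketch of the pathway check at step 330, assuming a binary segmentation mask of structures to avoid: points are sampled along the straight entry-to-target path, and the path is rejected if any sample falls inside a labeled structure. The mask format and geometry are illustrative assumptions.

```python
# Sketch under assumptions: checking a candidate needle path against a
# segmentation mask of structures to avoid (vessels, thyroid gland, nerve).
import numpy as np

def path_is_clear(avoid_mask: np.ndarray, entry, target, n_samples=100):
    """Sample points along the straight entry->target path; the path is
    clear only if none fall inside a labeled structure to avoid."""
    (r0, c0), (r1, c1) = entry, target
    for t in np.linspace(0.0, 1.0, n_samples):
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if avoid_mask[r, c]:
            return False
    return True

mask = np.zeros((256, 256), dtype=bool)
mask[100:120, 140:160] = True     # e.g., an anterior jugular vein region
print("Path clear:", path_is_clear(mask, entry=(0, 128), target=(150, 128)))
```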
The insertion point may be identified for a user at step 340, such as by illuminating a portion of the surface of a subject, by adjusting a mechanical guide to the appropriate settings for the user, and the like. Depth of the penetration may also be controlled by a setting or a height of the mechanical guide. The airway cannulation system may be guided to the airway of interest for penetration at step 350. Guiding the cannulation system may include acquiring images of the airway of interest and the anatomical landmarks as the cannulation system is inserted into the subject and displaying the tracked images for the user.
A machine learning or AI system may be used to confirm successful insertion of a needle, cannula, or dilator. Successful insertion may be determined by assessing breathing tube placement using CO2 sensing and/or ultrasound imaging. If CO2 is detected, then successful penetration of the airway may be confirmed. For ultrasound imaging, the machine learning or AI system may segment an airway passage wall to determine whether the inserted needle, cannula, or dilator has penetrated the airway passage wall, and thereby confirm successful insertion into the airway.
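A minimal sketch of the CO2-based confirmation, assuming a stream of end-tidal CO2 readings; the threshold and debouncing logic are illustrative assumptions rather than disclosed values.

```python
# Minimal sketch of CO2-based placement confirmation. The sensor interface
# and the detection threshold are assumptions for illustration.
ETCO2_THRESHOLD_MMHG = 10.0  # assumed detection threshold

def airway_confirmed(co2_readings_mmhg, min_consecutive=3):
    """Confirm tracheal placement if CO2 stays above threshold for several
    consecutive readings, reducing false positives from sensor noise."""
    streak = 0
    for reading in co2_readings_mmhg:
        streak = streak + 1 if reading >= ETCO2_THRESHOLD_MMHG else 0
        if streak >= min_consecutive:
            return True
    return False

print(airway_confirmed([2.0, 3.1, 18.5, 22.0, 24.3]))  # True
```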
In some configurations, the system can be integrated with a disposable, negative pressure barrier. Non-limiting example negative pressure barriers include a polymer barrier, such as a plastic material, a filter barrier, a HEPA barrier, and the like, to isolate the sterile device, such as a sterile tracheotomy device, from the medical personnel operating the system. A negative pressure barrier may prevent the spread of aerosolized blood during a surgical procedure by pulling aerosolized particles out of the air or region around the subject. This may free up one hand of the user for manipulating the endotracheal tube that is in place before tracheotomy placement, adjusting the ventilator circuit, and performing other tasks that usually require assistance from additional personnel.
Early tracheotomy, performed less than two weeks after beginning mechanical ventilation, has been recognized as a mechanism to decrease ICU length of stay, improve 90-day mortality, shorten ventilator requirement time, and decrease overall hospital costs in patients requiring prolonged mechanical ventilation. An automated airway penetration system has the potential to be widely used to perform and expand the indications for percutaneous tracheotomies in hospital, as a result of improved efficiency and safety, cost saving, improved patient outcomes, and broader indications for use.
An insertion point may then be determined at step 418 for a needle, cannula, or dilator. Determining the insertion point may be based upon the determined location for the airway of interest and the anatomical landmarks, including landmarks to avoid. In some configurations, the method includes calculating a depth and a pathway from the skin surface of a subject to the airway of interest, without the needle, cannula, or dilator penetrating other organs or structures of interest along the pathway, such as a nerve or a landmark to avoid. The insertion point may also be identified for a user at step 418. As above, the insertion point may be identified by illuminating a portion of the surface of a subject, by ensuring a fixed penetration guide is placed over the insertion point, by automatically adjusting an adjustable mechanical guide to the appropriate settings for the user, and the like. Depth of the penetration may also be controlled by an adjusted setting for the adjustable mechanical guide, or by a fixed height of the fixed guide. The needle, cannula, or dilator may be tracked and guided to the airway of interest for penetration at step 420. Guiding the device may include acquiring ultrasound images of the airway of interest and the device as the device is inserted into the subject and displaying the tracked images for the user.
Any ultrasound probe may be used in accordance with the present disclosure, including 1D, 2D, linear, phased-array, and the like. In some configurations, an image of the airway of interest is displayed for a user with any tracking information for the penetrating device overlaid on the image. In some configurations, no image is displayed and instead only the insertion point is identified, such as by illuminating a portion of the surface of a subject. In some configurations, no image is displayed and the user is only informed that the probe has reached the proper location, whereupon a mechanical guide is automatically adjusted to the appropriate settings, such as angle and/or depth, to target an airway of interest. The user may be informed that the probe has reached the proper location by any appropriate means, such as a light indicator, a vibration of the probe, and the like.
In some configurations, identification of placement of the ultrasound transducer at a target location may be performed automatically by the system at step 410. Image data may be used for identifying anatomical landmarks, such as those described above, and may be accessed by the system to provide automatic identification for where the ultrasound transducer has been placed. In some configurations, a user may specify the airway of interest to be targeted. In a non-limiting example combination of the configurations, the location of the ultrasound transducer on the subject may be automatically determined along with the anatomy being imaged, with the user specifying the airway of interest to target in the automatically identified anatomy. A minimum of user input may be used in order to mitigate the time burden on a user.
Locating the airway of interest at step 416 may be based on machine learning of morphological and spatial information in the ultrasound images. In some configurations, a neural network may be deployed for machine learning and may learn features at multiple spatial and temporal scales. Airways of interest may be distinguished based on shape and/or appearance of the airway, shape and/or appearance of surrounding tissues, relative locations of the anatomical landmarks, and the like. Real-time airway identification may be enabled by a temporally trained routine without a need for conventional post-hoc processing.
Temporal information may be used with locating the airway of interest at step 440. Airway appearance and shape may change with movement of the anatomy over time, such as changes with heartbeat, or differences in appearance between hypotensive and normotensive situations. Machine learning routines may be trained with data from multiple time periods, with differences in anatomy reflected over the different periods of time. With a temporally trained machine learning routine, airway identification may be performed in a manner that is robust over time for a subject, without misclassification and without a need to find a specific time frame or a specific probe position to identify vessels of interest.
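As a hedged sketch of a temporally trained routine, the following PyTorch model encodes each frame of a short clip and aggregates features over time with an LSTM, so that classification reflects motion such as heartbeat; the architecture and class count are assumptions for illustration.

```python
# Hedged sketch: a temporal classifier over a short ultrasound clip,
# robust to anatomy motion. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

class TemporalAirwayNet(nn.Module):
    def __init__(self, n_classes=2, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(          # per-frame spatial features
            nn.Conv2d(1, feat, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.temporal = nn.LSTM(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, n_classes)

    def forward(self, clips):                  # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        f = self.encoder(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.temporal(f)
        return self.head(out[:, -1])           # classify from the last step

logits = TemporalAirwayNet()(torch.rand(1, 8, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2])
```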
In some configurations, to prevent any potential misclassifications, conflicting information checks may be included in the system. A conflicting information check may include taking into consideration the general configuration of the anatomy at the location of the probe.
Identifying an insertion point for a user at step 418 may also include the system automatically taking into account the orientation of the probe on a body. A conventional ultrasound probe includes markings to indicate the right versus the left side of the probe, which allows a user to orient the probe such that the mark is on the right of the patient, for example. The probe orientation may also be determined from an analysis of the acquired ultrasound images, or from monitoring the orientation of the markings, such as by an external camera. In some configurations, the penetration guide attachment may be configured to fit into the markings on the probe to ensure that the device is consistent with the orientation of the probe.
A safety check may also be performed as part of determining an insertion point at step 418. A safety check may include confirming that there are no critical structures, such as a bone, an unintended blood vessel, a non-target organ, a nerve, and the like, intervening on the path to penetrate the airway. The safety check may also include forcing the system to change the location of the penetration to avoid penetrating such critical structures or landmarks to avoid. In some configurations, the safety check may include confirming that the needle has penetrated the airway of interest via the tracking and guidance at step 420, such as by detecting whether CO2 is present after penetration. The safety check may also include determining that the user is holding the system in a stable position, verified from the ultrasound image or from an inertial measurement unit on the handle of the system.
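The safety checks above might be combined as in the following sketch, which tests path clearance, probe stability from an inertial measurement unit, and optional CO2 confirmation; all thresholds and helper names are assumptions, not disclosed values.

```python
# Illustrative combined safety check: path clearance, IMU-based probe
# stability, and post-penetration CO2 confirmation. Thresholds are assumed.
import numpy as np

MAX_MOTION_G = 0.05        # assumed tolerable handle acceleration jitter

def probe_is_stable(accel_samples_g: np.ndarray) -> bool:
    """Stable if the accelerometer magnitude varies little around 1 g."""
    magnitudes = np.linalg.norm(accel_samples_g, axis=1)
    return float(np.std(magnitudes)) < MAX_MOTION_G

def safety_check(path_clear, accel_samples_g, co2_detected=None):
    if not path_clear:
        return "reposition: critical structure on path"
    if not probe_is_stable(np.asarray(accel_samples_g)):
        return "hold: stabilize probe before insertion"
    if co2_detected is False:
        return "alert: CO2 not detected after penetration"
    return "ok"

samples = np.random.normal([0.0, 0.0, 1.0], 0.01, size=(50, 3))
print(safety_check(True, samples, co2_detected=None))  # "ok"
```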
In some configurations, the method includes guiding a user in placement of the ultrasound probe on the subject. A target for penetration may be identified, such as by machine learning in accordance with the present disclosure, and localized. A user may then be guided in which direction to move the ultrasound probe for placement over an identified target. Once the ultrasound probe has reached the target location, a signal may indicate for the user to stop moving the probe. Guidance may be provided by the signal, such as the light on the probe, in a non-limiting example. Needle placement and penetration may proceed after the location of the target has been reached.
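A minimal sketch of this directional guidance, assuming the target's lateral position in the image is known from the detection step; the cue vocabulary and pixel tolerance are illustrative assumptions.

```python
# Minimal sketch: convert the offset between the detected target and the
# probe's field-of-view center into a directional cue for the user.
def guidance_cue(target_col: int, image_width: int, tolerance_px: int = 10):
    offset = target_col - image_width // 2
    if abs(offset) <= tolerance_px:
        return "stop: probe centered on target"
    return "move probe left" if offset < 0 else "move probe right"

print(guidance_cue(target_col=200, image_width=256))  # "move probe right"
```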
Additionally or alternatively, in some embodiments, the computing device 550 can communicate information about data received from the image source 502 to a server 552 over a communication network 554, which can execute at least a portion of the airway of interest image processing system 504 to generate images of an airway of interest, or otherwise segment an airway of interest from data received from the image source 502. In such embodiments, the server 552 can return information to the computing device 550 (and/or any other suitable computing device) indicative of an output of the airway of interest image processing system 504 to generate images of an airway of interest, or otherwise segment an airway of interest from data received from the image source 502 that may include use of anatomical landmarks.
In some embodiments, computing device 550 and/or server 552 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device 550 and/or server 552 can also reconstruct images from the data.
In some embodiments, image source 502 can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as an ultrasound system, another computing device (e.g., a server storing image data), and so on. In some embodiments, image source 502 can be local to computing device 550. For example, image source 502 can be incorporated with computing device 550 (e.g., computing device 550 can be configured as part of a device for capturing, scanning, and/or storing images). As another example, image source 502 can be connected to computing device 550 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, image source 502 can be located locally and/or remotely from computing device 550, and can communicate data to computing device 550 (and/or server 552) via a communication network (e.g., communication network 554).
In some embodiments, communication network 554 can be any suitable communication network or combination of communication networks. For example, communication network 554 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 554 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
In some embodiments, communications systems 608 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks. For example, communications systems 608 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 608 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
In some embodiments, memory 610 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 602 to present content using display 604, to communicate with server 552 via communications system(s) 608, and so on. Memory 610 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 610 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 610 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 550. In such embodiments, processor 602 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 552, transmit information to server 552, and so on.
In some embodiments, server 552 can include a processor 612, a display 614, one or more inputs 616, one or more communications systems 618, and/or memory 620. In some embodiments, processor 612 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 614 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 616 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
In some embodiments, communications systems 618 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks. For example, communications systems 618 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 618 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
In some embodiments, memory 620 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 612 to present content using display 614, to communicate with one or more computing devices 550, and so on. Memory 620 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 620 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 620 can have encoded thereon a server program for controlling operation of server 552. In such embodiments, processor 612 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
In some embodiments, image source 502 can include a processor 622, one or more image acquisition systems 624, one or more communications systems 626, and/or memory 628. In some embodiments, processor 622 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more image acquisition systems 624 are generally configured to acquire data, images, or both, and can include an ultrasound system. Additionally or alternatively, in some embodiments, one or more image acquisition systems 624 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an ultrasound system or a subsystem of an ultrasound system. In some embodiments, one or more portions of the one or more image acquisition systems 624 can be removable and/or replaceable.
Note that, although not shown, image source 502 can include any suitable inputs and/or outputs. For example, image source 502 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, image source 502 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
In some embodiments, communications systems 626 can include any suitable hardware, firmware, and/or software for communicating information to computing device 550 (and, in some embodiments, over communication network 554 and/or any other suitable communication networks). For example, communications systems 626 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 626 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
In some embodiments, memory 628 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 622 to control the one or more image acquisition systems 624, and/or receive data from the one or more image acquisition systems 624; to generate images from data; to present content (e.g., images, a user interface) using a display; to communicate with one or more computing devices 550; and so on. Memory 628 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 628 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 628 can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source 502. In such embodiments, processor 622 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (“RAM”), flash memory, electrically programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
In some configurations, the machine learning or AI system may be configured to determine the anatomical landmarks, identify a location for an airway of interest, and provide automated guidance for penetrating the airway of interest. The machine learning or AI system may be trained using annotated anatomical images to learn the anatomical landmarks that may be used to determine the location of an airway of interest and to guide penetration of the airway without impinging upon critical structures to avoid.
In a non-limiting example, a ResNet AI model 660 may be pretrained using any one of, but not limited to, ImageNet, images from public ultrasound databases, or a custom ultrasound database, and then fine-tuned on neck ultrasound data.
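A hedged sketch of that recipe using the torchvision API: a ResNet-18 with ImageNet weights has its final layer replaced for an assumed number of neck-landmark classes and can then be fine-tuned on neck ultrasound data. The class count and freezing strategy are assumptions, not disclosed details.

```python
# Hedged sketch: ImageNet-pretrained ResNet adapted for neck-landmark
# classes, ready for fine-tuning on neck ultrasound data.
import torch.nn as nn
from torchvision import models

N_LANDMARK_CLASSES = 6  # assumed, e.g., the neck landmarks listed above

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, N_LANDMARK_CLASSES)

# Optionally freeze the pretrained backbone and fine-tune only the new head.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
```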
In a non-limiting example, a YOLO (you-only-look-once) AI model may apply bounding box detection to the anatomical images for precise localization of a needle insertion point.
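A hedged sketch using the Ultralytics YOLO API follows; the fine-tuned weight file, input image, and class mapping are hypothetical, as the disclosure does not specify a particular YOLO variant or training set.

```python
# Illustrative sketch with the Ultralytics YOLO API. The weight file and
# image path are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("neck_landmarks.pt")          # hypothetical fine-tuned weights
results = model("neck_ultrasound_frame.png")

for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # candidate insertion-point center
    print(f"{cls_name}: center=({cx:.0f}, {cy:.0f}) conf={float(box.conf):.2f}")
```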
Alternatively, an optical stabilizer 790 may be utilized.
The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.
This application is based on, claims priority to, and incorporates herein by reference U.S. Provisional Application Ser. No. 63/357,911, filed Jul. 1, 2022.
This invention was made with government support under FA8702-15-D-0001 awarded by the U.S. Army and Defense Health Agency. The government has certain rights in the invention.