The current state of the art in neuromonitoring and neurocritical care typically relies on transcranial ultrasound, which requires a high-end ultrasound scanner or a dedicated transcranial Doppler system. Such devices are not easy to use and require an operator who has been specially trained to place the probe and identify the right location. Identifying such a location typically involves human observation of ultrasound images to determine a current probing location. This can be difficult due to the subtlety of features in ultrasound images, which can be easy to lose with the naked eye. Furthermore, a full three-dimensional search space is relatively large compared to a typical region of interest, which can result in unpredictable amounts of time spent searching for the right location. Similarly, magnetic resonance (MR) techniques are not practical for easy-to-use point-of-care applications, especially for rapid screening in the field or continuous monitoring in hospitals, and the associated costs prohibit them from being accessible in many hospital settings.
The inventors have recognized the above shortcomings in the current state of the art and have developed novel techniques and devices to address such deficiencies. In particular, the inventors have developed an Artificial-Intelligence (AI)-assisted ultrasound sensing technique capable of autonomously steering ultrasound beams in the brain in two and three dimensions.
In some embodiments, the beam-steering may be used to scan and interrogate various regions in the cranium, and assisted with AI, may be used to identify a region of interest, lock onto the region of interest, and conduct measurements, while correcting for movements and drifts from the target.
The beam-steering techniques may be implemented in an acoustic device and used to sense, detect, diagnose, and monitor brain functions and conditions including but not limited to detection of epileptic seizure, intracranial pressure, vasospasm, traumatic brain injury, stroke, mass lesions, and hemorrhage. Acoustic or sound in a broad sense may refer to any physical process that involves propagation of mechanical waves, including acoustic, sound, ultrasound, and elastic waves.
In some embodiments, the beam-steering techniques may utilize sound waves in passive or active form, measuring signatures such as reflection, scattering, transmission, attenuation, modulation, etc. of sound waves at one probe or multiple probes to process information and train itself for improved performance over time.
In some aspects, the inventors have developed a method comprising forming a beam in a direction relative to a brain of a person, the direction being determined by a machine learning model trained on data from prior signals detected from a brain of one or more persons. In some embodiments, after forming the beam, the method comprises detecting a signal from a region of interest of the brain of the person.
In some aspects, the inventors have developed a device wearable by or attached to or implanted within a person, comprising a transducer configured to form a beam in a direction relative to a brain of a person, the direction being determined using a machine learning model trained on data from prior signals detected from a brain of one or more persons. In some embodiments, the device comprises a processor configured to process a signal detected from a region of interest of the brain of the person.
In some aspects, the inventors have developed a method of making a device wearable by or attached to or implanted within a person, comprising providing a transducer configured to form a beam in a direction relative to a brain of a person, the direction being determined using a machine learning model trained on data from prior signals detected from a brain of one or more persons. In some embodiments, the method comprises providing a processor configured to process a signal detected from a region of interest of the brain of the person.
In some aspects, the inventors have developed a method comprising receiving a signal detected from a brain of a person. In some embodiments, the method comprises providing data from the detected signal as input to a machine learning model to obtain an output indicating an existence, location, and/or segmentation of an anatomical structure in the brain.
In some aspects, the inventors have developed a device wearable by or attached to or implanted within a person, comprising a transducer configured to detect a signal from a brain of a person. In some embodiments, the device comprises a processor configured to provide data from the detected signal as input to a machine learning model to obtain output indicating an existence, location, and/or segmentation of an anatomical structure in the brain.
In some aspects, the inventors have developed a method of making a device wearable by or attached to or implanted within a person, comprising providing a transducer configured to detect a signal from a brain of a person. In some embodiments, the method comprises providing a processor configured to provide data from the detected signal as input to a machine learning model to obtain output indicating an existence, location, and/or segmentation of an anatomical structure in the brain.
In some aspects, the inventors have developed a method, comprising receiving a first signal detected from a brain of a person. In some embodiments, the method comprises determining a position of a region of interest of the brain of the person based on data from the first signal and an estimated position of the region of interest of the brain.
In some aspects, the inventors have developed a device wearable by or attached to or implanted within a person, comprising a transducer configured to detect a first signal from a brain of a person. In some embodiments, the device comprises a processor configured to determine a position of a region of interest of the brain of the person based on data from the first signal and an estimated position of the region of interest of the brain.
In some aspects, the inventors have developed a method of making a device wearable by or attached to or implanted within a person, comprising providing a transducer configured to detect a first signal from a brain of a person. In some embodiments, the method comprises providing a processor configured to determine a position of a region of interest of the brain of the person based on data from the first signal and an estimated position of the region of interest of the brain.
In some aspects, the inventors have developed a method, comprising estimating a shift associated with a signal detected from a brain of a person, wherein the shift is indicative of a change in position from which the signal was detected with respect to a position of a region of interest of the brain of the person.
In some aspects, the inventors have developed a device wearable by or attached to or implanted within a person, comprising a processor configured to estimate a shift associated with a signal detected from a brain of a person, wherein the shift is indicative of a change in position from which the signal was detected with respect to a position of a region of interest of the brain of the person.
In some aspects, the inventors have developed a method of making a device wearable by or attached to or implanted within a person, comprising providing a processor configured to estimate a shift associated with a signal detected from a brain of a person, wherein the shift is indicative of a change in position from which the signal was detected with respect to a position of a region of interest of the brain of the person.
In some aspects, the inventors have developed a device for monitoring and/or treating a brain of a person, comprising a transducer comprising a plurality of transducer elements, wherein at least some of the plurality of transducer elements are configured to generate an ultrasound beam to probe a region of the brain.
In some aspects, the inventors have developed a method for monitoring and/or treating a brain of a person, comprising using at least some of a plurality of transducer elements to generate an ultrasound beam to probe a region of the brain.
Various aspects and embodiments will be described with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
The current state of the art in neuromonitoring and neurocritical care relies on ultrasound devices that require a trained operator for correctly placing a probe and identifying the region that is to be monitored or measured. As a result, the techniques are limited to monitoring only those regions that can be easily identified through human observation of ultrasound images. This can be limiting, since the brain includes many small and complex regions that can seem indistinguishable through simple observation of such images. Monitoring and measuring features in those regions may provide key insights that can be used as a basis for making diagnoses of, determining the severity of, or treating certain neurological conditions. However, without the ability to identify or locate such regions, the conventional techniques are limited in this respect.
Accordingly, in some aspects, the inventors have developed techniques for detecting a signal from a region of interest of a brain of a person. The techniques include using a transducer to detect the signal from the region of interest by forming a beam in a direction relative to the brain of the person, where the direction is determined by a machine learning model trained on prior signals detected from the brain of one or more persons. For example, the transducer can be an acoustic/ultrasound transducer (e.g., a device that converts electrical to mechanical energy and vice versa). The transducer can be a piezoelectric transducer, a capacitive micromachined ultrasonic transducer, a piezoelectric micromachined ultrasonic transducer, and/or another suitable transducer, as aspects of the technology described herein are not limited in this respect. The detected signal can be the result of a signal applied to the brain. For example, the transducer may detect a signal that has been applied to the brain and reflected, scattered, and/or modulated in an acoustic frequency range after interacting with the brain. The detected signal can be a passive signal generated by the brain. The region of interest can include any region of the brain of any size.
Identifying a region of interest in the brain can be challenging due to the large search volume of the brain. Conventional techniques include probing different regions of the brain at random, while observing ultrasound images. This can include detecting signals from a small region of the brain, observing an image that results from the signal to determine whether it includes the region of interest, and repeating this process until the region of interest appears in an image. As described above, this trial-and-error process can be time-consuming and challenging due to the subtlety of ultrasound images.
Accordingly, in some aspects, the inventors have developed techniques for initially guiding a beam towards a region of interest. In some aspects, the techniques include receiving a first signal from a brain of a person, and determining a position of the region of interest based on an estimated position of the region of interest and data from the first signal. The techniques can further include transmitting an instruction to the transducer to detect a second signal from the region of interest of the brain based on the determined position. For example, the first signal can be detected from a region of the brain that is different than the region of interest or that includes the region of interest. The first signal can be detected after a transducer forms a first beam or first set of beams (e.g., over a plane, a sequence of planes, and/or over a volume). In some aspects, the direction for forming the first beam can be random, determined by prior knowledge, or output by a machine learning model. In some aspects, the estimated position may be determined based on prior knowledge and/or estimated using machine learning techniques, as aspects of the technology described herein are not limited in this respect.
In some aspects, once a device is configured to detect signals from a region of the brain that includes the region of interest, identifying the region of interest can further include detecting, localizing, and/or segmenting the region of interest. For example, detecting the region of interest can include determining whether the region of interest exists in the brain, which may help to inform a diagnosis of a neurological condition. Localizing the region of interest can include identifying the position of the region of interest with respect to the scanned plane, sequence of planes, or volume. Such information can help to inform future acquisitions for detecting signals from the region of interest. Segmenting the region of interest can include determining information related to the size of the region of interest, such as volume, diameter, or any other suitable measurement. In some embodiments, due to the variability in size, shape, position, and composition of different regions of the brain, it can be challenging to apply the same techniques to detect, localize, and/or segment different regions of interest.
Accordingly, the inventors have developed techniques for detecting, localizing, and/or segmenting anatomical structures in the brain. The techniques can include receiving a signal detected from a brain of a person and providing data from the detected signal as input to a machine learning model to obtain an output indicating an existence, location, and/or segmentation of an anatomical structure in the brain. For example, the anatomical structure can include a ventricle, at least a portion of the circle of Willis, a blood vessel, musculature, and/or vasculature.
In some embodiments, once a region of interest has been identified, using any suitable technique, it may be desirable to take measurements and/or to monitor the region of interest. Monitoring the region of interest over any period of time may involve focusing on the region of interest (e.g., as opposed to probing other regions of the brain). Furthermore, to ensure accuracy and precision of such monitoring and measurements, it can be important to “lock” onto the region of interest to avoid detecting signals from other regions of the brain. For example, locking onto the region of interest may include focusing on the region of interest to detect signals from the region of interest, as opposed to detecting signals from other regions of the brain. However, the position, shape, and size of features in the brain tend to vary between different people, making it challenging to identify clear boundaries of the region of interest for a particular individual. For example, such variances may occur between people of different ages and people of different genders. As a result, techniques that include focusing on a region of interest based on prior knowledge or based on data acquired from the brains of other people may be associated with some error.
Accordingly, in some aspects, the inventors have developed techniques for detecting a signal from and locking onto a region of interest of the brain. The techniques include receiving a signal detected from a brain of a person and determining a position of the region of interest based on data from the signal and an estimated position of the region of interest. For example, the data can include image data, a quality of the signal, and/or any other suitable data. The estimated position can be determined based on previous knowledge of the position, based on anatomical structures detected in the brain, based on output of a machine learning model, or by any suitable means, as aspects of the technology described herein are not limited in this respect. Determining the position of the region of interest can include providing the data from the signal and the estimated position as input to a machine learning model to obtain, as output, the position of the region of interest. Based on the position output by the machine learning model, the method can further include transmitting an instruction to a transducer to detect a signal from the region of interest of the brain. In some embodiments, a signal quality may be improved when detecting the signal from the region of interest.
Inadvertent movement of a subject may cause a probe that is fixed to the subject's head to become dislodged, disrupting monitoring or measurements of the region of interest. Furthermore, a beam formed for detecting a signal from a region of interest could gradually shift with respect to the transducer, or a contact quality may change. In these cases, the device may no longer be configured to detect signals from a region of interest of the brain. Rather, the device could begin to detect signals from other regions of the brain, interrupting the continuous monitoring of features in the region of interest and/or interfering with measurements being obtained of features in the region of interest.
Accordingly, in some aspects, the inventors have developed techniques for estimating a shift associated with a signal detected from a brain of a person. In some aspects, the shift is indicative of a change in position from which the signal was detected with respect to a position of a region of interest of the brain of the person. For example, the shift may be due to a change in position of hardware used for detecting the signal from the region of interest and/or a shift in a beam formed by the transducer for detecting the signal from the region of interest. In some aspects, for detecting a change in position of hardware, the techniques can include analyzing image data and/or pulse-wave (PW) Doppler data associated with the detected signal. In some aspects, for detecting a shift in a beam formed by the transducer, the techniques can include analyzing statistical features of signals detected over time and determining whether a shift corresponds to a physiological change.
In some embodiments, the beam-steering techniques described herein can be used in conjunction with an acousto-encephalography (or AEG) system, an ultrasound system, and/or any system that passively or actively utilizes sound waves. An exemplary AEG system is described herein, including with respect to
In some aspects, an AEG device described herein can be a smart, noninvasive, transcranial ultrasound platform for measuring brain vitals (e.g., pulse, pressure, flow, softness) that can diagnose and monitor brain conditions and disorders. The AEG device improves over conventional neuromonitoring devices because of features including, but not limited to, being easy to use (AEG does not require prior training or a high degree of user intervention) and being smart (AEG is empowered by an AI engine that accounts for the human factor and as such minimizes errors). It also improves the reliability and accuracy of the measurements. This expands its use cases beyond what is possible with conventional brain monitoring devices. For example, with portable/wearable stick-on probes, the AEG device can be used for both continuous monitoring and rapid screening.
In some embodiments, the AEG device is capable of intelligently steering ultrasound beams in the brain in three dimensions (3D). With 3D beam-steering, AEG can scan and interrogate various regions in the cranium, and assisted with AI, it can identify an ideal region of interest. AEG then locks onto the region of interest and conducts measurements, while the AI component keeps correcting for movements and drifts from the target. The AEG device operates through three phases: 1-Lock, 2-Sense, 3-Track.
During Lock, AEG, at a relatively low repetition rate, may "scan" the cranium to identify and lock onto the region of interest, using AI-based smart beam-steering that progressively narrows down the field-of-view to a desired target region by exploiting a combination of various anatomical landmarks and motion in different compartments. Different types of regions of interest may be determined by the "presets" in a web/mobile App, such as different arteries or beating at a specific depth in the brain. The region of interest can be a single point, a relatively small volume, or multiple points/small volumes at one time. The latter is a unique capability that can probe propagating phenomena in the brain, such as the pulse-wave-velocity (PWV).
During Sense, the AEG device may measure ultrasound footprints of different brain compartments using different pulsation protocols at a much higher repetition rate to support a pulsatile mode that takes the pulse of the brain. The AEG device can also measure continuous wave (CW)-, pulse wave (PW)-, and motion (M)-modes to look at blood flow and motion at select depths.
During Track, the AEG device may utilize a feedback mechanism to evaluate the quality of the measurements. If the device detects misalignment or misdetection, it returns to the Lock phase to properly re-lock onto the target region.
In some embodiments, the AEG device includes core modes of measurements and functionalities, including the ability to take the pulse of the brain, the ability to measure pulse wave velocity (PWV) by probing multiple regions of interest at one time, and the ability to measure other ultrasound modes in the brain, including B-mode (brightness-mode) and C-mode (cross-section mode), blood velocity using CW (continuous-wave) and PW (pulse-wave) Doppler, color flow imaging (CFI), PD (power-Doppler), M-mode (motion-mode), and blood flow (volume rate).
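As a simple illustration of the PWV capability noted above, PWV can be computed as the distance between two probing sites divided by the pulse transit time between them. The sketch below works through that arithmetic; the path length and arrival times are illustrative values, not measurements from the device.

```python
# Minimal sketch: pulse-wave velocity (PWV) from pulse arrival times
# measured at two regions of interest probed at one time. All numeric
# values here are illustrative, not data from the AEG device.

def pulse_wave_velocity(path_length_m, arrival_time_a_s, arrival_time_b_s):
    """PWV = distance between probing sites / pulse transit time."""
    transit_time = arrival_time_b_s - arrival_time_a_s
    if transit_time <= 0:
        raise ValueError("site B must record the pulse after site A")
    return path_length_m / transit_time

# Example: sites 6 cm apart; the pulse arrives 10 ms later at site B.
print(pulse_wave_velocity(0.06, 0.000, 0.010))  # -> 6.0 (m/s)
```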
In some embodiments, the AEG device undertakes a unique approach to estimate intracranial pressure (ICP) based on pulsatility, blood flow, and strain in the brain. The algorithms are built upon a physics-based mathematical model and are augmented with machine learning algorithms. To show the efficacy and train the machine learning algorithm, a clinical study may be performed on a cohort of patients.
In some embodiments, the AEG device can directly measure stiffness in the brain by looking at the time profile of pulsatility and changes in blood flow in the brain. Further, the AEG device can visualize anatomical structures in the brain in 2D and 3D. The AEG device may be equipped with AI for real-time diagnosis of brain health and conditions utilizing vitals in a data-analytics framework to make various diagnoses. The AEG device may use a machine learning model to improve utility and help with critical decision making.
In some embodiments, the AEG is configured to treat the brain of a person using ablation, neuromodulation, ultrasound-guided ultrasound (USgUS) treatment, ultrasound-guided high intensity focused ultrasound (USgHIFU), and/or drug delivery through a blood brain barrier of the brain. For example, the AEG may be used to directly open the blood brain barrier for drug delivery. In some embodiments, this may include using the AEG to guide an external device to provide treatment using drug delivery through the blood brain barrier.
In some embodiments, AEG, and the techniques described herein, may augment and/or be applicable to systems for brain monitoring and/or treatment using different types of signals, such as acoustic signals, ultrasound imaging, optical imaging, functional near infrared spectroscopy (fNIRS) imaging, computed tomography (CT) imaging, magnetic resonance (MR) imaging, micro-wave and mm-wave sensing and imaging, photoacoustic signals, electroencephalogram (EEG) signals, magnetoencephalogram (MEG) signals, radio frequency (RF) signals, and/or any other suitable signals.
In some aspects, the AEG device can include a hub and multiple probes to access different brain compartments, such as temporal and suboccipital, from various points over the head. The hub hosts the main hardware, e.g., analog, mixed, and/or digital electronics. The AEG device can be wearable, portable, or implantable (e.g., under the scalp or skull). In a fully wearable form, the AEG device can also be one or several connected small patch probes. Alternatively, the AEG device can be integrated into a helmet or cap. The AEG device can be wirelessly charged or be wired. It can transfer data, wired or wirelessly, to a host that can be worn (such as a watch or smartphone), bedside/portable (such as a patient monitor), or implanted (such as a small patch over the neck/arm), and/or to a remote platform (such as a cloud platform). AEG devices may be coupled with acoustic or sound-conducting gels (or other materials) or can sense acoustic signals in air (airborne).
An illustrative system/hardware architecture for an AEG system can include a network of probes for active or passive sensing of brain metrics that are connected to front-end electronics. The front-end electronics may include transmit and receive circuitry, which can include analog and mixed circuit electronics. The front-end electronics can be connected to digital blocks such as programmable logic, a field-programmable gate array (FPGA), processor, and a network of memory blocks and microcontrollers to synchronize, control, and/or pipe data to other subsystems including the front-end and a host system such as a computer, tablet, smartphone, or cloud platform. Programmable logic may provide flexibility in updating the design and functionality over time by updating firmware/software without having to redesign the hardware.
In some aspects, the AEG device includes probes that are acoustic transducers, such as piezoelectric transducers, capacitive micromachined ultrasonic transducers (CMUTs), piezoelectric micromachined ultrasonic transducers (PMUTs), electromagnetic acoustic transducers (EMATs), and other suitable acoustic transducers. Among the feasible techniques for exciting the modes of the skull-brain system are direct surface-bonded transducers, wedge transducers, and interdigital/comb transducers. Material and dimensions may determine the bandwidth and sensitivity of the transducer. CMUTs are of particular interest as they can be easily miniaturized even at low frequencies and have superior sensitivity as well as wide bandwidth.
In some embodiments, the CMUT consists of a flexible top plate suspended over a gap, forming a variable capacitor. The displacement of the top plate creates an acoustic pressure in the medium (or vice versa; acoustic pressure in the medium displaces the flexible plate). In contrast with piezoelectric transducers, transduction is achieved electrostatically, by converting the displacement of the plate to an electric current through modulating the electric field in the gap. The merit of the CMUT derives from having a very large electric field in the cavity of the capacitor; a field on the order of 10^8 V/m or higher results in an electromechanical coupling coefficient that competes with the best piezoelectric materials. The availability of micro-electro-mechanical-systems (MEMS) technologies makes it possible to realize thin vacuum gaps where such high electric fields can be established with relatively low voltages. Thus, viable devices can be realized and even integrated directly on electronic circuits such as complementary metal-oxide-semiconductor (CMOS) circuits.
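To make the field-strength figure above concrete, the following back-of-the-envelope calculation assumes a 100 nm vacuum gap and a 10 V bias; both numbers are illustrative, not a device specification.

```python
# Illustrative arithmetic for the gap field quoted above: a modest DC
# bias across a thin vacuum gap produces a very large electric field.
bias_voltage = 10.0    # volts (assumed)
gap_height = 100e-9    # meters, i.e., a 100 nm vacuum gap (assumed)

electric_field = bias_voltage / gap_height
print(f"E = {electric_field:.1e} V/m")  # E = 1.0e+08 V/m
```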
In some embodiments, a further aspect is collapse-mode operation of the CMUT. In this mode of operation, the CMUT cells are designed so that part of the top plate is in physical contact with the substrate, yet electrically isolated by a dielectric, during normal operation. The transmit and receive sensitivities of the CMUT are further enhanced, thus providing a superior solution for ultrasound transducers. In short, the CMUT is a high electric field device; if one can control the high electric field and avoid issues like charging and breakdown, then one has an ultrasound transducer with superior bandwidth and sensitivity that is amenable to integration with electronics, can be manufactured using traditional integrated-circuit fabrication technologies with all of their advantages, and can be made flexible for wrapping around a cylinder or even over human tissue.
It should be appreciated that the above-described AEG system is an exemplary system with which the smart-beam steering techniques described herein can be used. In particular, the smart-beam steering techniques, described herein including with respect to
In some aspects, the beam-steering techniques described herein can be used to autonomously steer acoustic beams (e.g., ultrasound beams) in the brain. The techniques can be used to identify and lock onto regions of interest, such as different tissue types, vasculature, and/or physiological abnormalities, while correcting for movements and drifts from the target. The techniques can further be used to sense, detect, diagnose, and monitor brain functions and conditions, such as epileptic seizure, intracranial pressure, vasospasm, and hemorrhage.
The transducer 602 may be configured to receive and/or apply to the brain an acoustic signal. In some embodiments, the acoustic signal includes any physical process that involves the propagation of mechanical waves, such as acoustic, sound, ultrasound, and/or elastic waves. In some embodiments, receiving and/or applying to the brain an acoustic signal involves forming a beam and/or utilizing beam-steering techniques, further described herein. In some embodiments, the transducer 602 may be disposed on the head of the person in a non-invasive manner.
The processor 604 may be in communication with the transducer 602. The processor 604 may be programmed to receive, from the transducer 602, the acoustic signal detected from the brain and to transmit an instruction to the transducer 602. In some embodiments, the instruction may indicate a direction for forming a beam for detecting an acoustic signal and/or for applying to the brain an acoustic signal. In some embodiments, the processor 604 may be programmed to analyze data associated with the acoustic signal to detect and/or localize structures and/or motion in the brain, such as different anatomical landmarks, tissue types, musculature, vasculature, blood flow, brain beating, and/or physiological abnormalities. In some embodiments, the processor 604 may be programmed to analyze data associated with the acoustic signal to determine a segmentation of different structures in the brain, such as the segmentation of different tissue types and/or vasculature. In some embodiments, the processor 604 may be programmed to analyze data associated with the acoustic signal to sense and/or monitor brain metrics, such as intracranial pressure, cerebral blood flow, cerebral perfusion pressure, and intracranial elastance.
In some embodiments, the transducer (e.g., transducer 602) may be configured for transmit- and/or receive-beamforming. The transducer may include transducer elements that are each configured to transmit waves (e.g., acoustic, sound, ultrasound, elastic, etc.) in response to being electrically excited by an input pulse. Transmit beamforming involves phasing (or time-delaying) the input pulses with respect to one another, such that waves transmitted by the elements constructively interfere in space and concentrate the wave energy into a narrow beam in space. Receive-beamforming involves reconstructing a beam by synthetically aligning waves that arrive at and are recorded by the transducer elements with different time delays.
In some embodiments, the functions of a processor (e.g., processor 604) may include generating transmit timing and possible apodization (e.g., weighting, tapering, and shading) during transmit-beamforming, supplying the time delays and signal processing during receive-beamforming, supplying apodization and summing of delayed echoes, and/or additional signal processing-related activities. In some embodiments, it may be desirable to create a narrow and uniform beam with low sidelobes over a long depth. During both transmit and receive operations, appropriate time delays may be supplied to elements of the transducer to accomplish appropriate focusing and steering.
The direction of transmit- and/or receive-beamforming may be changed using beam-steering techniques. For example, the direction for forming a beam (e.g., beamforming) may be changed by changing the set of time delays applied to the elements of the transducer. Beam-steering may be performed by any suitable transducer (e.g., transducer 602) to change the direction for forming the beam.
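For concreteness, the sketch below computes the linear time-delay profile that steers a one-dimensional phased array to a given angle, which is the delay-based steering described above. The element count, element pitch, and speed of sound are assumed example parameters, not values from this document.

```python
import numpy as np

def steering_delays(n_elements, pitch_m, theta_rad, c_m_s=1540.0):
    """Per-element firing delays (seconds) that tilt the wavefront by theta."""
    # Element positions, centered about the middle of the aperture.
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch_m
    delays = x * np.sin(theta_rad) / c_m_s
    return delays - delays.min()  # shift so every delay is non-negative

# Example: 64 elements at 0.3 mm pitch, steered 20 degrees off-axis.
tau = steering_delays(64, 0.3e-3, np.deg2rad(20.0))
print(tau[:4])  # firing times for the first few elements
```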
In some embodiments, the beam may be steered in any suitable direction in any suitable order. For example, the beam may be steered left to right, right to left, start at elevation first, and/or start at azimuthal first.
In some embodiments, a transducer consists of multiple transducer elements arranged into an array (e.g., a one-dimensional array or a two-dimensional array). Beam-steering may be conducted by a one-dimensional array over a two-dimensional plane using any suitable architecture. For example, as shown in
At 802, the techniques include receiving a first signal detected from the brain. In some embodiments, the transducer detects the signal after forming a first beam (e.g., receive- and/or transmit-beamforming) in a first direction. In some embodiments, the first direction may be a default direction, a direction determined using the techniques described herein including with respect to
At 804, the techniques include providing the data (e.g., raw data and/or processed data) from the first signal as input to a trained machine learning model. At 806, the trained machine learning model may output the direction, with respect to the brain of a person, for forming the beam to detect the signal from the region of interest.
In some embodiments, the trained machine learning model may process the data from the first signal to determine a predicted position of the region of interest relative to the current position (e.g., the position of the region of the brain from which the first signal was detected). In some embodiments, this may include processing the data to detect anatomical landmarks (e.g., ventricles, vasculature, blood vessels, musculature, etc.) and/or motion (e.g., blood flow) in the brain, which may be exploited to determine the predicted position of the region of interest. Based on the predicted position, the machine learning model may determine the direction for forming the second beam and detecting the signal from the region of interest. Machine learning techniques for determining a direction for forming a beam and detecting a signal from the region of interest are described herein including with respect to
In some embodiments, the machine learning model may be trained on prior signals detected from the brain of one or more persons. The training data may include data generated using machine learning techniques such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) and/or physics-based in-silico (e.g., simulation-based) models. An illustrative process for constructing and deploying a machine learning algorithm is described herein including with respect to
Based on the output obtained from the machine learning model at 806, the processor (e.g., processor 604) transmits an instruction to the transducer to detect the signal from the region of interest by forming a beam in the determined direction. In some embodiments, forming a beam (e.g., transmit- and/or receive-beamforming) in the determined direction may include forming a single beam, forming multiple beams, forming beams over a two-dimensional plane, and/or forming beams over a sequence of two-dimensional planes. In some embodiments, the direction of the beam may include the angle of the beam with respect to the face of the transducer.
In some embodiments, detecting the signal from the region of interest of the brain may include autonomously monitoring the region of interest. This may include, for example, monitoring the region of interest using one or more ultrasound sensing modalities, such as pulsatile-mode (P-mode), continuous wave (CW) Doppler, pulse wave (PW) Doppler, pulse-wave-velocity (PWV), color-flow imaging (CFI), Power Doppler (PD), and/or motion mode (M-mode). In some embodiments, detecting the signal from the region of interest of the brain may include processing the signal to determine the existence and/or the location of a feature in the brain. For example, this may include determining the existence and/or location of an anatomical abnormality and/or anatomical structure in the brain. In some embodiments, detecting the signal from the region of interest of the brain may include processing the signal to segment a structure in the brain, such as, for example, ventricles, blood vessels, and/or musculature. In some embodiments, detecting the signal from the region of interest of the brain may include processing the signal to determine one or more brain metrics, such as an intracranial pressure (ICP), cerebral blood flow (CBF), cerebral perfusion pressure (CPP), and/or intracranial elastance (ICE). In some embodiments, detecting the signal from the region of interest may correct for beam aberration.
In some embodiments, the region of interest of the brain may include any suitable region(s) of the brain, as aspects of the technology described herein are not limited in this respect. In some embodiments, the region of interest may depend on the intended use of the techniques described herein. For example, for determining a distribution of motion in the brain, a large region of the brain may be defined as the region of interest. As another example, for determining whether there is an embolism in an artery of the brain, a small and precise region may be defined as the region of interest. As yet another example, for measuring blood flow in a blood vessel, two different regions of the brain may be defined as the regions of interest. In some embodiments, any suitable region of any suitable size may be defined as the region of interest, as aspects of the technology are not limited in this respect.
In some embodiments, in identifying a position of a region of interest, the techniques may include detecting, localizing, and/or segmenting anatomical structures in the brain. In addition to aiding in the identification of the region of interest, the results of detection, localization, and segmentation may be useful for informing diagnoses, determining one or more brain metrics, and/or taking measurements of the anatomical structures. Techniques for detecting, localizing, and/or segmenting anatomical structures in the brain are described herein including with respect to
At 812, the techniques include receiving a signal detected from the brain of a person. In some embodiments, the signal may be received from a transducer (e.g., transducer 602) configured to detect a signal from a region of interest. For example, the autonomous beam-steering techniques described herein, including with respect to
At 814, data from the detected signal is provided to a machine learning model to obtain an output indicating the existence, location, and/or segmentation of the ventricle. In some embodiments, the data includes image data, such as brightness mode (B-mode) image data.
In some embodiments, the machine learning model may be configured, at 814a, to cluster the image data to obtain a plurality of clusters. For example, the image data may be clustered based on pixel intensity, proximity, and/or using any other suitable techniques as embodiments of the technology described herein are not limited in this respect.
At 814b, the machine learning model is configured to identify, from among the plurality of clusters, a cluster that represents the ventricle. In some embodiments, the cluster may be identified based on one or more features of the clusters. For example, features used for identifying such a cluster may include a pixel intensity, a depth, and/or a shape associated with the cluster. In some aspects, the features associated with a cluster may be compared to a template of the region of interest. For example, the template may define expected features of the cluster that represents the ventricle, such as an estimated pixel intensity, depth, and/or shape. The template may be determined based on data obtained from the brains of one or more reference subjects. In some aspects, the techniques may include identifying a cluster that has features that are similar to those of the template.
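A minimal sketch of this cluster-then-match idea follows, assuming a B-mode frame is available as a 2-D NumPy array and using k-means as the (unspecified) clustering method. The template values are hypothetical stand-ins for statistics derived from reference subjects.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_ventricle_cluster(bmode, n_clusters=5):
    rows, cols = np.indices(bmode.shape)
    # Cluster pixels on intensity and spatial proximity (act 814a).
    feats = np.stack([bmode.ravel(), rows.ravel(), cols.ravel()], axis=1)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)  # equalize scales
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    # Hypothetical template of the ventricle cluster (act 814b).
    template = {"intensity": 20.0, "depth": 0.6, "area": 0.05}
    best_mask, best_score = None, np.inf
    for k in range(n_clusters):
        mask = labels.reshape(bmode.shape) == k
        props = {
            "intensity": bmode[mask].mean(),              # ventricles are hypoechoic
            "depth": rows[mask].mean() / bmode.shape[0],  # normalized depth
            "area": mask.mean(),                          # fraction of the frame
        }
        score = sum(abs(props[f] - template[f]) / (abs(template[f]) + 1e-9)
                    for f in template)
        if score < best_score:
            best_mask, best_score = mask, score
    return best_mask  # binary mask of the cluster most similar to the template
```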
At 822, the techniques include receiving a first signal detected from the brain of a person. In some embodiments, the first signal may be received from a transducer (e.g., transducer 602) configured to detect a signal from a region of interest. For example, the autonomous beam-steering techniques described herein including with respect to
At 824, data from the first signal is provided to a machine learning model to obtain an output indicating the existence, location, and/or segmentation of a first portion of the circle of Willis. In some embodiments, the data includes image data, such as, for example, B-mode image data and/or CFI data. In some embodiments, segmenting the first portion of the circle of Willis may include using the techniques described herein including at least with respect to act 814 of flow diagram 810. For example, the machine learning model may be configured to cluster image data and compare features of each cluster to those of a template of the first portion of the circle of Willis.
At 826, the method includes obtaining a segmentation of a second portion of the circle of Willis. In some aspects, the second portion of the circle of Willis may be segmented according to the techniques described herein including with respect to act 824. As a non-limiting example, the first portion of the circle of Willis may include the left middle cerebral artery (MCA), while the second portion of the circle of Willis may include the right internal carotid artery (ICA). Additionally or alternatively, a portion of the circle of Willis may include the right MCA, the left ICA, or any other suitable portion of the circle of Willis, as embodiments of the technology described herein are not limited in this respect.
A segmentation of the circle of Willis may be obtained at 828 based at least in part on the segmentations of the first and second portions of the circle of Willis. For example, obtaining the segmentation of the circle of Willis may include fusing the segmented portions.
In some embodiments, the method 820 includes segmenting the circle of Willis in portions (e.g., the first portion, the second portion, etc.), rather than in its entirety, due to its size and complexity. However, the techniques described herein are not limited in this respect and may be used to segment the whole structure, as opposed to segmenting separate portions before fusing them together.
At 832, the techniques include receiving a signal detected from the brain of a person. In some embodiments, the signal may be received from a transducer (e.g., transducer 602) configured to detect a signal from a region of interest. For example, the autonomous beam-steering techniques described herein, including with respect to
At 834, data from the detected signal is provided to a machine learning model to obtain an output indicating the location of the blood vessels. In some embodiments, the data comprises image data, such as brightness mode (B-mode) image data and/or color flow imaging (CFI) data.
In some embodiments, the machine learning model is configured, at 834a, to extract features from the provided data. In some embodiments, the extracted features may be scale- and/or rotation-invariant. In some embodiments, the features may be extracted utilizing the middle layers of a pre-trained neural network model, examples of which are provided herein.
At 834b, the extracted features are compared to features extracted from a template of the vessel. In some embodiments, the template may be based on data previously-obtained from the brains of one or more subjects. The results of the comparison may be used to identify the location of the vessel with respect to the image data. In some embodiments, identifying the location based on scale and/or rotation invariant features may help to identify a location with minimal vessel variations. In some embodiments, additional data may be acquired based on the identified location of the vessel (e.g., additional B-mode and/or CFI frames), which may be used for taking subsequent measurements of the vessel and/or blood flow in the vessel.
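As one possible rendering of this approach, the sketch below extracts mid-level features with a pre-trained VGG-16 backbone (an assumed choice; the document does not name a model) and scores a frame against a vessel template by cosine similarity.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

weights = VGG16_Weights.DEFAULT
backbone = vgg16(weights=weights).features[:16].eval()  # middle conv blocks
preprocess = weights.transforms()

def mid_level_features(image):
    """image: a PIL image; returns a flattened mid-layer feature vector."""
    with torch.no_grad():
        x = preprocess(image).unsqueeze(0)  # add batch dimension
        return backbone(x).flatten(1)

def match_score(frame_image, template_image):
    """Cosine similarity between frame features and vessel-template features."""
    return F.cosine_similarity(mid_level_features(frame_image),
                               mid_level_features(template_image)).item()
```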
As described above, features of a region of interest, such as the size, shape, and position, may vary between different people. Thus, it may not be possible to estimate the precise position of the region of interest for each individual based on prior knowledge or training data. For example, the techniques described herein, including with respect to
At 842, the techniques include receiving a first signal detected from a brain of a person. In some embodiments, the signal may be detected by a transducer (e.g., transducer 602) forming a beam in a specific direction. For example, the direction may be determined by a user, based on output from a machine learning model (e.g., described herein including with respect to
At 844, the data from the first signal, as well as an estimate of a position of a region of interest, are provided as input to a machine learning model. For example, the data from the first signal may include B-mode image data, CFI data, PW Doppler data, raw beam data, or any suitable type of data related to the detected signal, as embodiments of the technology are not limited in this respect. In some embodiments, the data from the signal may be indicative of a current region from which the transducer is detecting the signal. The estimated position of the region of interest may be determined based on prior physiological knowledge, prior data collected from the brain of another person or persons, output of a machine learning model, output of techniques described herein including at least with respect to
At 846, a position of the region of interest is obtained as output from the machine learning model. For example, the machine learning model may include any suitable reinforcement-learning technique for determining the position of the region of interest. In some embodiments, the determined position of the region of interest, output by the machine learning model, may be another estimated position of the region of interest (e.g., not the exact position of the region of interest).
At 848, an instruction is transmitted to a transducer to detect a second signal from the region of interest of the brain based on the determined position of the region of interest. In some embodiments, the instruction includes a direction for forming a beam to detect a signal from the region of interest. For example, the direction may be determined based on the output of the machine learning model (e.g., the position of the region of interest) and/or as part of processing data using the machine learning model. In some embodiments, as described above, the determined position of the region of interest may also be an estimated position of the region of interest. Therefore, the instruction may instruct the transducer to detect the second signal from the estimated position of the region of interest determined by the machine learning model, rather than an exact position of the region of interest. In some embodiments, the quality of the second signal may be an improvement over the quality of the first signal. For example, the second signal may have a higher signal-to-noise ratio (SNR) than that of the first signal.
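As a toy illustration of the lock-on loop at acts 844-848, the sketch below refines an estimated beam angle by keeping adjustments that improve a signal-quality reward. This epsilon-greedy search is a heavily simplified stand-in for whatever reinforcement-learning method a real system would use; `measure_quality` is a hypothetical callback that forms a beam at the given angle and returns, e.g., an SNR estimate.

```python
import random

def lock_on(measure_quality, initial_angle_deg,
            step_deg=1.0, n_iters=50, epsilon=0.1):
    """Greedy search with occasional exploration over beam angles."""
    angle = initial_angle_deg          # start from the estimated position
    best_q = measure_quality(angle)
    for _ in range(n_iters):
        if random.random() < epsilon:  # explore: try a larger jump
            candidate = angle + random.uniform(-5 * step_deg, 5 * step_deg)
        else:                          # exploit: take a small local step
            candidate = angle + random.choice([-step_deg, step_deg])
        q = measure_quality(candidate)
        if q > best_q:                 # keep only improvements
            angle, best_q = candidate, q
    return angle  # refined direction toward the region of interest
```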
As described above, after locating and/or locking onto a region of interest, it may be desirable to continue to detect signals from the region of interest. However, over time, a signal may no longer be detected from the desired region. For example, due to patient movement, a stick-on probe may become dislodged or slip from its original position. Additionally or alternatively, the beam may gradually shift with respect to the initial direction in which it was formed. Therefore, the techniques described herein provide for addressing any hardware and/or beam shifts.
At 852, the techniques include receiving a signal detected from a brain of a person. The signal is detected by a transducer (e.g., transducer 602) forming a beam in a specified direction. For example, the direction may be determined by a user, based on output from a machine learning model (e.g., described herein including with respect to
At 854, the techniques include analyzing image data and/or pulse wave (PW) Doppler data associated with the detected signal to estimate a shift associated with the detected signal. In some embodiments, the techniques may include one or more processing steps to process data associated with the signal to obtain B-mode image data and/or PW Doppler data. In some embodiments, analyzing the image data and/or PW Doppler data may include one or more steps. For example, the image data may be analyzed in conjunction with the PW Doppler data to indicate a current position and/or possible angular beam shifts that occurred during signal detection. Additionally or alternatively, a current image frame may be compared to a previously-acquired image frame to estimate a change in position of the region of interest within the image frames over time.
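One standard way to implement the frame-to-frame comparison described above is phase correlation, sketched below with NumPy; it assumes the two frames are equally sized 2-D arrays and is not the only possible implementation.

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame):
    """Estimate the (row, col) translation between two image frames."""
    F1 = np.fft.fft2(prev_frame)
    F2 = np.fft.fft2(curr_frame)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```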
At 856, the techniques include outputting the estimated shift. For example, the estimated shift may be used as input to a motion prediction and compensation framework, such as a Kalman filter. This may be used to adjust the beam angle to correct for angular shifts, such that the transducer continues to detect signals from a region of interest. Additionally or alternatively, feedback indicative of the estimated shift may be provided through a user interface. For example, based on the feedback, a user may correct for shifts when the hardware does not have the capability.
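A minimal sketch of such a motion prediction and compensation framework follows: a one-dimensional constant-velocity Kalman filter that smooths noisy per-frame angular shift estimates before the beam is re-steered. The noise parameters are illustrative assumptions.

```python
import numpy as np

class AngleTracker:
    """Constant-velocity Kalman filter over a single beam angle."""

    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(2)                         # state: [angle, angular rate]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # only the angle is observed
        self.Q = q * np.eye(2)                       # process noise (assumed)
        self.R = np.array([[r]])                     # measurement noise (assumed)

    def update(self, measured_angle):
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new shift measurement.
        y = measured_angle - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K[:, 0] * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]  # smoothed angle used to re-steer the beam
```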
At 862, the techniques include receiving a signal detected from a brain of a person. The signal is detected by a transducer forming a beam in a specified direction. For example, the direction may be determined by a user, based on output from a machine learning model (e.g., described herein including with respect to
At 864, the techniques include estimating a shift associated with the detected signal. The techniques for estimating such a shift include acts 864a and 864b, which may be performed contemporaneously, or in any suitable order.
At act 864a, statistical features associated with the detected signal are compared with statistical features associated with a previously-detected signal. In some embodiments, the techniques may include estimating a shift based on the comparison of such features. At 864b, a signal quality of the detected signal is determined. For example, the signal quality may be determined based on the statistical features of the detected signal and/or based on data (e.g., raw beam data) associated with the detected signal. In some embodiments, the output at acts 864a and 864b may be considered in conjunction with one another to determine whether an estimated shift is due to a physiological change.
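A minimal sketch of acts 864a and 864b might look like the following: summary statistics of the current signal window are compared against a reference window, gated by a crude quality check so that low-quality data is not mistaken for a shift. The specific statistics and thresholds are illustrative placeholders.

```python
import numpy as np

def summary_stats(sig):
    """A few simple statistical features of a 1-D signal window."""
    spectrum = np.abs(np.fft.rfft(sig))
    centroid = (spectrum * np.arange(spectrum.size)).sum() / (spectrum.sum() + 1e-12)
    return np.array([sig.mean(), sig.std(), centroid])

def detect_beam_shift(curr, ref, shift_thresh=0.2, quality_thresh=0.1):
    """Flag a beam shift when statistics move and the signal is trustworthy."""
    rel_change = np.abs(summary_stats(curr) - summary_stats(ref)) \
                 / (np.abs(summary_stats(ref)) + 1e-9)        # act 864a
    quality_ok = curr.std() > quality_thresh                  # act 864b (crude proxy)
    # Small, gradual changes in good-quality data are more likely physiological.
    return bool(quality_ok and rel_change.max() > shift_thresh)
```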
The flow diagram 860 may proceed to act 866 when it is determined that the estimated shift is not due to a physiological change. At act 866, the techniques include providing an output indicative of the estimated shift. For example, the output may be used to determine an updated direction for forming a beam to correct for the shift. Additionally or alternatively, the output may be provided as feedback to a user. The user may be prompted by the feedback to correct for the shift when the hardware does not have this capability.
Some aspects of the technology relate to beam-steering techniques for initially identifying a region of interest. In some embodiments, a beam-steering technique informs the direction for forming the first beam (e.g., the first signal detected at 802 of flow diagram 800) and the number of beams to be formed by the transducer (e.g., a single beam, a two-dimensional plane, a sequence of two-dimensional planes, a three-dimensional volume, etc.) at one time. In some embodiments, the beam-steering techniques may involve iterating over multiple regions of the brain (e.g., detecting and processing signals from those regions using the machine learning techniques described herein), prior to identifying the region of interest.
Randomized Beam-Steering 920. In some aspects, the techniques utilize beam-steering at random directions to progressively narrow down the field-of-view to a desired target region, by exploiting a combination of various anatomical landmarks and motion in different compartments. In some embodiments, the machine learning techniques may determine the order in which the sequence is conducted. The system may instantiate a search algorithm with an initial beam (e.g., transmitting and/or receiving an initial beam) that is determined by prior knowledge, such as the relative angle and orientation of the transducer probe with respect to its position on the head. Based on the received beam data at the current and previous states, the system may determine the next best orientation and region for the next scan.
Multi-level (or multi-grid) Beam-Steering 940. In some aspects, the techniques can utilize a multi-level or multi-grid search space to narrow down the field-of-view to a desired region of interest, starting from coarse-grained beam-steering (i.e., large spacing/angles between subsequent beams) that is progressively narrowed down to a finer spacing and angle around the region of interest. The machine learning techniques may determine the degree and area during the grid-refinement process.
Sequential Beam-Steering 960. In some aspects, the techniques can utilize sequential beam-steering, in which the device steers beams sequentially (in a specific order) over a two-dimensional plane, a sequence of two-dimensional planes positioned or oriented differently in three-dimensional space, or a three-dimensional volume. The machine learning techniques may determine the order in which the sequence is conducted. With beam-steering merely over a two-dimensional plane or over a three-dimensional volume, the techniques may analyze a full set of beam indices/angles in two dimensions or three dimensions and determine which of the many beams scanned is a fit for the next beam. With a sequence of two-dimensional planar data and/or images (i.e., frames), the techniques may analyze consecutive frames one after another and determine the next two-dimensional plane over which the scan may be conducted.
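To make the multi-level strategy (940) concrete, the sketch below scans a coarse grid of steering angles and repeatedly refines the grid around the strongest response. `probe` is a hypothetical callback that forms a beam at an (azimuth, elevation) pair and returns a scalar score, such as a landmark-detection confidence.

```python
import numpy as np

def multilevel_search(probe, az_range=(-45.0, 45.0), el_range=(-45.0, 45.0),
                      n_per_axis=5, n_levels=3):
    """Coarse-to-fine grid search over beam-steering angles (degrees)."""
    best = None
    for _ in range(n_levels):
        az = np.linspace(*az_range, n_per_axis)
        el = np.linspace(*el_range, n_per_axis)
        scores = {(a, e): probe(a, e) for a in az for e in el}
        best = max(scores, key=scores.get)
        # Shrink both axes to one grid cell around the current best angles.
        step_az = (az_range[1] - az_range[0]) / (n_per_axis - 1)
        step_el = (el_range[1] - el_range[0]) / (n_per_axis - 1)
        az_range = (best[0] - step_az, best[0] + step_az)
        el_range = (best[1] - step_el, best[1] + step_el)
    return best  # refined (azimuth, elevation) of the region of interest
```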
As described herein, including with respect to 804 of flow diagram 800, a processor may receive, from a transducer, data indicative of a signal detected from the brain. In some embodiments, the processor may process the data according to one or more processing techniques. For example, as shown in
Processing pipeline 1020 shows example processing techniques for B-mode imaging, CFI, and PW Doppler data. For each modality, raw beam data 1004 may undergo time gain compensation (TGC) 1006 to compensate for tissue attenuation. In some embodiments, the data may further undergo filtering 1008 to filter out unwanted signals and/or frequencies. In some embodiments, demodulation 1010 may be performed to remove carrier signals.
After demodulation 1010, processing techniques may vary among the different modalities. As shown, for B-mode imaging, the data may undergo envelope detection 1012 and/or logarithmic compression 1014. In some embodiments, logarithmic compression 1014 may function to adjust the dynamic range of the B-mode images. In some embodiments, the data may then undergo scan conversion 1016 for generating B-mode images. Finally, any suitable techniques 1018 may be used for post-processing the scan converted images.
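A compressed sketch of this B-mode branch is shown below: time gain compensation, envelope detection, and logarithmic compression applied to raw RF beam data (axial samples by beam lines). Filtering, demodulation, and scan conversion are omitted for brevity, and the TGC slope and dynamic range are assumed example values.

```python
import numpy as np
from scipy.signal import hilbert

def bmode_pipeline(rf, fs_hz, tgc_db_per_us=0.2, dyn_range_db=60.0):
    """rf: real array of shape (n_samples, n_lines). Returns a dB image."""
    t_us = 1e6 * np.arange(rf.shape[0]) / fs_hz
    # 1006: time gain compensation for depth-dependent attenuation.
    gain = 10 ** (tgc_db_per_us * t_us / 20.0)
    compensated = rf * gain[:, None]
    # 1012: envelope detection via the analytic signal.
    envelope = np.abs(hilbert(compensated, axis=0))
    # 1014: logarithmic compression into a fixed dynamic range.
    env_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip(env_db, -dyn_range_db, 0.0) + dyn_range_db  # 0..60 dB
```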
For CFI, the data may undergo phase estimation 1024, which may be used to inform velocity estimation 1026. In some embodiments, after velocity estimation 1026, the data may undergo scan conversion 1016 to generate CF images. Any suitable techniques 1018 may be used for post-processing the scan converted CF images.
For PW Doppler data, the demodulated data may similarly undergo phase estimation 1024. In some embodiments, a fast Fourier transform (FFT) 1028 may be applied to the output of phase estimation 1024, prior to generating sonogram 1030.
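For the phase and velocity estimation steps (1024, 1026), a common implementation is the Kasai lag-one autocorrelator, sketched below on demodulated I/Q data arranged as a slow-time ensemble by depth. The carrier frequency, pulse repetition frequency, and sound speed are illustrative parameters.

```python
import numpy as np

def kasai_velocity(iq, prf_hz, f0_hz=2e6, c_m_s=1540.0):
    """iq: complex array, shape (n_pulses, n_depths). Returns m/s per depth."""
    # Lag-one autocorrelation along slow time (phase estimation 1024).
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)
    mean_phase_shift = np.angle(r1)
    # Convert phase shift per pulse to a Doppler frequency, then to an
    # axial velocity (velocity estimation 1026).
    doppler_freq = mean_phase_shift * prf_hz / (2 * np.pi)
    return doppler_freq * c_m_s / (2 * f0_hz)
```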
In some embodiments, any suitable data (e.g., data acquired from any point in pipeline 1020) may be used as input to machine learning techniques 1044, 1064 for determining the beam-steering strategy 1046, 1066 (e.g., the direction of beamforming for detecting the signal from the region of interest). For example, raw channel or beam data 1042 may be used as input to pipeline 1040, while B-mode and CFI data 1062 may be used as input to pipeline 1060. Other non-limiting examples of input data may include demodulated I/Q data, pre-scan conversion beam data, and scan-converted beam data.
In some embodiments, the machine learning techniques 1044, 1064 may include one or more machine learning techniques that inform the beam-steering strategy 1046, 1066. For example, the machine learning techniques may include techniques for detecting a region of interest, localizing a region of interest, segmenting one or more anatomical structures, locking onto a region of interest, correcting for movement due to shifts in the hardware, correcting for movement due to shifts in the beam, and/or any suitable combination of machine learning techniques. Machine learning techniques are further described herein below.
In some embodiments, the signals detected during beam-steering, regardless of the technique, may be used to determine a current probing location from which the signals were detected. In some embodiments, the current probing location may be used to assist in detecting, locating, and/or segmenting a region of interest. The inventors have recognized that it can be challenging to determine a probing location based on observation alone, since structural landmarks in B-mode images can be subtle and easy to lose with the naked eye. Further, a full field-of-view three-dimensional space may be relatively large compared to some regions of interest. The inventors have therefore developed AI-based techniques that can be used to analyze beam data to identify the current probing location and/or guide the user and/or hardware towards the region of interest. In some embodiments, the AI-based techniques may be based on prior general structural knowledge provided in the system. For example, the AI-based techniques may exploit structural features (e.g., anatomical structures) and changes in structural features (e.g., motion) to determine a current probing position (e.g., the position of the region of the brain from which a first signal was detected).
In some embodiments, the AI techniques may include using a deep neural network (DNN) framework, trained using self-supervised techniques, to predict the position of a region of interest. Self-supervised learning is a method for training models without manually labeled data; it is a subset of unsupervised learning in which the supervisory signal is derived from the data itself, allowing the machine to label, categorize, and analyze information on its own and draw conclusions based on connections and correlations. In some embodiments, the DNN framework may be trained to predict the relative position of two regions in the same image. For example, the DNN framework may be trained to predict the position of the region of interest with respect to an anatomical structure in a B-mode and/or CF image.
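For illustration, a minimal sketch of such a self-supervised setup, assuming PyTorch (the source does not name a framework) and an eight-way relative-offset classification:

```python
# Minimal sketch of self-supervised relative-position training: the network
# sees two patches from the same image and classifies their relative position,
# so the labels come for free from how the patches were sampled.
import torch
import torch.nn as nn

class RelativePositionNet(nn.Module):
    def __init__(self, n_positions=8):                 # e.g., 8 neighboring offsets
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 64), nn.ReLU(),
        )
        self.head = nn.Linear(2 * 64, n_positions)     # classify the relative offset

    def forward(self, patch_a, patch_b):
        feats = torch.cat([self.encoder(patch_a), self.encoder(patch_b)], dim=1)
        return self.head(feats)

# Sample patch_b at a known offset from patch_a and use that offset's index as
# the target for nn.CrossEntropyLoss(); no manual annotation is needed.
```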
In some embodiments, the DNN framework may be trained both on two-dimensional and three-dimensional images and/or four-dimensional spatiotemporal data (two- or three-dimensions for space and one-dimension for time). In some embodiments, training the DNN framework may involve obtaining a template for the region of interest. To obtain a template, a disentangling neural network may be trained to extract the region of interest structure and subject-dependent variabilities and combine them to estimate a region of interest shape for a “test” subject.
In some embodiments, given a template of the region of interest and detected signal data (e.g., B-mode image data, CFI data, etc.) from the current probing position, the trained DNN framework may output an indication of the existence of a region of interest, a position of the region of interest with respect to the current probing position, and/or a segmentation of the region of interest. In some embodiments, the output may include a direction for forming a beam for detecting signals from the region of interest. The processor may provide instructions to the transducer to detect a signal from the region of interest by forming a beam in the determined direction.
Due to variability in size, shape, and orientation of structures in the brain (e.g., ventricles, blood vessels, brain tissue, etc.), the AI-based techniques, as described herein above, may be adapted to detect, localize, and/or segment specific structures in the brain.
Ventricle Detection, Localization, and Segmentation. In some embodiments, the techniques described herein may be used to detect, localize, and/or segment ventricles.
For example, flow diagram 1560, described below, illustrates example techniques for detecting, localizing, and segmenting ventricles.
In some embodiments, the segmentation techniques may be used to detect plateaus in the filtered image, while maintaining spatial compactness. An example segmentation algorithm is described by Kim et al. (Improved simple linear iterative clustering super pixels. In 2013 IEEE ISCE, pages 259-260. IEEE, 2013.), which is incorporated herein by reference in its entirety. In some embodiments, this algorithm generates super-pixels by clustering pixels based on their color similarity and proximity in the image plane. This may be done in the five-dimensional [labxy] space, where [lab] is the pixel color vector in CIELAB color space and xy is the pixel position. An example distance measure (Equation 3) is described by Doersch et al. (Unsupervised visual representation learning by context prediction. In Proc. IEEE International Conference on Computer Vision, pages 1422-1430, 2015.), which is incorporated herein by reference in its entirety.
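For illustration, a commonly used form of such a super-pixel distance measure (a sketch of what Equation 3 may resemble; the exact form in the cited work may differ) combines the color and spatial distances with a compactness weight $m$:

$D = \sqrt{d_{lab}^{2} + \left(\frac{d_{xy}}{s}\right)^{2} m^{2}}$

where $d_{lab}$ is the Euclidean distance between the [lab] color vectors of a pixel and a cluster center, and $d_{xy}$ is the corresponding Euclidean distance between their xy positions.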
Here, s represents an estimate of super-pixel size, which may be computed as the square root of the ratio of the number of pixels N in the image to the number of super-pixels k, i.e., $s = \sqrt{N/k}$. An example of a segmented image is shown at 1568 of flow diagram 1560.
In some embodiments, the target segment (e.g., the ventricle) may include a set of characteristics (e.g., location prior, shape prior, etc.) that may be leveraged during detection. For example, discriminating features may include (a) average pixel intensity, (b) depth, and (c) shape. To incorporate the depth prior (score), a Gaussian kernel may be formed over the n_d depth samples (e.g., with σ = n_d/5, centered at n_d/2), with its peak value normalized to one, as ventricles are estimated to be positioned at about the center of the head. This one-dimensional vector may then be scan converted to the ultrasound image space. Accordingly, the depth score for a cluster may be computed as the average of kernel values belonging to that cluster. Flow diagram 1560 illustrates, at 1570, example depth scores (top), calculated according to the techniques described herein. As shown, clusters located near central depths in the image may have a higher score than clusters located at shallower and/or deeper depths.
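For illustration, a minimal sketch of the depth score, assuming a pre-scan-conversion image whose rows correspond to the n_d depth samples (scan conversion is omitted for brevity):

```python
# Minimal sketch of the depth prior: a Gaussian kernel over the n_d depth
# samples, peaked at the image center (sigma = n_d/5, mean = n_d/2, peak
# normalized to one); a cluster's depth score is the mean kernel value over
# its pixels.
import numpy as np

def depth_scores(labels, n_d):
    depth = np.arange(n_d)
    kernel = np.exp(-0.5 * ((depth - n_d / 2) / (n_d / 5)) ** 2)  # peak value = 1
    kernel_img = np.broadcast_to(kernel[:, None], labels.shape)   # one value per row
    return {c: kernel_img[labels == c].mean() for c in np.unique(labels)}
```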
In some embodiments, pixels that belong to ventricles may have relatively lower or higher intensity than other pixels. In some embodiments, computing an intensity score for a cluster may include normalizing pixel values to have a mean of zero and a standard deviation of one. The negative average intensity value for each cluster may then be computed and transformed according to a squashing nonlinearity (Equation 7).
As a result, clusters having a lower intensity may receive a higher intensity score. Flow diagram 1560 illustrates, at 1570, example intensity scores (bottom), calculated according to the techniques described herein.
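For illustration, a minimal sketch of the intensity score; a logistic squashing function is assumed in place of Equation 7 (the exact nonlinearity in the source may differ), chosen because it gives lower-intensity clusters higher scores as described:

```python
# Minimal sketch of the intensity score: normalize the image to zero mean and
# unit standard deviation, negate each cluster's average intensity, and squash
# it into (0, 1) with an assumed sigmoid standing in for Equation 7.
import numpy as np

def intensity_scores(image, labels):
    z = (image - image.mean()) / image.std()          # zero mean, unit std
    scores = {}
    for c in np.unique(labels):
        neg_mean = -z[labels == c].mean()             # negative average intensity
        scores[c] = 1.0 / (1.0 + np.exp(-neg_mean))   # assumed sigmoid for Eq. 7
    return scores
```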
In some embodiments, ventricles may also be viewed as having a particular shape (e.g., a shape prior). For example, the ventricles may be viewed as having a shape similar to that of a butterfly in a two-dimensional transcranial ultrasound image. In some embodiments, the shape may be used as a template for scale-invariant shape matching. After smoothing, the template may be used to extract a reference contour for shape scoring. In some embodiments, a contour may be represented as a set of points. For example, the i-th contour may be represented as:

$cntr_i = \{(x_j, y_j)\}_{j=1}^{n_i}$

The center of the contour may be represented as:

$O_i = \frac{1}{n_i} \sum_{j=1}^{n_i} (x_j, y_j)$

In some embodiments, the contour distance curve $D_i$ may be formed by computing the Euclidean distance of every point in $cntr_i$ to its center $O_i$. To mitigate scale variability, every $D_i$ may be normalized by its mean $m_{D_i}$.
Flow diagram 1560 illustrates, at 1570, example shape scores (middle) for each of the clusters. In some embodiments, clusters whose shape resembles the shape prior (e.g., the butterfly) may receive a higher shape score.
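For illustration, a minimal sketch of a shape score built from the normalized contour distance curves; the correlation-based comparison is an assumption, as the source does not specify the similarity measure:

```python
# Minimal sketch of the shape score: compute the contour distance curve D_i,
# normalize it by its mean m_{D_i} to remove scale, resample both curves to a
# common length, and compare against the template's curve.
import numpy as np

def distance_curve(contour):                    # contour: (n, 2) array of points
    center = contour.mean(axis=0)               # O_i
    d = np.linalg.norm(contour - center, axis=1)
    return d / d.mean()                         # normalize by m_{D_i}

def shape_score(contour, template_contour, n=64):
    grid = np.linspace(0, 1, n)
    a = np.interp(grid, np.linspace(0, 1, len(contour)), distance_curve(contour))
    b = np.interp(grid, np.linspace(0, 1, len(template_contour)),
                  distance_curve(template_contour))
    return max(0.0, np.corrcoef(a, b)[0, 1])    # 1.0 for a perfect shape match
```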
In some embodiments, a final score may be computed for each cluster by computing the product of the depth, shape, and intensity scores. Example final scores are shown at 1572 of flow diagram 1560. The final selection may be performed by selecting an optimal (e.g., maximum, minimum, etc.) score that satisfies a threshold, for example, by selecting the maximum score if it exceeds a threshold of 0.75. An example final selection of a cluster is shown at 1574 of flow diagram 1560. As shown, the selected cluster corresponds to the highest score from among the scores associated with the clusters at 1572.
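A minimal sketch of this final selection step, combining the three per-cluster score dictionaries from the sketches above:

```python
# Final score = depth * shape * intensity per cluster; accept the best cluster
# only if it clears the threshold (0.75 here).
def select_cluster(depth, shape, intensity, threshold=0.75):
    final = {c: depth[c] * shape[c] * intensity[c] for c in depth}
    best = max(final, key=final.get)
    return best if final[best] > threshold else None   # None: no confident match
```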
Circle of Willis Detection and Segmentation. In some embodiments, the techniques described herein may be used to detect, localize, and segment the circle of Willis.
As opposed to ventricle detection, localization, and segmentation, template matching methods may not be feasible for the circle of Willis, due to the large template that would be needed. Template matching methods may not work well for large templates because the speckle noise in the image may mislead the algorithm. Therefore, the circle of Willis may be detected, localized, and/or segmented using the AI-based methods described herein above.
Additionally or alternatively, a second example method for detecting, localizing, and segmenting the circle of Willis may include applying techniques described herein for detecting, localizing, and segmenting blood vessels, as described below with respect to vessel diameter and blood volume rate.
Vessel Diameter and Blood Volume Rate. In some embodiments, techniques may be used to determine a vessel diameter and blood volume rate. The inventors have recognized that traditional matching methods used in computer vision are vulnerable to error in the presence of slight changes in shape, rotation, and scale. As a result, it may be challenging to determine such blood vessel metrics. The inventors have therefore developed techniques for finding (e.g., detecting and/or localizing) a vessel from B-mode and CF images based on template matching, while addressing these issues.
As a result, the techniques may obtain a set of frames from the region of interest that are well aligned even in the face of heartbeat, respiration, and probe-induced movements. In some embodiments, image enhancement techniques 1706 may be applied to the aligned region of interest. In some embodiments, averaging the frames may reduce the noise and result in good contrast between the vessel and the background. Next, a two-component mixture of Gaussians may be used to cluster foreground and background pixels, for example, using pixel value and pixel position as features. In some embodiments, a polynomial curve may be fit to the foreground, and a mask may be created by drawing vertical lines of length r centered at the polynomial. To obtain the best fit, a parameter search 1708 may be conducted over the polynomial order and r 1710. This may result in an analytical equation for the vessel shape and vessel radius, output at 1712. In some embodiments, vessel shape discovery may also be useful in determining the beam angle relative to the blood-flow direction, which improves PW measurement and, accordingly, the cerebral blood flow velocity estimates.
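For illustration, a minimal sketch of this vessel-fitting procedure, assuming scikit-learn for the Gaussian mixture, a brighter-is-vessel foreground convention, and a hypothetical `score_mask` function for evaluating a candidate (polynomial order, r) pair; all three are assumptions:

```python
# Minimal sketch of vessel shape discovery: average the aligned frames, split
# foreground/background with a two-component Gaussian mixture over pixel value
# and position, fit a polynomial to the foreground, and search the polynomial
# order and mask half-width r for the best fit.
import numpy as np
from sklearn.mixture import GaussianMixture

def vessel_fit(frames, score_mask, orders=(1, 2, 3), radii=(2, 4, 6, 8)):
    mean_img = frames.mean(axis=0)                       # averaging reduces noise
    ys, xs = np.indices(mean_img.shape)
    feats = np.column_stack([mean_img.ravel(), ys.ravel(), xs.ravel()])
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(feats)
    comp_means = [mean_img.ravel()[labels == k].mean() for k in (0, 1)]
    fg = (labels == int(np.argmax(comp_means))).reshape(mean_img.shape)  # assumed: brighter = vessel
    fy, fx = np.nonzero(fg)
    best_order, best_r = max(                            # parameter search over order and r
        ((o, r) for o in orders for r in radii),
        key=lambda p: score_mask(np.polyfit(fx, fy, p[0]), p[1], mean_img))
    return np.polyfit(fx, fy, best_order), best_r        # vessel shape equation and radius
```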
As described above, these techniques may be used, for example, to detect, localize, and/or segment the circle of Willis.
In some embodiments, the detection and localization techniques described herein may help to determine an approximate position of a region of interest. However, due to variabilities among subjects (e.g., among the brains of subjects), there may be slight inaccuracies associated with the estimated position of the region of interest. In some embodiments, it may be desirable to address these inaccuracies and precisely lock onto the region of interest for an individual. In some embodiments, a fine-tuning mechanism may be deployed in a closed-loop system to precisely lock onto the region of interest. In some embodiments, the techniques may include analyzing one or more signals detected by the transducer to determine an updated direction for forming a beam for precisely detecting signals from the region of interest.
Active target tracking. The inventors have recognized that, during continuous recording, it can be challenging to keep the hardware on target, despite the closed-loop mechanisms for locking onto the region of interest. Shifts and/or drifts in the hardware (e.g., the transducer) may occur, even though the hardware may be designed to lock in place (e.g., on the region of interest) and keep a sturdy hold in position. In some embodiments, a live tracking system based on a Kalman filter may be used to address hardware shifts and/or drifts.
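For illustration, a minimal sketch of a constant-velocity Kalman filter update for tracking the target position; the state model and noise covariances are illustrative assumptions to be tuned against the actual system:

```python
# Minimal sketch of Kalman tracking for hardware shift/drift: predict where
# the target moved under a constant-velocity model, then correct the estimate
# with each new position detection.
import numpy as np

F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is observed
Q, R = np.eye(4) * 1e-3, np.eye(2) * 1e-1           # process / measurement noise

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # correct with detection z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```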
Course Correcting Component. The inventors have further recognized that, though the techniques may lock the system on target, the beam may gradually shift, or the contact quality may change during the course of measurement. To address this, in some embodiments, the techniques may monitor the signal quality and, upon observing a statistical shift that does not correspond to physiological changes, (a) perform a limited search around the region of interest to correct the limited shift without interrupting the measurements, and/or (b) upon observing substantial dislocations, engage the reinforcement-learning algorithm to realign and/or alert the user of contact issues if the search was unsuccessful.
Autonomous Sensing
In some embodiments, once locked onto a region of interest, the system (e.g., system 600) may continuously and/or autonomously monitor the region of interest using any suitable ultrasound modality. For example, ultrasound modalities may include continuous wave (CW) Doppler, pulse wave (PW) Doppler, pulsatile-mode (P-mode), pulse-wave-velocity (PWV), color flow imaging (CFI), power Doppler (PD), motion mode (M-mode), and/or any other suitable ultrasound modality, as aspects of the technology described herein are not limited in that respect.
Additionally or alternatively, once locked onto a region of interest, the system (e.g., system 600) may sense and/or monitor brain metrics from the region of interest. For example, brain metrics may include intracranial pressure (ICP), cerebral blood flow (CBF), cerebral perfusion pressure (CPP), intracranial elastance (ICE), and/or any suitable brain metric, as aspects of the technology described herein are not limited in this respect.
As described herein, AI can be used on various levels such as in guiding beam steering and beam forming, detection, localization, and segmentation of different landmarks, tissue types, vasculature and physiological abnormalities, detection and localization of blood flow and motion, autonomous segmentation of different tissue types and vasculature, autonomous ultrasound sensing modalities, and/or sensing and monitoring brain metrics, such as intracranial pressure, intracranial elastance, cerebral blood flow, and/or cerebral perfusion.
In some embodiments, beam-steering may employ one or more machine learning algorithms in the form of a classification or regression algorithm, which may include one or more sub-components such as convolutional neural networks; recurrent neural networks such as LSTMs and GRUs; linear SVMs; radial basis function SVMs; logistic regression; and various techniques from unsupervised learning, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), which may be used to extract relevant features from the raw input data.
Exemplary steps 1800 may be undertaken to construct and deploy the algorithms described herein.
The input layer 1904 may be followed by one or more convolution and pooling layers 1910. A convolutional layer may comprise a set of filters that are spatially smaller (e.g., have a smaller width and/or height) than the input to the convolutional layer (e.g., the input 1902). Each of the filters may be convolved with the input to the convolutional layer to produce an activation map (e.g., a 2-dimensional activation map) indicative of the responses of that filter at every spatial position. The convolutional layer may be followed by a pooling layer that down-samples the output of a convolutional layer to reduce its dimensions. The pooling layer may use any of a variety of pooling techniques such as max pooling and/or global average pooling. In some embodiments, the down-sampling may be performed by the convolution layer itself (e.g., without a pooling layer) using striding.
The convolution and pooling layers 1910 may be followed by fully connected layers 1912. The fully connected layers 1912 may comprise one or more layers, each with one or more neurons that receive an input from a previous layer (e.g., a convolutional or pooling layer) and provide an output to a subsequent layer (e.g., the output layer 1908). The fully connected layers 1912 may be described as “dense” because each of the neurons in a given layer may receive an input from each neuron in a previous layer and provide an output to each neuron in a subsequent layer. The fully connected layers 1912 may be followed by an output layer 1908 that provides the output of the convolutional neural network. The output may be, for example, an indication of which class, from a set of classes, the input 1902 (or any portion of the input 1902) belongs to. The convolutional neural network may be trained using a stochastic gradient descent type algorithm or another suitable algorithm. The convolutional neural network may continue to be trained until the accuracy on a validation set (e.g., a held-out portion from the training data) saturates or using any other suitable criterion or criteria.
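For illustration, a minimal sketch of such a network, assuming PyTorch (the source does not name a framework); the layer sizes, class count, and single SGD step are illustrative:

```python
# Minimal sketch of the described architecture: convolution and pooling layers
# 1910, dense fully connected layers 1912, and an output layer 1908, trained
# with a stochastic-gradient-descent-type algorithm.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                           # pooling down-samples the activation map
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 32), nn.ReLU(),    # "dense" fully connected layer
    nn.Linear(32, 4),                          # output layer: one logit per class
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 1, 64, 64), torch.randint(0, 4, (8,))  # stand-in batch
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                                # one stochastic gradient descent step
optimizer.step()
```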
It should be appreciated that the convolutional neural network described above is illustrative, and that variations are possible; for example, different numbers, types, and arrangements of layers may be used, as aspects of the technology described herein are not limited in this respect.
Convolutional neural networks may be employed to perform any of a variety of functions described herein. It should be appreciated that more than one convolutional neural network may be employed to make predictions in some embodiments. Any suitable optimization technique may be used for estimating neural network parameters from training data. For example, one or more of the following optimization techniques may be used: stochastic gradient descent (SGD), mini-batch gradient descent, momentum SGD, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adaptive Moment Estimation (Adam), AdaMax, Nesterov-accelerated Adaptive Moment Estimation (Nadam), and AMSGrad.
An illustrative implementation of a computer system 2000 that may be used in connection with any of the embodiments of the technology described herein is now described. The computing device 2000 may include one or more processors and one or more articles of manufacture that comprise non-transitory computer-readable storage media (e.g., memory and one or more non-volatile storage media).
Computing device 2000 may also include a network input/output (I/O) interface 2040 via which the computing device may communicate with other computing devices (e.g., over a network), and may also include one or more user I/O interfaces 2050, via which the computing device may provide output to and receive input from a user. The user I/O interfaces may include devices such as a keyboard, a mouse, a microphone, a display device (e.g., a monitor or touch screen), speakers, a camera, and/or various other types of I/O devices.
The embodiments described herein can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software, or a combination thereof. When implemented in software, the software code can be executed on any suitable processor (e.g., a microprocessor) or collection of processors, whether provided in a single computing device or distributed among multiple computing devices. It should be appreciated that any component or collection of components that perform the functions described herein can be generically considered as one or more controllers that control the functions discussed herein.
The one or more controllers can be implemented in numerous ways, such as with dedicated hardware, or with general purpose hardware (e.g., one or more processors) that is programmed using microcode or software to perform the functions recited herein.
In this respect, it should be appreciated that one implementation of the embodiments described herein comprises at least one computer-readable storage medium (e.g., RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible, non-transitory computer-readable storage medium) encoded with a computer program (i.e., a plurality of executable instructions) that, when executed on one or more processors, performs the functions discussed herein of one or more embodiments. The computer-readable medium may be transportable such that the program stored thereon can be loaded onto any computing device to implement aspects of the techniques discussed herein. In addition, it should be appreciated that the reference to a computer program which, when executed, performs any of the functions discussed herein, is not limited to an application program running on a host computer. Rather, the terms computer program and software are used herein in a generic sense to reference any type of computer code (e.g., application software, firmware, microcode, or any other form of computer instruction) that can be employed to program one or more processors to implement aspects of the techniques discussed herein.
The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed herein. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.
Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Also, data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements.
Also, various inventive concepts may be embodied as one or more processes, of which examples have been provided. The acts performed as part of each process may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
All definitions, as defined and used herein, should be understood to control over dictionary definitions and/or ordinary meanings of the defined terms.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term).
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.
Having described several embodiments of the techniques described herein in detail, various modifications, and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The techniques are limited only as defined by the following claims and the equivalents thereto.
While some aspects and/or embodiments described herein are described with respect to certain brain conditions, these aspects and/or embodiments may be equally applicable to monitoring and/or treating symptoms for any suitable neurological disorder or brain condition. Any limitations of the embodiments described herein are limitations only of those embodiments and are not limitations of any other embodiments described herein.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 63/091,838, titled “BRAIN MONITOR,” filed Oct. 14, 2020, U.S. Provisional Application Ser. No. 63/094,218, titled “SMART NONINVASIVE TRANSCRANIAL ULTRASOUND SYSTEM,” filed Oct. 20, 2020, and U.S. Provisional Application Ser. No. 63/228,569, titled “METHODS AND APPARATUS FOR SMART BEAM-STEERING,” filed Aug. 2, 2021, all of which are hereby incorporated by reference in their entireties.