A person may experience increased intracranial pressure (ICP) as a result of a primary injury, such as head trauma, an underlying health problem, or the like. Further, increased ICP may aggravate secondary injuries, such as ischemia, cerebral edema, hypoxia, or the like, that manifest after the primary injury. The optic nerve sheath may increase in size as a result of increased ICP. Accordingly, an enlargement of the optic nerve sheath may be indicative of elevated ICP.
This summary introduces concepts that are described in more detail in the detailed description. It should not be used to identify essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
In an aspect, an ultrasound imaging system may include an ultrasound probe, a display, and one or more processors configured to control the ultrasound probe to acquire three-dimensional (3D) ultrasound data of an optic nerve sheath, determine a measurement of the optic nerve sheath based on the 3D ultrasound data, and control the display to display information related to the measurement of the optic nerve sheath.
In another aspect, a method may include controlling an ultrasound probe to acquire three-dimensional (3D) ultrasound data of an optic nerve sheath, determining a measurement of the optic nerve sheath based on the 3D ultrasound data, and controlling a display to display information related to the measurement of the optic nerve sheath.
In another aspect, an ultrasound imaging system may include a memory configured to store instructions, and one or more processors configured to execute the instructions to control an ultrasound probe to acquire three-dimensional (3D) ultrasound data of an optic nerve sheath, determine a measurement of the optic nerve sheath based on the 3D ultrasound data, and control a display to display information related to the measurement of the optic nerve sheath.
Some embodiments of the present disclosure provide an improved ultrasound imaging system that improves the accuracy, efficiency, speed, and reliability of ultrasound data acquisition and optic nerve sheath measurement, provide an improvement to the technical field of ultrasound imaging, and provide an improvement to patient safety and treatment.
Embodiments of the present disclosure will now be described, by way of example, with reference to the figures.
Ultrasound imaging systems and methods may use two-dimensional (2D) ultrasound imaging to aid medical personnel in determining a measurement of the optic nerve sheath. In this way, ultrasound imaging systems and methods that utilize 2D ultrasound data might provide relatively rapid and non-invasive assessment of ICP through measurement of the optic nerve sheath. However, such ultrasound imaging systems and methods might require optimal alignment of the ultrasound probe with respect to an anatomical plane of the optic nerve sheath in order to acquire a 2D ultrasound image that accurately reflects a true measurement of the optic nerve sheath.
Accordingly, ultrasound imaging systems and methods that utilize 2D ultrasound data might involve a skilled operator spending sufficient time and imaging resources to position the ultrasound probe while acquiring the 2D ultrasound data. Thus, ultrasound imaging systems and methods that utilize 2D ultrasound data might not be optimally efficient to implement at a point of care, at a point of injury, or in a medical vehicle during transport to a treatment location. Further, ultrasound imaging systems and methods that utilize 2D data might involve a skilled reviewer assessing 2D ultrasound images to determine a measurement of the optic nerve sheath. If the ultrasound probe was misaligned with respect to the optic nerve sheath during image acquisition, then the 2D ultrasound images might result in optic nerve sheath measurements that have suboptimal accuracy, reliability, etc. Also, if the anatomy of the optic nerve sheath is tilted, curved, etc., then the 2D ultrasound images might not accurately reflect the true size of the optic nerve sheath, and might result in inaccurate optic nerve sheath measurements.
The ability of medical personnel to accurately identify elevated ICP and routinely monitor changes in ICP is valuable. Further, the ability to quickly and efficiently identify elevated ICP in a resource-limited setting might be valuable. For instance, medical personnel might desire to determine ICP at a point of care, at a point of injury, or during transport in a medical vehicle in order to accurately determine an appropriate treatment location. As another example, medical personnel might desire to identify elevated ICP in a large number of casualties at the site of a mass-casualty event in order to triage the victims.
Some embodiments of the present disclosure provide an improved ultrasound imaging system that improves the accuracy, efficiency, speed, and reliability of ultrasound data acquisition and optic nerve sheath measurement, provide an improvement to the technical field of ultrasound imaging, and provide an improvement to patient safety and treatment. For instance, some embodiments of the present disclosure provide an ultrasound imaging system that may acquire 3D ultrasound data of an optic nerve sheath, and determine a measurement of the optic nerve sheath using the 3D ultrasound data. Further, some embodiments of the present disclosure provide an ultrasound imaging system that may utilize artificial intelligence (AI) models for scan guidance, segmentation, measurement, and/or diagnosis. In this way, some embodiments of the present disclosure provide an ultrasound imaging system that improves the speed and efficiency of acquiring accurate ultrasound data that reflects the true anatomy of the optic nerve sheath, that reduces the sensitivity of alignment of the ultrasound probe with respect to the optic nerve sheath, that reduces the need for a skilled operator of the ultrasound probe, that improves the accuracy of optic nerve sheath measurement, and that permits optic nerve sheath measurement in a variety of time-limited and resource-limited settings.
The ultrasound probe 102 may be configured to acquire 3D ultrasound data. For example, the ultrasound probe 102 may be a linear probe, a phased array probe, a curved linear probe coupled with a position tracking system, a mechanically steered linear array transducer, a phased array transducer, a curved linear array transducer, an electronically steered 2D transducer array, an electronic 3D (e3D) probe, an electronic 4D (e4D) probe, a low-profile wearable patch version of any of the foregoing probes, or the like. According to an embodiment, the ultrasound probe 102 may be configured to emit ultrasound signals suitable for ophthalmic imaging. The ultrasound probe 102 may be configured to generate ultrasound signals, emit the ultrasound signals towards a target location of a subject, receive echo ultrasound signals that are back-scattered from the target location of the subject, generate 3D ultrasound data based on the echo ultrasound signals, and output the 3D ultrasound data. The target location may be an optic nerve sheath, an optic nerve, a retina, or the like. The subject may be a person, an animal, a phantom, or the like.
The transmit beamformer 104 may be configured to apply delay times to electrical signals provided to the elements 108 to focus corresponding ultrasound signals at the target location. The transmitter 106 may be configured to transmit electrical signals to the elements 108 to drive the elements 108 to emit ultrasound signals towards the target location. The elements 108 may be configured to receive the electrical signals from the transmitter 106, convert the electrical signals into ultrasound signals, and emit the ultrasound signals towards the target location. The elements 108 may be configured to receive echo ultrasound signals that are back-scattered by the target location, convert the echo ultrasound signals into electrical signals, and provide the electrical signals to the receiver 110. The receiver 110 may be configured to receive electrical signals from the elements 108, and provide the electrical signals to the receive beamformer 112. The receive beamformer 112 may apply delay times to the electrical signals received from the elements 108.
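By way of a non-limiting illustration, the following sketch shows one way the transmit focusing delays described above may be computed for a linear array; the speed of sound, element pitch, element count, and focal depth are assumed example values rather than parameters of the disclosed system:

```python
import numpy as np

C = 1540.0        # assumed speed of sound in soft tissue (m/s)
PITCH = 0.3e-3    # assumed element spacing (m)
N_ELEMENTS = 128  # assumed element count

def focusing_delays(focus_xz, n=N_ELEMENTS, pitch=PITCH, c=C):
    """Per-element transmit delays (s) so all wavefronts arrive in phase
    at the focal point focus_xz = (lateral x, depth z), in meters."""
    x = (np.arange(n) - (n - 1) / 2.0) * pitch      # element x-positions
    dist = np.hypot(x - focus_xz[0], focus_xz[1])   # element-to-focus distance
    # Elements farther from the focus fire first; the nearest fires last,
    # so every wavefront arrives at the focus at time dist.max() / c.
    return (dist.max() - dist) / c

delays = focusing_delays((0.0, 30e-3))  # e.g., focus 30 mm deep, on axis
```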
The user input device 114 may be configured to receive a user input, and provide the user input to the processor 116. For example, the user input device 114 may be a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, or the like. Additionally, or alternatively, the user input device 114 may be configured to sense information. For example, the user input device 114 may sense information from an electromagnetic positioning system, an inertial measurement system, an accelerometer, a gyroscope, an actuator, or the like.
The processor 116 may be configured to perform the operations as described herein. For example, the processor 116 may be a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or the like. The processor 116 may be implemented in hardware, firmware, or a combination of hardware and software. The processor 116 may include one or more processors 116 configured to perform the operations described herein. For example, a single processor 116 may be configured to perform all of the operations described herein. Alternatively, multiple processors 116, collectively, may be configured to perform all of the operations described herein, and each of the multiple processors 116 may be configured to perform a subset of the operations described herein. For example, a first processor 116 may perform a first subset of the operations described herein, a second processor 116 may be configured to perform a second subset of the operations described herein, etc.
The processor 116 may be configured to control the ultrasound probe 102 to acquire 3D ultrasound data. The processor 116 may be configured to control which of the elements 108 are active, and control the shape of a beam emitted from the ultrasound probe 102. The processor 116 may generate ultrasound images for display. For example, the processor 116 may generate B-mode images, color Doppler images, M-mode images, color M-mode images, or the like.
The display 118 may be configured to display information. For example, the display 118 may be a monitor, a light-emitting diode (LED) display, a cathode ray tube, a projector display, a touchscreen, a tablet computer, a mobile phone, or the like. The display 118 may display ultrasound images based on the 3D ultrasound data in real-time. For example, the display 118 may display the ultrasound images within one second, two seconds, five seconds, etc., of the 3D ultrasound data being acquired by the ultrasound probe 102.
The memory 120 may be configured to store information and/or instructions for use by the processor 116. The memory 120 may be a non-transitory computer-readable medium. For example, the memory 120 may be a random access memory (RAM), a read only memory (ROM), a flash memory, a magnetic memory, an optical memory, or the like. The memory 120 may be configured to store instructions that, when executed by the processor 116, cause the processor 116 to perform the operations described herein.
The memory 120 may store one or more AI model inferences. For example, the memory 120 may store the scan guidance AI model inference 122 that may be configured to determine a scan quality metric, the segmentation AI model inference 124 that may be configured to three-dimensionally segment an optic nerve sheath based on 3D ultrasound data, the measurement AI model inference 126 that may be configured to determine a measurement of the optic nerve sheath, and/or the diagnosis AI model inference 128 that may be configured to determine a diagnosis of a subject based on a measurement of the optic nerve sheath of the subject.
The communication interface 130 may be configured to enable the processor 116 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. For example, the communication interface 130 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a wireless fidelity (Wi-Fi) interface, a cellular network interface, or the like.
The server 132 may be configured to provide information to the processor 116 and/or the memory 120 via the communication interface 130. For example, the server 132 may be a physical server, a cloud server, a virtual machine, or the like.
The network 134 may be a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a cellular network, a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, or the like, and/or a combination of these or other types of networks.
The number and arrangement of the devices of the ultrasound imaging system 100 shown in
As shown in
According to an embodiment, and to assist the operator of the ultrasound probe 102 to acquire the 3D ultrasound data, the processor 116 may control the display 118 to display preview ultrasound images in real-time.
According to another embodiment, and to also assist the operator of the ultrasound probe 102 to acquire the 3D ultrasound data, the processor 116 may generate scan guidance information, and control the display 118 to display the scan guidance information to assist or guide the operator in positioning the ultrasound probe 102 to acquire the 3D ultrasound data for determining the measurement of the optic nerve sheath.
As shown in
As further shown in
According to an embodiment, the processor 116 may determine the scan quality metric based on one or more image quality properties of the positioning ultrasound data. The image quality properties may be a spatial resolution, a contrast resolution, a probe motion parameter, a noise parameter, an artifact parameter, or the like. According to another embodiment, the processor 116 may determine the scan quality metric based on a degree of alignment between an anatomical structure of the ocular anatomy of the subject and an alignment reference. The anatomical structure may be a centerline of the optic nerve, an optic nerve, an optic nerve sheath, a landmark of the retina, or the like. The alignment reference may be a focal point of an ultrasound beam, a centerline of the acquisition volume for the 3D ultrasound data, or the like. According to another embodiment, the processor 116 may determine the scan quality metric based on the scan guidance AI model inference 122. For example, the processor 116 may input the positioning ultrasound data into the scan guidance AI model inference 122, and determine the scan quality metric based on an output of the scan guidance AI model inference 122.
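By way of a non-limiting illustration, the following sketch shows one possible alignment-based scan quality metric combining an angular term and a lateral-offset term; representing the anatomical structure and the alignment reference each as a direction vector plus a point, and scaling the result to 0-100, are assumptions made for illustration:

```python
import numpy as np

def alignment_quality(nerve_dir, nerve_pt, ref_dir, ref_pt, max_offset=5e-3):
    """Score 0-100: 100 when the nerve centerline is parallel to, and
    centered on, the alignment reference line; inputs in meters."""
    nerve_dir = np.asarray(nerve_dir, float)
    nerve_dir /= np.linalg.norm(nerve_dir)
    ref_dir = np.asarray(ref_dir, float)
    ref_dir /= np.linalg.norm(ref_dir)
    angular = abs(float(np.dot(nerve_dir, ref_dir)))  # 1.0 when parallel
    d = np.asarray(nerve_pt, float) - np.asarray(ref_pt, float)
    # Perpendicular distance of the nerve point from the reference line.
    offset = np.linalg.norm(d - np.dot(d, ref_dir) * ref_dir)
    lateral = max(0.0, 1.0 - offset / max_offset)     # 1.0 when centered
    return 100.0 * angular * lateral
```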
As further shown in
According to an embodiment, the scan guidance information may include an ultrasound image.
According to an embodiment, the scan guidance information may be a preview ultrasound image. For example, the processor 116 may generate preview ultrasound images based on the positioning ultrasound data, and control the display 118 to display the preview ultrasound image in real-time. The preview ultrasound images may be single plane images, bi-plane images, three-plane images, multi-plane images, curved plane images, 3D renderings, or the like.
According to an embodiment, the scan guidance information may be an indicator of an image quality. For example, the indicator may be a discrete value of an image quality (e.g., 90%, 50%, 7, 10, etc.), a representation of an image quality (e.g., a variable number of bars, a variable brightness, a variable status, or the like), or the like. As an example, and as shown in
According to an embodiment, the scan guidance information may be an indicator of a level of alignment between an anatomical structure and an alignment reference. For example, the indicator may be a discrete value of a level of alignment (e.g., 90%, 50%, 7, 10, etc.), a representation of a level of alignment (e.g., a variable number of bars, a variable brightness, a variable status, or the like), or the like. As an example, and as shown in
According to an embodiment, the scan guidance information may be an indicator of a direction in which to move the ultrasound probe 102 to acquire improved, or optimal, 3D ultrasound data for determining a measurement of the optic nerve sheath. For example, the indicator may indicate whether to move the ultrasound probe 102 vertically, move the ultrasound probe 102 horizontally, tilt the ultrasound probe 102, rotate the ultrasound probe 102, or the like. As an example, and as shown in
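By way of a non-limiting illustration, the following sketch maps a lateral offset between the optic nerve centerline and the acquisition-volume centerline to coarse probe-movement hints; the axis and sign conventions are assumptions made for illustration:

```python
def guidance_direction(offset_xy, tol_m=1e-3):
    """Translate the lateral offset (meters) of the nerve centerline from
    the acquisition-volume centerline into coarse movement hints. The
    convention that +x is the probe's right and +y is a tilt-up direction
    is assumed for illustration."""
    dx, dy = offset_xy
    hints = []
    if abs(dx) > tol_m:
        hints.append("slide probe right" if dx > 0 else "slide probe left")
    if abs(dy) > tol_m:
        hints.append("tilt probe up" if dy > 0 else "tilt probe down")
    return hints or ["hold position"]

print(guidance_direction((2e-3, -0.5e-3)))  # ['slide probe right']
```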
The processor 116 may control the display 118 to display the scan guidance information, generate updated scan guidance information based on updated positioning ultrasound data as the operator moves the ultrasound probe 102 based on the scan guidance information to acquire the updated positioning ultrasound data, and control the display 118 to display the updated scan guidance information. In this way, an operator of the ultrasound probe 102 may ascertain whether a position of the ultrasound probe 102 is suitable for acquiring 3D ultrasound data for determining a measurement of the optic nerve sheath, and may move the ultrasound probe 102 to an improved, or optimal, position based on the scan guidance information indicating that the improved, or optimal, position exists. Although
Referring to
According to another embodiment, the processor 116 may control the ultrasound probe 102 to automatically acquire the 3D ultrasound data for determining a measurement of the optic nerve sheath. For instance, the processor 116 may control the ultrasound probe 102 to automatically acquire the 3D ultrasound data without having received a user input to acquire the 3D ultrasound data.
As shown in
As further shown in
As further shown in
As further shown in
Although
Referring to
As further shown in
As shown in
According to an embodiment, the processor 116 may determine a centerline of the optic nerve based on the 3D path of the optic nerve. The processor 116 may determine the centerline using an image analysis technique in association with the 3D ultrasound data. According to an embodiment, the processor 116 may control the ultrasound probe 102 to acquire 3D B-mode ultrasound data, and determine the centerline of the optic nerve based on the 3D B-mode ultrasound data. According to an embodiment, the processor 116 may control the ultrasound probe 102 to acquire 3D color flow Doppler ultrasound data, and determine the centerline of the optic nerve based on the 3D color flow Doppler ultrasound data. The technique to determine the 3D path of the optic nerve may include region growing segmentation, eigenanalysis to identify tubular structures, or the like.
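By way of a non-limiting illustration, the following sketch shows one simple image-analysis approach to estimating such a 3D path: taking the intensity-weighted centroid of the dark (hypoechoic) nerve region in each depth slice of a 3D B-mode volume. The thresholding scheme and the (z, y, x) volume layout are assumptions made for illustration:

```python
import numpy as np

def nerve_centerline(volume, threshold):
    """Approximate the 3D nerve path as the intensity-weighted centroid of
    the hypoechoic region in each (y, x) slice of a (z, y, x) volume."""
    path = []
    for z, slice_ in enumerate(volume):
        mask = slice_ < threshold              # nerve appears dark in B-mode
        if not mask.any():
            continue
        ys, xs = np.nonzero(mask)
        w = threshold - slice_[ys, xs]         # darker voxels weigh more
        path.append((z, np.average(ys, weights=w), np.average(xs, weights=w)))
    return np.asarray(path)                    # (N, 3) polyline, voxel units
```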
As further shown in
As further shown in
According to an embodiment, the processor 116 may three-dimensionally segment the optic nerve sheath using a segmentation technique in association with the 3D ultrasound data. The segmentation technique may include edge detection, region-based segmentation, clustering-based segmentation, surface models that deform to edges, neural network-based segmentation, or the like. According to an embodiment, the processor 116 may generate a plurality of ultrasound images based on the 3D ultrasound data, and three-dimensionally segment the optic nerve sheath using the ultrasound images.
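By way of a non-limiting illustration, the following sketch shows a basic 3D region-growing segmentation of the kind listed above, accepting 6-connected neighbors whose intensity lies within a band around a seed voxel; the seed location and band half-width are assumptions made for illustration:

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Grow a binary mask from a seed voxel (z, y, x), accepting 6-connected
    neighbors whose intensity is within tol of the seed intensity."""
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < volume.shape[0] and 0 <= n[1] < volume.shape[1]
                    and 0 <= n[2] < volume.shape[2] and not mask[n]
                    and abs(volume[n] - seed_val) <= tol):
                mask[n] = True
                queue.append(n)
    return mask
```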
According to another embodiment, the processor 116 may three-dimensionally segment the optic nerve sheath using the segmentation AI model inference 124. For example, the processor 116 may input the 3D ultrasound data into the segmentation AI model inference 124, and obtain the three-dimensionally segmented optic nerve sheath based on an output of the segmentation AI model inference 124. In this case, the segmentation AI model inference 124 may be configured to three-dimensionally segment the optic nerve sheath directly from the 3D ultrasound data.
According to another embodiment, the processor 116 may three-dimensionally segment the optic nerve sheath using a structural model. The structural model may be a circular cylinder, a curved cylinder having a smoothly varying radius, an elliptical cylinder, or the like. The processor 116 may use an image analysis technique and the structural model to three-dimensionally segment the optic nerve sheath. By using the structural model, the processor 116 may regularize, or adjust, the optic nerve sheath in the ultrasound image. For example, the processor 116 may generate an ultrasound image in which the optic nerve sheath appears substantially straight despite the physical optic nerve sheath being curved. The processor 116 may align the various planes with respect to the center of the optic nerve. That is, the intersection of the planes may be the center of the optic nerve. As another example, the processor 116 may control the display 118 to display a preview ultrasound image that follows a 3D path of the optic nerve. In this case, a single plane image might not show the entirety of the optic nerve because of the curvature of the optic nerve. As such, by controlling the display 118 to display the ultrasound preview image corresponding to the curved plane, the processor 116 may facilitate the acquisition of more accurate ultrasound images for determining the measurement of the optic nerve sheath.
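By way of a non-limiting illustration, the following sketch shows one way to produce an image in which a curved nerve appears substantially straight: resampling the volume on planes orthogonal to the centerline (a simple curved planar reformation). The fixed up-vector used to build each in-plane frame is an assumption and degenerates where the centerline tangent is parallel to it:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straighten(volume, centerline, half_width=20):
    """Resample a (z, y, x) volume on planes orthogonal to the centerline
    (an (N, 3) array of voxel coordinates) so the nerve appears straight
    in the stacked output."""
    centerline = np.asarray(centerline, dtype=float)
    up = np.array([0.0, 1.0, 0.0])              # assumed frame up-vector
    r = np.arange(-half_width, half_width + 1)
    gu, gv = np.meshgrid(r, r, indexing="ij")
    slabs = []
    for i in range(len(centerline) - 1):
        t = centerline[i + 1] - centerline[i]   # local tangent
        t /= np.linalg.norm(t)
        u = np.cross(t, up)
        u /= np.linalg.norm(u)                  # in-plane axis 1
        v = np.cross(t, u)                      # in-plane axis 2
        pts = (centerline[i][:, None, None]
               + gu[None] * u[:, None, None]
               + gv[None] * v[:, None, None])   # (3, H, W) sample coordinates
        slabs.append(map_coordinates(volume, pts, order=1))
    return np.stack(slabs)                      # straightened image stack
```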
According to another embodiment, the processor 116 may segment the optic nerve sheath using a 2D ultrasound image. For example, the processor 116 may generate a 2D ultrasound image based on the 3D ultrasound data, and segment the optic nerve sheath using the 2D ultrasound image.
As further shown in
The processor 116 may determine a measurement plane based on the 3D path of the optic nerve. For example, the processor 116 may determine a measurement plane based on a centerline of the optic nerve, a curvature of the optic nerve, or the like. According to an embodiment, the processor 116 may determine a long-axis measurement plane that is planar with the centerline of the optic nerve.
As further shown in
According to an embodiment, the processor 116 may determine a measurement of the optic nerve sheath in a long-axis plane. For example, the processor 116 may determine a measurement of the optic nerve sheath orthogonally to the centerline of the optic nerve at a measurement position in a long-axis plane. The processor 116 may determine a distance between two portions of the optic nerve sheath, and determine a diameter, a radius, an area, a volume, etc., of the optic nerve sheath based on the distance.
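By way of a non-limiting illustration, the following sketch estimates the distance between the two sheath boundaries in a long-axis plane by stepping outward from a centerline point along a unit vector orthogonal to the centerline until a binary sheath mask is exited; the mask representation and step size are assumptions made for illustration:

```python
import numpy as np

def sheath_diameter(mask, center_rc, normal_rc, step=0.25, max_steps=400):
    """Walk from a centerline point (row, col) outward in both directions
    along a unit normal within the long-axis plane until the binary sheath
    mask is exited; the summed walked distances approximate the diameter
    in pixel units."""
    center_rc = np.asarray(center_rc, float)
    normal_rc = np.asarray(normal_rc, float)
    total = 0.0
    for sign in (1.0, -1.0):
        for k in range(1, max_steps):
            r, c = center_rc + sign * k * step * normal_rc
            ri, ci = int(round(r)), int(round(c))
            inside = (0 <= ri < mask.shape[0] and 0 <= ci < mask.shape[1]
                      and mask[ri, ci])
            if not inside:                      # boundary crossing found
                total += k * step
                break
    return total
```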
According to another embodiment, the processor 116 may determine a measurement of the optic nerve sheath in a short-axis plane. For example, the processor 116 may determine a measurement of the optic nerve sheath orthogonally to the centerline of the optic nerve in a short-axis plane. In this case, the short-axis plane may correspond to the measurement position. The processor 116 may determine a distance between two portions of the optic nerve sheath, and determine a diameter, a radius, an area, a volume, etc., of the optic nerve sheath based on the distance. Additionally, or alternatively, the processor 116 may use a structural model (e.g., a circle, an ellipse, or the like) to determine the measurement of the optic nerve sheath. For example, the processor 116 may fit the structural model to correspond to the optic nerve sheath, and determine the measurement of the optic nerve sheath based on a measurement of the structural model. As shown in
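By way of a non-limiting illustration, the following sketch fits a circle (one of the structural models mentioned above) to boundary points of the segmented sheath in a short-axis plane using a least-squares (Kasa) fit, and reads the diameter off the fitted model:

```python
import numpy as np

def fit_circle_diameter(points_xy):
    """Least-squares (Kasa) circle fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2
    for center (cx, cy) and c = r^2 - cx^2 - cy^2, then return the diameter
    in the same units as the input boundary points."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return 2.0 * radius
```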
As shown in
As further shown in
As further shown in
As further shown in
As further shown in
As further shown in
As further shown in
As further shown in
Although
As shown in
As further shown in
As further shown in
According to an embodiment, the processor 116 may apply a respective weight value to each of the plurality of measurements, and determine the measurement of the optic nerve sheath based on applying the respective weight values. The processor 116 may determine a weight value for a measurement based on a scan quality metric of the underlying 3D ultrasound data that was used to determine the measurement. Additionally, or alternatively, the processor 116 may determine a weight value based on a quality of the three-dimensionally segmented optic nerve sheath. Additionally, or alternatively, the processor 116 may determine a weight value based on predetermined weight values to apply to various respective measurement planes.
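By way of a non-limiting illustration, the following sketch combines per-plane measurements using weights derived from scan quality metrics, one of the weighting options described above; the normalization scheme is an assumption made for illustration:

```python
import numpy as np

def combined_measurement(measurements, quality_metrics):
    """Weighted average of per-plane measurements, with weights taken from
    the scan quality metric of the underlying data and normalized to sum
    to one."""
    w = np.asarray(quality_metrics, dtype=float)
    w /= w.sum()
    return float(np.dot(w, measurements))

onsd = combined_measurement([5.1, 5.3, 4.9], [0.9, 0.7, 0.8])  # e.g., in mm
```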
According to an embodiment, the processor 116 may determine the measurement based on a plurality of measurements corresponding to a plurality of measurement planes of a same axis view of the optic nerve sheath. For example, the processor 116 may determine the measurement based on a plurality of measurements in a plurality of measurement planes corresponding to a long-axis view of the optic nerve sheath. As another example, the processor 116 may determine the measurement based on a plurality of measurements in a plurality of measurement planes corresponding to a short-axis view of the optic nerve sheath. According to another embodiment, the processor 116 may determine the measurement based on a plurality of measurements corresponding to a plurality of measurement planes of different axis views of the optic nerve sheath. For example, the processor 116 may determine the measurement based on a measurement in a measurement plane corresponding to a long-axis view of the optic nerve sheath and a measurement in a measurement plane corresponding to a short-axis view of the optic nerve sheath.
Although
As shown in
As shown in
According to an embodiment, the information related to the measurement of the optic nerve sheath may be a value of a measurement of the optic nerve sheath. For example, the information related to the measurement may be a particular value of the measurement of the optic nerve sheath (e.g., the average value, the maximum value, etc.), a plurality of values for a plurality of measurement planes, or the like. As an example, and as shown in
According to another embodiment, the information related to the measurement of the optic nerve sheath may be an ultrasound image associated with the measurement. For example, the ultrasound image may correspond to the measurement plane in which the measurement was determined. The processor 116 may perform image processing on the ultrasound image to enhance contrast, enhance resolution, reduce speckle, or the like. An operator of the ultrasound imaging system 100 may provide a user input via the user input device 114 that selects a particular ultrasound image to be displayed. Based on the user input, the processor 116 may control the display 118 to display the particular ultrasound image. According to an embodiment, the ultrasound image may include an indicator. For example, the indicator may be an indicator of an anatomical structure, an indicator of the measurement position, or the like. Additionally, or alternatively, the ultrasound image may include the three-dimensionally segmented optic nerve sheath. An operator of the ultrasound imaging system 100 may provide a user input via the user input device 114 that modifies a segmentation of the optic nerve sheath. For example, the user input may include a point in a particular ultrasound image corresponding to a correction of the optic nerve sheath segmentation at that location. Based on the user input, the processor 116 may determine an updated measurement of the optic nerve sheath based on the modified segmentation, and control the display 118 to display the updated measurement of the optic nerve sheath.
According to another embodiment, the information related to the measurement of the optic nerve sheath may be an indicator of a change in a size of the optic nerve sheath from previous measurements. For example, the indicator of the change in the size of the optic nerve sheath may be a particular value by which the size of the optic nerve sheath has changed (e.g., +0.5 millimeters (mm), −0.1 mm, etc.). Additionally, or alternatively, the indicator of the change in the size of the optic nerve sheath may be an indicator identifying whether the size has increased, decreased, or remained substantially constant. For example, as shown in
According to another embodiment, the information related to the measurement of the optic nerve sheath may be a diagnosis of the subject. For example, the diagnosis may be a diagnosis of a primary injury of the subject, a secondary injury of the subject, a diagnosis of an elevated ICP level, or the like. The diagnosis may indicate a severity of the injury. For example, the diagnosis may indicate “normal,” “severe,” “mild,” or the like. As an example, and as shown in
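By way of a non-limiting illustration, the following sketch maps a measurement to a coarse severity label using thresholds; the cutoff values are illustrative placeholders rather than clinical guidance or an output of the diagnosis AI model inference 128:

```python
def severity_label(onsd_mm, mild_mm=5.0, severe_mm=6.0):
    """Map an optic nerve sheath diameter (mm) to a coarse label using
    assumed, illustrative cutoffs."""
    if onsd_mm < mild_mm:
        return "normal"
    return "mild" if onsd_mm < severe_mm else "severe"
```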
Training data 1710 may include one or more of stage inputs 1720 and known outcomes 1730 related to the AI model inference to be trained. The stage inputs 1720 may be from any applicable source including text, images, data, values, comparisons, stage outputs, or the like. The known outcomes 1730 may be included for the AI model inferences generated based on supervised or semi-supervised training. An unsupervised AI model may not be trained using known outcomes 1730. Known outcomes 1730 may include known or desired outputs for future inputs similar to or in the same category as stage inputs 1720 that do not have corresponding known outputs.
The training data 1710 and a training algorithm 1750 may be provided to a training component 1760 that may apply the training data 1710 to the training algorithm 1750 to generate the AI model inference. According to an embodiment, the training component 1760 may be provided comparison results 1740 that compare a previous output of the corresponding AI model inference to apply the previous result to re-train the AI model inference. The comparison results 1740 may be used by the training component 1760 to update the corresponding AI model inference. The training algorithm 1750 may utilize AI networks and/or models including, but not limited to, a deep learning network (e.g., Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), Recurrent Neural Networks (RNNs), or the like), probabilistic models (e.g., Bayesian Networks, Graphical Models, or the like), classifiers (e.g., K-Nearest Neighbors, Naïve Bayes, or the like), discriminative models (e.g., Decision Forests, maximum margin methods, or the like), models specifically discussed in the present disclosure, or the like.
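By way of a non-limiting illustration, the following sketch shows a minimal supervised training loop of the kind the training component 1760 may perform, pairing stage inputs 1720 with known outcomes 1730 and adjusting model weights to reduce a loss; the architecture, loss function, and optimizer are assumptions made for illustration:

```python
import torch
from torch import nn

def train(model, stage_inputs, known_outcomes, epochs=10, lr=1e-3):
    """Supervised training sketch: compare predictions to known outcomes
    and adjust the model weights by gradient descent. Re-training with
    comparison results can reuse the same loop on the new pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()               # e.g., regressing a sheath diameter
    for _ in range(epochs):
        for x, y in zip(stage_inputs, known_outcomes):
            opt.zero_grad()
            loss = loss_fn(model(x), y)  # prediction vs. known outcome
            loss.backward()              # gradients w.r.t. model weights
            opt.step()                   # adjust the weights
    return model
```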
An AI model inference used herein may be trained and/or used by adjusting one or more weights and/or one or more layers of the AI model inference. For example, during training, a given weight may be adjusted (e.g., increased, decreased, removed, etc.) based on training data or input data. Similarly, a layer may be updated, added, or removed based on training data and/or input data. The resulting outputs may be adjusted based on the adjusted weights and/or layers.
In light of the foregoing, some embodiments of the present disclosure provide the ultrasound imaging system 100 that improves the accuracy, efficiency, speed, and reliability of ultrasound data acquisition and optic nerve sheath measurement, and provide an improvement to the technical field of ultrasound imaging. For instance, some embodiments of the present disclosure provide the ultrasound imaging system 100 that may acquire 3D ultrasound data of an optic nerve sheath, and determine a measurement of the optic nerve sheath using the 3D ultrasound data. Further, some embodiments of the present disclosure provide the ultrasound imaging system 100 that may utilize the scan guidance AI model inference 122, the segmentation AI model inference 124, the measurement AI model inference 126, and/or the diagnosis AI model inference 128. In this way, some embodiments of the present disclosure provide the ultrasound imaging system 100 that improves the speed and efficiency of acquiring accurate ultrasound data that reflects the true anatomy of the optic nerve sheath, that reduces the sensitivity of alignment of the ultrasound probe with respect to the optic nerve sheath, that reduces the need for a skilled operator of the ultrasound probe, that improves the accuracy of optic nerve sheath measurement, and that permits optic nerve sheath measurement in a variety of time-limited and resource-limited settings.
Embodiments of the present disclosure shown in the drawings and described above are example embodiments only and are not intended to limit the scope of the appended claims, including any equivalents as included within the scope of the claims. Various modifications are possible and will be readily apparent to a person skilled in the art. It is intended that any combination of non-mutually exclusive features described herein is within the scope of the present invention. That is, features of the described embodiments can be combined with any appropriate aspect described above and optional features of any one aspect can be combined with any other appropriate aspect. Similarly, features set forth in dependent claims can be combined with non-mutually exclusive features of other dependent claims, particularly where the dependent claims depend on the same independent claim. Single claim dependencies may have been used as practice in some jurisdictions require them, but this should not be taken to mean that the features in the dependent claims are mutually exclusive.