The present disclosure pertains to adaptable user interfaces of a medical imaging system, such as an ultrasound imaging system, for example, a user interface that adapts automatically based on prior usage of the user interface.
User interfaces (UI), particularly graphical user interfaces, are a critical aspect of the overall user experience for any operator of a medical imaging system, such as an ultrasound imaging system. Typically, users operate the medical imaging system in a specific way, which can vary between users based on several factors, for example, personal preference (e.g., whether the user follows standard protocols, or relies heavily on time gain compensation), geography, user type (e.g., physician, sonographer), and application (e.g., abdominal, vascular, breast). However, few, if any, current medical imaging systems on the market permit customization of the UI by the user, and none have UIs that adapt over time.
Systems and methods are disclosed that may overcome the limitations of current medical imaging system user interfaces by dynamically modifying (e.g., adapting, adjusting) the presentation of hard and/or soft controls based, at least in part, upon analysis of prior button usage, keystrokes, and/or control-sequencing patterns of one or more users (collectively, usage data). In some applications, the flow of ultrasound procedures may be simplified and more efficient for users over prior art fixed user interface (UI) systems.
As disclosed herein, a UI for a medical imaging system may include a dynamic button layout that allows a user to customize button locations, to show or hide buttons, and to select the page on which buttons will appear. As disclosed herein, a processor or processors may analyze usage data, for example usage data stored in log files including logs of prior keystrokes and/or sequences of control selections, to determine a percentage usage of particular controls (e.g., buttons) and/or a typical order of control usage. In some examples, the processor or processors may implement an artificial intelligence, machine learning, and/or deep learning model that has been trained, for example on previously obtained log files, to analyze usage data (e.g., keystroke and control-sequencing patterns entered by users or user types for a given procedure). The processor or processors may then adjust the dynamic button layout of the UI based on the output of the trained model.
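As a concrete illustration of this kind of analysis, the following sketch (using a hypothetical log format and control names, not the actual log schema of any particular system) shows how per-control usage percentages and a rough typical ordering could be derived from a simple control-selection log:

```python
from collections import Counter

# Hypothetical log: one control selection per entry, in the order the controls were used.
log_entries = [
    "2D", "GAIN", "DEPTH", "FREEZE", "MEASURE", "ANNOTATE",
    "2D", "GAIN", "FREEZE", "MEASURE", "ANNOTATE",
]

counts = Counter(log_entries)
total = sum(counts.values())

# Percentage usage of each control, e.g., to decide which controls to fade or highlight.
usage_pct = {ctrl: 100.0 * n / total for ctrl, n in counts.most_common()}

# A rough "typical order": rank controls by the mean position at which they occur.
positions = {}
for idx, ctrl in enumerate(log_entries):
    positions.setdefault(ctrl, []).append(idx)
typical_order = sorted(positions, key=lambda c: sum(positions[c]) / len(positions[c]))

print(usage_pct)      # percentage of selections per control
print(typical_order)  # controls ranked from earliest to latest typical use
```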
According to at least one example of the present disclosure, a medical imaging system may include a user interface comprising a plurality of controls, each of the plurality of controls configured to be manipulated by a user for changing an operation of the medical imaging system, a memory configured to store usage data resulting from the manipulation of the plurality of controls, and a processor in communication with the user interface and the memory, wherein the processor is configured to receive the usage data, determine, based on the usage data, a first control of the plurality of controls associated with lower frequency of usage than a second control of the plurality of controls, and adapt the user interface based on the frequency of usage by reducing a visibility of the first control, increasing the visibility of the second control, or a combination thereof.
According to at least one example of the present disclosure, a medical imaging system may include a user interface comprising a plurality of controls configured to be manipulated by a user for changing an operation of the medical imaging system, a memory configured to store usage data resulting from the manipulation of the plurality of controls, and a processor in communication with the user interface and the memory, the processor configured to receive the usage data, receive an indication of a first selected control of the plurality of controls, wherein the first selected control is associated with a first function, determine, based at least in part on the usage data and the first function, a next predicted function, and following manipulation of the first control, adapt the user interface by changing the function of one of the plurality of controls to the next predicted function, increasing a visibility of the control configured to perform the next predicted function relative to other controls of the plurality of controls, or a combination thereof.
The following description of certain embodiments is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Moreover, for the purpose of clarity, certain features will not be discussed in detail when they would be apparent to those with skill in the art, so as not to obscure the description of the present system. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present system is defined only by the appended claims.
Medical imaging system users have expressed frustration at the inability to customize user interfaces (UI) of the medical imaging system. Although different users may vary significantly from each other in how they operate a medical imaging system, each user typically follows the same or similar pattern each time they use the medical imaging system, particularly for a same application (e.g., a fetal scan or echocardiogram in ultrasound imaging). That is, for a particular application, a user typically uses the same set of controls, performs the same tasks, and/or performs the same tasks in the same order each time. This is especially true when users follow an imaging technician-driven workflow in which the imaging technician (e.g., a sonographer) performs the imaging examination, which is read at a later time by a reviewing physician (e.g., a radiologist). This workflow-based exam is common in North America.
Users often adjust, or customize, system settings in order to optimize their workflow-based exam. Such customizations can improve the efficiency and quality of the exam for that particular user. Customizations, though, can be time-consuming and/or may be required to be re-entered each time the user initializes a particular system. The inventors have thus recognized that, in addition to, or as an alternative to, permitting users to perform their own customizations of the UI, the medical imaging system may be arranged to “learn” the preferences of the user and automatically adapt the UI to the user's preferences without the user being required to perform the customizations manually. Thus, substantial time and effort may be saved, and quality of the exam may be enhanced.
As disclosed herein, a medical imaging system may analyze and automatically adapt (e.g., adjust, change) the UI of the medical imaging system based, at least in part, on usage data (e.g., keystrokes, patterns of button pushes) collected from one or more users of the medical imaging system. In some examples, the medical imaging system may fade less-used controls on a display. In some examples, the degree of fading may increase over time until the controls are removed from the display. In some examples, less-used controls may be moved further down on a display and/or moved to a second or subsequent page of a menu of the UI. In some examples, highly used controls may be highlighted (e.g., appear brighter or in a different color than other controls). In some examples, the medical imaging system may infer which control the user will select next and highlight the control on the display and/or control panel. In some examples, the medical imaging system may alter the functionality of a soft control (e.g., button on a touch screen) or a hard control (e.g., switch, dial, slider) based on an inference of what control function the user will use next. In some examples, this analysis and adaptation may be provided for each individual user of the medical imaging system. Thus, the medical imaging system may provide a customized, adaptable UI for each user without requiring user manipulation of the system settings. In some applications, automatically adapting the UI may reduce exam time, improve efficiency, and/or provide ergonomic benefits to the user.
The examples disclosed herein are provided in reference to ultrasound imaging systems. However, this is for illustrative purposes only and the adaptable UI and features thereof disclosed herein may be applied to other medical imaging systems.
In some embodiments, the transducer array 114 may be coupled to a microbeamformer 116, which may be located in the ultrasound probe 112, and which may control the transmission and reception of signals by the transducer elements in the array 114. In some embodiments, the microbeamformer 116 may control the transmission and reception of signals by active elements in the array 114 (e.g., an active subset of elements of the array that define the active aperture at any given time).
In some embodiments, the microbeamformer 116 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 118, which switches between transmission and reception and protects the main beamformer 122 from high energy transmit signals. In some embodiments, for example in portable ultrasound systems, the T/R switch 118 and other elements in the system can be included in the ultrasound probe 112 rather than in the ultrasound system base, which may house the image processing electronics. An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface (e.g., processing circuitry 150 and user interface 124).
The transmission of ultrasonic signals from the transducer array 114 under control of the microbeamformer 116 is directed by the transmit controller 120, which may be coupled to the T/R switch 118 and a main beamformer 122. The transmit controller 120 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 114, or at different angles for a wider field of view. The transmit controller 120 may also be coupled to a user interface 124 and receive input from the user's operation of a user control. The user interface 124 may include one or more input devices such as a control panel 152, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
In some embodiments, the partially beamformed signals produced by the microbeamformer 116 may be coupled to a main beamformer 122 where partially beamformed signals from individual patches of transducer elements may be combined into a fully beamformed signal. In some embodiments, microbeamformer 116 is omitted, and the transducer array 114 is under the control of the main beamformer 122 which performs all beamforming of signals. In embodiments with and without the microbeamformer 116, the beamformed signals of the main beamformer 122 are coupled to processing circuitry 150, which may include one or more processors (e.g., a signal processor 126, a B-mode processor 128, a Doppler processor 160, and one or more image generation and processing components 168) configured to produce an ultrasound image from the beamformed signals (e.g., beamformed RF data).
The signal processor 126 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 126 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination. The processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation. The IQ signals may be coupled to a plurality of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data). For example, the system may include a B-mode signal path 158 which couples the signals from the signal processor 126 to a B-mode processor 128 for producing B-mode image data.
The B-mode processor can employ amplitude detection for the imaging of structures in the body. The signals produced by the B-mode processor 128 may be coupled to a scan converter 130 and/or a multiplanar reformatter 132. The scan converter 130 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 130 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format. The multiplanar reformatter 132 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer). The scan converter 130 and multiplanar reformatter 132 may be implemented as one or more processors in some embodiments.
A volume renderer 134 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.). The volume renderer 134 may be implemented as one or more processors in some embodiments. The volume renderer 134 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
In some embodiments, the system may include a Doppler signal path 162 which couples the output from the signal processor 126 to a Doppler processor 160. The Doppler processor 160 may be configured to estimate the Doppler shift and generate Doppler image data. The Doppler image data may include color data which is then overlaid with B-mode (i.e., grayscale) image data for display. The Doppler processor 160 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter. The Doppler processor 160 may be further configured to estimate velocity and power in accordance with known techniques. For example, the Doppler processor may include a Doppler estimator such as an auto-correlator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function. Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques. Other estimators related to the temporal or spatial distributions of velocity, such as estimators of acceleration or temporal and/or spatial velocity derivatives, can be used instead of or in addition to velocity estimators. In some embodiments, the velocity and/or power estimates may undergo further threshold detection to further reduce noise, as well as segmentation and post-processing such as filling and smoothing. The velocity and/or power estimates may then be mapped to a desired range of display colors in accordance with a color map. The color data, also referred to as Doppler image data, may then be coupled to the scan converter 130, where the Doppler image data may be converted to the desired image format and overlaid on the B-mode image of the tissue structure to form a color Doppler or a power Doppler image. In some examples, the scan converter 130 may align the Doppler image and B-mode image.
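For reference, one standard autocorrelation (Kasai) formulation consistent with this description, given here as a textbook expression and not necessarily the exact estimator implemented, computes the mean Doppler frequency, velocity, and power as

\hat{f}_D = \frac{\mathrm{PRF}}{2\pi}\,\arg\{R(1)\}, \qquad \hat{v} = \frac{c\,\hat{f}_D}{2 f_0 \cos\theta}, \qquad \hat{P} = \left|R(0)\right|,

where R(k) is the complex autocorrelation of the slow-time (ensemble) signal at lag k, PRF is the pulse repetition frequency, f_0 is the transmit center frequency, c is the speed of sound, and \theta is the beam-to-flow angle.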
Outputs from the scan converter 130, the multiplanar reformatter 132, and/or the volume renderer 134 may be coupled to an image processor 136 for further enhancement, buffering and temporary storage before being displayed on an image display 138. A graphics processor 140 may generate graphic overlays for display with the images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor may be configured to receive input from the user interface 124, such as a typed patient name or other annotations. The user interface 124 can also be coupled to the multiplanar reformatter 132 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
The ultrasound imaging system 100 may include local memory 142. Local memory 142 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive). Local memory 142 may store data generated by the ultrasound imaging system 100 including ultrasound images, log files including usage data, executable instructions, imaging parameters, training data sets, and/or any other information necessary for the operation of the ultrasound imaging system 100. Although not all connections are shown to avoid obfuscation of the drawing, local memory 142 may be in communication with some or all of the other components of the ultrasound imaging system 100.
As mentioned previously ultrasound imaging system 100 includes user interface 124. User interface 124 may include display 138 and control panel 152. The display 138 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some embodiments, display 138 may comprise multiple displays. The control panel 152 may be configured to receive user inputs (e.g., pre-set number of frames, filter window length, imaging mode). The control panel 152 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others). Hard controls may sometimes be referred to as mechanical controls. In some embodiments, the control panel 152 may additionally or alternatively include soft controls (e.g., GUI control elements, or simply GUI controls such as buttons and sliders) provided on a touch sensitive display. In some embodiments, display 138 may be a touch sensitive display that includes one or more soft controls of the control panel 152.
According to examples of the present disclosure, ultrasound imaging system 100 may include a user interface (UI) adapter 170 that automatically adapts the appearance and/or functionality of the user interface 124 based, at least in part, on usage of the ultrasound imaging system 100 by a user. In some examples, the UI adapter 170 may be implemented by one or more processors and/or application specific integrated circuits. The UI adapter 170 may collect usage data from the user interface 124. Examples of usage data include, but are not limited to, keystrokes, button pushes, other manipulation (e.g., selection) of hard controls (e.g., turning a dial, flipping a switch), screen touches, other manipulation of soft controls, menu selections and navigation, and voice commands. In some examples, additional usage data may be received such as geographical location of the ultrasound machine, type of ultrasound probe used, unique user identifier, type of exam, and/or object imaged by the ultrasound imaging system 100. In some examples, some additional usage data may be provided by a user via the user interface 124, image processor 136, and/or preprogrammed and stored in ultrasound imaging system 100 (e.g., local memory 142).
The UI adapter 170 may perform live capture and analysis of the usage data. That is, the UI adapter 170 may receive and analyze the usage data as the user is interacting with the ultrasound imaging system 100 through the user interface 124. In these examples, the UI adapter 170 may automatically adapt the user interface 124 based, at least in part, on the usage data while the user is interacting with the user interface 124. However, in some examples, the UI adapter 170 may automatically adapt the user interface 124 when the user is not interacting with the user interface 124 (e.g., during a pause in the workflow or at the end of an exam). Alternatively, or in addition to, live analysis, the UI adapter 170 may capture and store the usage data (e.g., as log files in local memory 142) and analyze the stored usage data at a later time. In some examples when usage data is analyzed later, the UI adapter 170 may automatically adapt the user interface 124, but these adaptations may not be provided to the user until the next time the user interacts with the ultrasound imaging system 100 (e.g., the user starts the next step in the workflow, or the next time the user logs into the ultrasound imaging system 100). Additional details of example adaptations of the user interface 124 that the UI adapter 170 may perform are discussed in the examples below.
In some examples, the UI adapter 170 may include and/or implement any one or more machine learning models, deep learning models, artificial intelligence algorithms, and/or neural networks which may analyze the usage data and adapt the user interface 124. In some examples, UI adapter 170 may include a long short-term memory (LSTM) model, a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder neural network, or the like, to adapt the control panel 152 and/or display 138. The model and/or neural network may be implemented in hardware (e.g., neurons are represented by physical components) and/or software (e.g., neurons and pathways implemented in a software application) components. The model and/or neural network implemented according to the present disclosure may use a variety of topologies and learning algorithms for training the model and/or neural network to produce the desired output. For example, a software-based neural network may be implemented using a processor (e.g., single or multi-core CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel-processing) configured to execute instructions, which may be stored in a computer readable medium, and which when executed cause the processor to perform a trained algorithm for adapting the user interface 124 (e.g., determining most and/or least used controls, predicting the next control selected by a user in a sequence, altering an appearance of controls shown on display 138, altering the function of a physical control on control panel 152). In some embodiments, the UI adapter 170 may implement a model and/or neural network in combination with other data processing methods (e.g., statistical analysis).
In various embodiments, the model(s) and/or neural network(s) may be trained using any of a variety of currently known or later developed learning techniques to obtain a model and/or neural network (e.g., a trained algorithm, transfer function, or hardware-based system of nodes) that is configured to analyze input data in the form of screen touches, keystrokes, control manipulations, usage log files, other user input data, ultrasound images, measurements, and/or statistics. In some embodiments, the model and/or neural network may be statically trained. That is, the model and/or neural network may be trained with a data set and deployed on the UI adapter 170. In some embodiments, the model and/or neural network may be dynamically trained. In these embodiments, the model and/or neural network may be trained with an initial data set and deployed on the ultrasound system 100. However, the model and/or neural network may continue to train and be modified based on inputs acquired by the UI adapter 170 after deployment of the model and/or neural network on the UI adapter 170.
Although shown within the user interface 124 in
In some embodiments, various components shown in
The processor 200 may include one or more cores 202. The core 202 may include one or more arithmetic logic units (ALU) 204. In some embodiments, the core 202 may include a floating point logic unit (FPLU) 206 and/or a digital signal processing unit (DSPU) 208 in addition to or instead of the ALU 204.
The processor 200 may include one or more registers 212 communicatively coupled to the core 202. The registers 212 may be implemented using dedicated logic gate circuits (e.g., flip-flops) and/or any memory technology. In some embodiments the registers 212 may be implemented using static memory. The register may provide data, instructions and addresses to the core 202.
In some embodiments, processor 200 may include one or more levels of cache memory 210 communicatively coupled to the core 202. The cache memory 210 may provide computer-readable instructions to the core 202 for execution. The cache memory 210 may provide data for processing by the core 202. In some embodiments, the computer-readable instructions may have been provided to the cache memory 210 by a local memory, for example, local memory attached to the external bus 216. The cache memory 210 may be implemented with any suitable cache memory type, for example, metal-oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology.
The processor 200 may include a controller 214, which may control input to the processor 200 from other processors and/or components included in a system (e.g., control panel 152 and scan converter 130 shown in
The registers 212 and the cache memory 210 may communicate with controller 214 and core 202 via internal connections 220A, 220B, 220C and 220D. Internal connections may be implemented as a bus, multiplexor, crossbar switch, and/or any other suitable connection technology.
Inputs and outputs for the processor 200 may be provided via a bus 216, which may include one or more conductive lines. The bus 216 may be communicatively coupled to one or more components of processor 200, for example the controller 214, cache memory 210, and/or register 212. The bus 216 may be coupled to one or more components of the system, such as display 138 and control panel 152 mentioned previously.
The bus 216 may be coupled to one or more external memories. The external memories may include Read Only Memory (ROM) 232. ROM 232 may be a masked ROM, Electronically Programmable Read Only Memory (EPROM) or any other suitable technology. The external memory may include Random Access Memory (RAM) 233. RAM 233 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology. The external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 235. The external memory may include Flash memory 234. The external memory may include a magnetic storage device such as disc 236. In some embodiments, the external memories may be included in a system, such as ultrasound imaging system 100 shown in
More detailed explanations of examples of adaptations of a UI of an ultrasound imaging system based on usage data from one or more users according to examples of the present disclosure will now be provided.
The control panel 352 may include one or more hard controls a user may manipulate to operate the ultrasound imaging system 300. In the example shown in
As described herein, in some examples, control panel 352 may include a hard control 312 that has a variable function. That is, the function performed by the hard control 312 is not fixed. In some examples, the function of the hard control 312 is altered by commands executed by a processor of ultrasound imaging system 300, for example, UI adapter 170. The processor may alter the function of hard control 312 based, at least in part, on usage data received by the ultrasound imaging system 300 from a user. Based on an analysis of the usage data, the ultrasound imaging system 300 may predict the next function the user will select. Based on this prediction, the processor may assign the predicted next function to the hard control 312. Optionally, in some examples, the analysis of usage data and prediction may be performed by a different processor than a processor that adapts (e.g., changes) the function of hard control 312. In some examples, the hard control 312 may have an initial function assigned based on an exam type and/or a default setting programmed into the ultrasound imaging system 300. In other examples, the hard control 312 may have no function (e.g., inactive) prior to an initial input from the user. Although hard control 312 is shown as a button in
As described with reference to
A user may provide an input to the ultrasound imaging system to select a control, such as touching or otherwise manipulating one of the soft controls 400 and/or manipulating a hard control of a control panel (not shown).
A user may provide a second input to the ultrasound imaging system, such as touching or otherwise manipulating one of the soft controls 400 and/or manipulating a hard control of a control panel. In some examples, the user may touch soft control 402, for example, when the processor correctly predicted the next function selected by the user. Responsive, at least in part, to the user's input, the function of soft control 402 may be changed by the processor to a third function FUNC3 (e.g., annotate, calipers) as shown in panel 405. Again, the third function FUNC3 may be assigned based on analysis of usage data. For example, the function assigned as the third function FUNC3 may be different depending on whether the user used the second function FUNC2 (e.g., the processor made a correct prediction) or selected a different function (e.g., the processor made an incorrect prediction).
A user may provide a third input to the ultrasound imaging system, such as touching or otherwise manipulating one of the soft controls 400 and/or manipulating a hard control of a control panel. In some examples, the user may touch soft control 402. Responsive, at least in part, to the user's input, the function of soft control 402 may be changed by the processor to a fourth function FUNC4 (e.g., update, change depth) as shown in panel 407. Again, the fourth function FUNC4 may be assigned based on analysis of usage data. For example, the function assigned as the fourth function FUNC4 may be different depending on whether the processor provided correct predictions at panels 403 and 405. Although changes in functions of soft control 402 are shown for three user inputs, the function of soft control 402 may be altered for any number of user inputs. Furthermore, in some examples, the user inputs provided may be stored for future analysis by the processor for making predictions of the next function desired by the user during a subsequent exam.
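A minimal sketch of this prediction-driven reassignment is shown below. The control names, transition counts, and helper functions are assumptions made for illustration only; the sketch simply reassigns a single adaptable control to whichever function is most often observed to follow the user's latest selection.

```python
class AdaptableControl:
    """A hard or soft control whose assigned function can be changed at runtime."""

    def __init__(self, initial_function="FREEZE"):
        self.function = initial_function

    def assign(self, function_name):
        self.function = function_name


def predict_next_function(history, transition_counts):
    """Return the function most often observed after the most recently selected function."""
    if not history:
        return None
    followers = transition_counts.get(history[-1])
    if not followers:
        return None
    return max(followers, key=followers.get)


# Hypothetical transition counts mined from prior usage logs.
transition_counts = {
    "FREEZE": {"MEASURE": 40, "ANNOTATE": 10},
    "MEASURE": {"ANNOTATE": 35, "FREEZE": 5},
}

control = AdaptableControl()
history = []
for selected in ["FREEZE", "MEASURE"]:        # controls the user presses during an exam
    history.append(selected)
    predicted = predict_next_function(history, transition_counts)
    if predicted:
        control.assign(predicted)             # adapt the control to the predicted next function
    print(selected, "->", control.function)   # FREEZE -> MEASURE, MEASURE -> ANNOTATE
```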
By altering the function of one or more hard controls (e.g., hard control 312) and/or one or more soft controls (e.g., soft control 402), the user may keep using the hard or soft control for different functions during an exam. In some applications, this may reduce the user's need to search for a control for a desired function on a user interface (e.g., user interface 124, user interface 324). Reduced searching may reduce time and improve efficiency of the exam. In some applications, using a single hard control for multiple functions may improve ergonomics of an ultrasound imaging system (e.g., ultrasound imaging system 100, ultrasound imaging system 300).
In the example shown in
Optionally, in some examples, as shown in
By altering the appearance of lesser used soft controls, such as by fading, a user's attention may be more easily directed to the most frequently used soft controls. This may reduce the time the user searches for a desired control. By removing unused soft controls, clutter on a display of a UI may be reduced, which may make desired controls easier to find. However, merely altering the appearance of lesser used soft controls may be preferable to some users and/or applications because the layout of the UI is unchanged and the lesser used controls are still available for use.
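One way such fading and removal could be realized is sketched below; the thresholds, opacity values, and control names are illustrative assumptions rather than values taken from the disclosed system.

```python
def layout_with_fading(usage_counts, fade_below=0.10, remove_below=0.02):
    """Map each control's share of total usage to an opacity; hide rarely used controls.

    usage_counts: dict mapping control name -> number of times the control was selected.
    Returns a dict mapping control name -> opacity in [0, 1]; omitted controls are hidden.
    """
    total = sum(usage_counts.values()) or 1
    layout = {}
    for control, count in usage_counts.items():
        share = count / total
        if share < remove_below:
            continue                  # effectively removed from the display
        layout[control] = 0.4 if share < fade_below else 1.0  # faded vs. fully visible
    return layout


# Hypothetical usage counts for one user and exam type.
print(layout_with_fading({"GAIN": 120, "DEPTH": 80, "BIOPSY": 1, "DUAL": 9}))
```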
In the example shown in
By altering the appearance of most frequently used soft controls, such as by highlighting, a user's attention may be more easily directed to the most frequently used soft controls. This may reduce the time the user searches for a desired control.
As described with reference to
As shown in panel 701, soft control 700a may initially be highlighted. The first soft control 700 to be highlighted may be based, at least in part, on a particular user that has logged in, a selected exam type, or a default function stored in the ultrasound imaging system (e.g., ultrasound imaging system 100 and/or ultrasound imaging system 300). Alternatively, in some examples, no soft control 700 may be highlighted at panel 701.
A user may provide an input to the ultrasound imaging system, such as touching or otherwise manipulating one of the soft controls 700 and/or manipulating a hard control of a control panel (not shown).
A user may provide a second input to the ultrasound imaging system, such as touching or otherwise manipulating one of the soft controls 700 and/or manipulating a hard control of a control panel. In some examples, the user may touch soft control 700d, for example, when the processor correctly predicted the next function desired by the user. Responsive, at least in part, to the user's input, the highlighted soft control may be changed by the processor, for example, soft control 700c, as shown in panel 705. Again, soft control 700c may be highlighted based on analysis of usage data. For example, the soft control highlighted in panel 705 may be different depending on whether the user used soft control 700d (e.g., the processor made a correct prediction) or selected a different soft control (e.g., the processor made an incorrect prediction).
A user may provide a third input to the ultrasound imaging system, such as touching or otherwise manipulating one of the soft controls 700 and/or manipulating a hard control of a control panel. In some examples, the user may touch soft control 700c. Responsive, at least in part, to the user's input, the highlighted soft control may be changed by the processor, such as soft control 700e, as shown in panel 707. Again, soft control 700e may be highlighted based on analysis of usage data. For example, the soft control highlighted at panel 707 may be different depending on whether the processor provided correct predictions at panels 703 and 705. Although changes in highlighting of soft controls 700 are shown for three user inputs, the highlighting of soft controls 700 may be altered for any number of user inputs. Furthermore, in some examples, the user inputs provided may be stored for future analysis by the processor for making predictions of the next function desired by the user during a subsequent exam.
By highlighting the soft control most likely to be used next by a user, the user may more quickly locate the desired soft control. Furthermore, in protocol-heavy regions, highlighting the soft control most likely to be used next may help prevent the user from inadvertently skipping a step in the protocol.
Although the examples shown in
In the example shown in
In some examples, soft controls 800 may be part of a menu that includes multiple pages as shown by panels 809 and 811 in
By moving more frequently used controls to the top of a display and/or to a first page of a menu, the more frequently used controls may be more visible and easier to find for a user. In some applications, the user may need to spend less time navigating through pages of menus to find a desired control. However, some users may dislike automated rearranging of the soft controls and/or find it disorienting. Accordingly, in some examples, the ultrasound imaging system may allow the user to provide a user input that disables this setting.
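A brief sketch of this reordering and pagination behavior is given below; the page size and control names are assumptions for illustration, and the ordering is simply most used to least used.

```python
def paginate_by_usage(usage_counts, controls_per_page=4):
    """Sort controls from most used to least used and split them across menu pages."""
    ordered = sorted(usage_counts, key=usage_counts.get, reverse=True)
    return [ordered[i:i + controls_per_page]
            for i in range(0, len(ordered), controls_per_page)]


# Hypothetical per-control usage counts.
usage = {"GAIN": 50, "DEPTH": 40, "FOCUS": 5, "ZOOM": 30, "ANNOTATE": 20, "BODYMARK": 2}
for number, page in enumerate(paginate_by_usage(usage), start=1):
    print("Page", number, page)   # page 1 holds the most frequently used controls
```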
Although the example in
In some examples, an ultrasound imaging system may automatically adapt a user interface of the ultrasound imaging system not only based on inputs provided by a user, but also based on what object is being imaged. In some examples, a processor of the ultrasound imaging system, such as image processor 136, may identify anatomy currently being scanned by an ultrasound probe, such as ultrasound probe 112. In some examples, the processor may implement an artificial intelligence/machine learning model trained to identify anatomical features in the ultrasound images. Examples of techniques for identifying anatomical features in ultrasound images may be found in PCT Application PCT/EP2019/084534 filed on Dec. 11, 2019 and entitled “SYSTEMS AND METHODS FOR FRAME INDEXING AND IMAGE REVIEW”. The ultrasound imaging system may adapt the user interface based on the identified anatomical features, for example, by displaying soft controls for functions most commonly used when imaging the identified anatomical features.
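As a simple illustration of adapting the controls to the detected anatomy (the labels and control sets below are hypothetical and are not taken from the referenced application):

```python
# Hypothetical mapping from a detected anatomical label to commonly used controls.
CONTROLS_BY_ANATOMY = {
    "kidney": ["MEASURE_LENGTH", "COLOR_DOPPLER", "ANNOTATE"],
    "heart": ["M_MODE", "PW_DOPPLER", "EJECTION_FRACTION"],
}

def controls_for(detected_label, default=("FREEZE", "GAIN", "DEPTH")):
    """Return the soft controls to display for the anatomy reported by the detection model."""
    return CONTROLS_BY_ANATOMY.get(detected_label, list(default))

print(controls_for("kidney"))     # controls commonly used for renal imaging
print(controls_for("unknown"))    # falls back to a generic control set
```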
Display 900 may provide ultrasound images acquired by an ultrasound probe of the ultrasound imaging system, such as ultrasound probe 112. Display 904 may provide soft controls for manipulation by a user to operate the ultrasound imaging system. However, in other examples (not shown in
On the right-hand side of
Although completely different organs, kidney and heart, were shown in the example in
Automatic detection of the anatomical features being imaged and dynamically adjusting the user interface may allow a user more efficient access to desired controls. Furthermore, for certain exams, such as fetal scans, different tools may be needed for different portions of the exam, so the UI may not be adequately adapted if based solely on exam type.
Although
Furthermore, other adaptations of the UI that do not directly involve the function, appearance, and/or arrangement of hard and/or soft controls may also be performed based on usage data. For example, a processor may adjust default values of the ultrasound imaging system to create a custom preset based, at least in part, on usage data. Examples of default values that may be altered include, but are not limited to, imaging depth, 2D operation, chroma mapping settings, dynamic range, and graymap settings.
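For instance, a custom preset could be built by taking, for each adjustable parameter, the value the user most often settles on; the parameter names and logged values in the sketch below are assumptions for illustration.

```python
from collections import Counter

def build_preset(logged_values):
    """For each parameter, pick the value the user most frequently ended up using."""
    return {param: Counter(values).most_common(1)[0][0]
            for param, values in logged_values.items()}

# Hypothetical values recorded over many exams for one user.
logged = {
    "imaging_depth_cm": [12, 14, 12, 12, 16],
    "dynamic_range_db": [60, 60, 55, 60],
    "chroma_map": ["map_b", "map_b", "map_a"],
}
print(build_preset(logged))   # {'imaging_depth_cm': 12, 'dynamic_range_db': 60, 'chroma_map': 'map_b'}
```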
According to examples of the present disclosure, ultrasound imaging systems may apply one or more techniques for analyzing usage data to provide automatic adaptations of their UIs, such as the example adaptations described above.
As disclosed herein, the ultrasound imaging system may receive and store usage data in a computer readable medium, such as local memory 142. Examples of usage data include, but are not limited to, keystrokes, button pushes, other manipulation of hard controls (e.g., turning a dial, flipping a switch), screen touches, other manipulation of soft controls (e.g., swiping, pinching), menu selections and navigation, and voice commands. In some examples, additional usage data may be received such as geographical location of the ultrasound system, type of ultrasound probe used (e.g., type, make, model), unique user identifier, type of exam, and/or what object is currently being imaged by the ultrasound imaging system. In some examples, usage data may be provided by a user via a user interface, such as user interface 124, a processor, such as image processor 136, the ultrasound probe (e.g., ultrasound probe 112), and/or preprogrammed and stored in ultrasound imaging system (e.g., local memory 142).
In some examples, some or all of the usage data may be written to and stored in computer readable files, such as log files, for later retrieval and analysis. In some examples, a log file may store a record of some or all of a user's interactions with the ultrasound imaging system. The log file may include time and/or sequence data such that the time and/or sequence of the different interactions the user had with the ultrasound imaging system may be determined. Time data may include a time stamp that is associated with each interaction (e.g., each keystroke, each button push). In some examples, the log file may store the interactions in a list in the order the interactions occurred such that the sequence of interactions can be determined, even if no time stamp is included in the log file. In some examples, the log file may indicate a particular user that is associated with the interactions recorded in the log file. For example, if a user logs into the ultrasound imaging system with a unique identifier (e.g., username, password), the unique identifier may be stored in the log file. The log file may be a text file, a spreadsheet, a database, and/or any other suitable file or data structure that can be analyzed by one or more processors. In some examples, one or more processors (e.g., UI adapter 170) of the ultrasound imaging system may collect the usage data and write the usage data to one or more log files, which may be stored in the computer readable medium. In some examples, log files and/or other usage data may be received by the imaging system from one or more other imaging systems. The log files and/or other usage data may be stored in the local memory. The log files and/or other usage data may be received by any suitable method, including wireless (e.g., BlueTooth, WiFi) and wired (e.g., Ethernet cable, USB device) methods. Thus, usage data from one or more users as well as from one or more imaging systems may be used for adapting the UI of the imaging system.
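A minimal illustration of such a record is given below; the field names and JSON-lines format are assumptions made for the sketch rather than the system's actual log schema.

```python
import json
from datetime import datetime, timezone

def log_interaction(log_path, user_id, control, action):
    """Append one interaction to a JSON-lines log file, preserving time and order."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,       # unique identifier of the logged-in user
        "control": control,       # e.g., "FREEZE" button or "depth" dial
        "action": action,         # e.g., "press", "turn", "touch"
    }
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry) + "\n")

log_interaction("usage_log.jsonl", user_id="sonographer_01", control="FREEZE", action="press")
log_interaction("usage_log.jsonl", user_id="sonographer_01", control="MEASURE", action="press")
```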
In some examples, the usage data (e.g., usage data stored in one or more log files) may be analyzed by statistical methods. A graphical depiction of an example of statistical analysis of one or more log files in accordance with examples of the present disclosure is shown in the accompanying drawings.
In some examples, the output 1004 of processor 1000 may be used to adapt a user interface of the ultrasound imaging system (e.g., user interface 124, user interface 324). The user interface may be adapted by processor 1000 and/or another processor of the ultrasound imaging system. For example, the user interface may be adapted such that controls less likely to be selected are faded and/or removed from a display of the user interface as described above.
A graphical depiction of another example of statistical analysis of one or more log files in accordance with examples of the present disclosure is shown in the accompanying drawings.
Based on the output 1104, the processor 1100 may calculate a most likely sequence of control selections by a user. As shown in output 1106, it may be determined that Button B has the highest probability of being selected by a user after Button A is selected by the user and Button C has the highest probability of being selected by the user after Button B has been selected by the user.
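This kind of sequence analysis can be illustrated with a simple first-order transition table; the button sequences below are hypothetical, and the sketch is only one way such probabilities might be estimated.

```python
from collections import Counter, defaultdict

def transition_probabilities(sequences):
    """Estimate P(next control | current control) from observed control sequences."""
    counts = defaultdict(Counter)
    for sequence in sequences:
        for current, nxt in zip(sequence, sequence[1:]):
            counts[current][nxt] += 1
    return {current: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
            for current, followers in counts.items()}

# Hypothetical logged sequences of button selections from several exams.
sequences = [["A", "B", "C"], ["A", "B", "C"], ["A", "C", "B"]]
probs = transition_probabilities(sequences)
print(probs["A"])                            # e.g., {'B': 0.67, 'C': 0.33}
print(max(probs["A"], key=probs["A"].get))   # 'B': the control to pre-assign or highlight next
```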
In some examples, the output 1106 of processor 1100 may be used to adapt a user interface of the ultrasound imaging system (e.g., user interface 124, user interface 324). The user interface may be adapted by processor 1100 and/or another processor of the ultrasound imaging system. For example, the user interface may be adapted such that a function of a hard or a soft control may be changed to the most likely desired function as described above.
The analysis of log files, including the examples of statistical analysis described with reference to
While statistical analysis of log files has been described, in some examples, one or more processors (e.g., UI adapter 170) of an ultrasound imaging system may implement one or more trained artificial intelligence, machine learning, and/or deep learning models (collectively referred to as AI models) for analyzing usage data whether in log files or other formats (e.g., live capture prior to storing in a log file). Examples of models that may be used to analyze usage data include, but are not limited to, decision trees, convolutional neural networks, and long short-term memory (LSTM) networks. In some examples, using one or more AI models may allow for faster and/or more accurate analysis of usage data and/or faster adaptation of a user interface of the ultrasound imaging system responsive to the usage data. More accurate analysis of usage data may include, but is not limited to, more accurate predictions of a next selected control in a sequence, more accurate predictions of the most likely used controls for a particular user during a particular exam type, and/or more accurate determination of an anatomical feature being imaged.
In some examples, inputs to the neural network 1200 provided at the one or more input nodes 1202 may include log files, live capture usage data, and/or images acquired by an ultrasound probe. In some examples, outputs provided at output node 1216 may include a prediction of a next control selected in a sequence, a prediction of controls likely to be used by a particular user, controls likely to be used during a particular exam type, and/or controls likely to be used when a particular anatomical feature is being imaged. In some examples, outputs provided at output node 1216 may include a determination of an anatomical image currently being imaged by an ultrasound probe (e.g., ultrasound probe 112) of the ultrasound imaging system.
The outputs of neural network 1200 may be used by an ultrasound imaging system to adapt (e.g., adjust) a user interface of the ultrasound imaging system (e.g., user interface 124, user interface 324). In some examples, the neural network 1200 may be implemented by one or more processors of the ultrasound imaging system (e.g., UI adapter 170, image processor 136). In some examples, the one or more processors of the ultrasound imaging system (e.g., UI adapter 170) may receive an inference of the controls most used (e.g., manipulated, selected) by a user. Based on the inference, the processor may fade and/or remove lesser used controls (e.g., as described above).
In some examples, the processor may receive multiple outputs from neural network 1200 and/or multiple neural networks that may be used to adapt the user interface of the ultrasound imaging system. For example, the processor may receive an output indicating an anatomical feature currently being imaged by an ultrasound probe (e.g., ultrasound probe 112) of the ultrasound imaging system. The processor may also receive an output indicating controls most typically used by a user when the particular anatomical feature is imaged. Based on these outputs, the processor may execute commands to provide the most typically used controls on a display as described above.
The variable C, running across the top of cell 1300, is the state of the cell. The state of the previous LSTM cell Ct−1 may be provided to cell 1300 as an input. Data can be selectively added or removed from the state of the cell by cell 1300. The addition or removal of data is controlled by three “gates,” each of which includes a separate neural network layer. The modified or unmodified state of cell 1300 may be provided by cell 1300 to the next LSTM cell as Ct.
The variable h, running across the bottom of the cell 1300, is the hidden state vector of the LSTM model. The hidden state vector of the previous cell ht−1 may be provided to cell 1300 as an input. The hidden state vector ht−1 may be modified by a current input xt to the LSTM model provided to cell 1300. The hidden state vector may also be modified based on the state of the cell 1300, Ct. The modified hidden state vector of cell 1300 may be provided as an output ht. The output ht may be provided to the next LSTM cell as a hidden state vector and/or provided as an output of the LSTM model.
Turning now to the inner workings of cell 1300, a first gate (e.g., the forget gate) for controlling a state of the cell C includes a first layer 1302. In some examples, this first layer is a sigmoid layer. The sigmoid layer may receive a concatenation of the hidden state vector ht−1 and the current input xt. The first layer 1302 provides an output ft, which includes weights that indicate which data from the previous cell state should be “forgotten” and which data from the previous cell state should be “remembered” by cell 1300. The previous cell state Ct−1 is multiplied by ft at point operation 1304 to remove any data that was determined to be forgotten by the first layer 1302.
A second gate (e.g., the input gate) includes a second layer 1306 and a third layer 1310. Both the second layer 1306 and the third layer 1310 receive the concatenation of the hidden state vector ht−1 and the current input xt. In some examples, the second layer 1306 is a sigmoid function. The second layer 1306 provides an output it, which includes weights that indicate what data needs to be added to the cell state C. The third layer 1310 may include a tanh function in some examples. The third layer 1310 may generate a vector Ĉt that includes all possible data that can be added to the cell state from ht−1 and xt. The weights it and the vector Ĉt are multiplied together at point operation 1308 to generate a vector that includes the data to be added to the cell state C. The data is added to the cell state C to get the current cell state Ct at point operation 1312.
A third gate (e.g., the output gate) includes a fourth layer 1314. In some examples, the fourth layer 1314 is a sigmoid function. The fourth layer 1314 receives the concatenation of the hidden state vector ht−1 and the current input xt and provides an output ot, which includes weights that indicate what data of the cell state Ct should be provided as the hidden state vector ht of cell 1300. The data of the cell state Ct is turned into a vector by a tanh function at point operation 1316 and is then multiplied by ot at point operation 1318 to generate the hidden state vector/output vector ht. In some examples, the output vector ht may be accompanied by a confidence value, similar to the output of a convolutional neural network, such as the one described above.
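In standard notation, the gate computations just described correspond to the textbook LSTM update equations (given here for reference; the specific weight matrices and biases are learned parameters and are not specified by the present disclosure):

\begin{aligned}
f_t &= \sigma\big(W_f\,[h_{t-1}, x_t] + b_f\big), &
i_t &= \sigma\big(W_i\,[h_{t-1}, x_t] + b_i\big), &
\hat{C}_t &= \tanh\big(W_C\,[h_{t-1}, x_t] + b_C\big),\\
C_t &= f_t \odot C_{t-1} + i_t \odot \hat{C}_t, &
o_t &= \sigma\big(W_o\,[h_{t-1}, x_t] + b_o\big), &
h_t &= o_t \odot \tanh(C_t),
\end{aligned}

where \sigma is the logistic sigmoid, \odot denotes elementwise multiplication (the point operations 1304, 1308, and 1318), and [h_{t-1}, x_t] is the concatenation of the previous hidden state and the current input.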
As pictured in
In some examples where a processor of an ultrasound imaging system (e.g., UI adapter 170) implements an LSTM model, the current input xt may include data related to a control selected by a user and/or other usage data. The hidden state vector ht−1 may include data related to a previous prediction of a control selection by a user. The cell state Ct−1 may include data related to previous selections made by the user. In some examples, output(s) ht of the LSTM model may be used by the processor and/or another processor of the ultrasound imaging system to adapt a user interface of the ultrasound imaging system (e.g., user interface 124, user interface 324). For example, when ht includes predictions of a next control selected by a user, the processor may use the prediction to alter a function of a hard or a soft control as described above.
As described herein, the AI/machine learning models (e.g., neural network 1200 and LSTM including cell 1300) may provide confidence levels associated with one or more outputs. In some examples, a processor (e.g., UI adapter 170) may only adapt a UI of an ultrasound imaging system if the confidence level associated with the output is equal to or above a threshold value (e.g., over 50%, over 70%, over 90%, etc.). In some examples, if the confidence level is below the threshold value, the processor may not adapt the UI. In some examples, this may mean not fading, highlighting, removing, switching, and/or rearranging controls on a display. In some examples, this may mean not changing a function of a hard or soft control (e.g., maintaining the existing function).
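A sketch of this confidence gating is shown below; the threshold value and the callback used to apply the adaptation are assumptions for illustration.

```python
def maybe_adapt_ui(prediction, confidence, apply_adaptation, threshold=0.7):
    """Apply a UI adaptation only when the model's confidence meets the threshold."""
    if confidence >= threshold:
        apply_adaptation(prediction)     # e.g., highlight or reassign a control
        return True
    return False                         # below threshold: leave the UI unchanged


# Example usage with a stand-in adaptation callback.
applied = maybe_adapt_ui("MEASURE", confidence=0.82,
                         apply_adaptation=lambda p: print("highlighting", p))
print(applied)   # True; with confidence=0.55 the UI would be left unchanged
```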
Although a convolutional neural network and a LSTM model have been described herein, these AI/machine learning models have been provided only as examples, and the principles of the present disclosure are not limited to these particular models.
In the examples where the trained model 1420 is used as a model implemented or embodied by a processor of the ultrasound system (e.g., UI adapter 170), the starting architecture may be that of a convolutional neural network, a deep convolutional neural network, or a long short-term memory model in some examples, which may be trained to determine least or most used controls, predict a next likely control selected, and/or determine an anatomical feature being imaged. The training data 1414 may include multiple (hundreds, often thousands or even more) annotated/labeled log files, images, and/or other recorded usage data. It will be understood that the training data need not include a full image or log file produced by an imaging system (e.g., a log file representative of every user input during an exam, or an image representative of the full field of view of an ultrasound probe) but may include patches or portions of log files or images. In various examples, the trained model(s) may be implemented, at least in part, in a computer-readable medium comprising executable instructions executed by a processor or processors of an ultrasound system, e.g., UI adapter 170.
As described herein, an ultrasound imaging system may automatically and/or dynamically change a user interface of the ultrasound imaging system based, at least in part, on usage data from one or more users. However, in some examples, the ultrasound imaging system may allow a user to adjust the user interface. Allowing the user to adjust the user interface may be in addition to or instead of automatically and/or dynamically changing the user interface by the ultrasound imaging system (e.g., by one or more processors, such as the UI adapter 170).
As disclosed herein, an ultrasound imaging system may include a user interface that may be customized by a user. Additionally, or alternatively, the ultrasound imaging system may automatically adapt the user interface based on usage data of one or more users. The ultrasound imaging systems disclosed herein may provide a customized, adaptable UI for each user. In some applications, automatically adapting the UI may reduce exam time, improve efficiency, and/or provide ergonomic benefits to the user.
In various embodiments where components, systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as “C”, “C++”, “C#”, “Java”, “Python”, and the like. Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein. For example, if a computer disk containing appropriate materials, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
In view of this disclosure it is noted that the various methods and devices described herein can be implemented in hardware, software, and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention. The functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system. Certain additional advantages and features of this disclosure may be apparent to those skilled in the art upon studying the disclosure, or may be experienced by persons employing the novel system and method of the present disclosure. Another advantage of the present systems and method may be that conventional medical image systems can be easily upgraded to incorporate the features and advantages of the present systems, devices, and methods.
Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
Filing Document: PCT/EP2021/066325 | Filing Date: 6/17/2021 | Country: WO
Number: 63043822 | Date: Jun 2020 | Country: US