The present invention generally relates to user interfaces and methods for interacting with a computer system, and more particularly, to a touch-based user interface and method for interacting with a medical-imaging system.
In the field of medical imaging, prior to the digitization of medical imaging, medical-imaging users (e.g., radiologists) would analyze images printed on physical film and mounted on light boxes, and would use physical devices such as magnifying glasses, rulers, grease pencils and their hands to manipulate the printed medical images in order to interpret and diagnose them. With the digitization of medical imaging, the physical film became a digital image displayable on a computer monitor, and the medical-imaging system became a computer application or collection of computer applications requiring one or more computers to operate. At present, users interact with medical-imaging systems through a keyboard and mouse, and commands to the medical-imaging system are invoked through keyboard and/or mouse interactions.
Requiring interactions to be performed using a keyboard and mouse is not as intuitive as working directly with objects using the hands or other physical instruments (e.g., a ruler or grease pencil). In addition, early computing systems were neither powerful enough nor sufficiently feature-rich to warrant methods of human-computer interaction more efficient than keyboard and/or mouse inputs. With ever-increasing computing power and expanding system capabilities, however, there is a need for additional techniques of interacting with computer systems such that human-computer interaction is not restricted to simple keyboard and/or mouse inputs. A move toward a more natural, intuitive and efficient method of interaction is required.
In light of the foregoing background, exemplary embodiments of the present invention provide an improved apparatus and method for more intuitively and efficiently interacting with a computer system, such as a medical-imaging system. According to one aspect of exemplary embodiments of the present invention, an apparatus is provided that includes a processor configured to receive data representative of points on a touch-sensitive surface with which an object comes into contact to initiate and carry out a trace or movement interaction with the surface. In this regard, the trace is defined by a shape formed by the points, and the movement interaction is defined by movement reflected by the points. The processor is configured to determine, independent of a corresponding display or any media presented thereon, whether the contact is initiated to carry out a trace or a movement interaction based on the data. The contact is initiated to carry out a trace if contact of the object is made and the object is held substantially in place for a period of time, and the determination is made on that basis. The processor is then configured to interpret the data based on the determination to thereby direct interaction with media presented on the corresponding display based on the interpretation, which may be effectuated by directing operation of a software application such as medical imaging software.
More particularly, for example, the processor may be configured to receive data to carry out a trace defined by an S-shape, F-shape, G-shape, K-shape or M-shape. In such instances, the software application may be directed to launch a study-worklist application when the trace is defined by an S-shape, launch a patient finder/search application when the trace is defined by an F-shape, direct an Internet browser to an Internet-based search engine when the trace is defined by a G-shape, launch a virtual keypad or keyboard when the trace is defined by a K-shape, or launch a measurement tool when the trace is defined by an M-shape.
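By way of illustration only, the following Python sketch shows one way such a shape-to-command mapping might be organized; the shape labels, command descriptions and function name are hypothetical placeholders and are not part of the disclosed system.

```python
# Illustrative sketch: mapping recognized trace shapes to commands.
TRACE_COMMANDS = {
    "S": "launch study-worklist application",
    "F": "launch patient finder/search application",
    "G": "direct Internet browser to an Internet-based search engine",
    "K": "launch virtual keypad or keyboard",
    "M": "launch measurement tool",
}

def dispatch_trace(shape: str) -> None:
    """Look up and report the command for a recognized trace shape."""
    command = TRACE_COMMANDS.get(shape)
    if command is None:
        return  # unrecognized trace; ignore
    print(f"Trace '{shape}' -> {command}")
```

In this sketch an unrecognized shape is simply ignored; other policies (e.g., prompting the user) are equally possible.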
Also, for example, the processor may be configured to receive data to carry out a trace defined by an A- or arrow shape, a C-shape or an E-shape, and interpret the data to direct a software application to annotate media presented on the corresponding display, including presentation of an annotations dialog based on the shape defining the trace. In addition, for example, the processor may be configured to receive data to carry out a trace defined by a checkmark-, J- or V-shape, and interpret the data to direct a software application to mark a study including the presented media with a status indicating interaction with the study has been completed. Further, for example, the processor may be configured to receive data to carry out a trace defined by a D-shape, and interpret the data to direct a software application to launch a dictation application. In another example, the processor may be configured to receive data to carry out a movement interaction defined by a two-handed, multiple-finger contact beginning at one side of the touch-sensitive surface and wiping to the other side of the surface, and interpret the data to direct a software application to close open media presented on the corresponding display.
In yet another example, the processor may be configured to receive data to carry out a movement interaction defined by a two-handed, single-finger contact whereby the finger of one hand is anchored substantially in place while dragging the finger of the other hand toward or away from the anchored finger in a substantially horizontal, vertical or diagonal direction. In these instances, the processor may be configured to interpret the data to direct a software application to interactively adjust a contrast of media presented on the corresponding display when the direction is substantially horizontal, adjust a brightness of media presented on the corresponding display when the direction is substantially vertical, or adjust both the contrast and brightness of media presented on the corresponding display when the direction is substantially diagonal. In similar instances, when the software application comprises medical imaging software, the processor may be configured to interpret the data to direct the medical imaging software to interactively adjust a window and/or level of media presented on the corresponding display. That is, the processor may be configured to direct the software to interactively adjust the window when the direction is substantially horizontal, adjust the level when the direction is substantially vertical, or adjust both the window and level when the direction is substantially diagonal.
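The following illustrative sketch shows one possible way of interpreting the drag vector of the moving finger relative to the anchored finger; the angular tolerance, gain and function name are assumptions rather than features of the disclosed system.

```python
import math

def adjust_window_level(dx: float, dy: float, window: float, level: float,
                        gain: float = 1.0, axis_tolerance_deg: float = 20.0):
    """Hypothetical window/level adjustment from the moving finger's drag
    vector (dx, dy) relative to the anchored finger.  A drag within
    axis_tolerance_deg of horizontal adjusts only the window, within the
    tolerance of vertical adjusts only the level, and anything in between
    (diagonal) adjusts both.  The gain and tolerance are assumed values."""
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))  # 0 = horizontal, 90 = vertical
    if angle <= axis_tolerance_deg:           # substantially horizontal
        window += gain * dx
    elif angle >= 90.0 - axis_tolerance_deg:  # substantially vertical
        level += gain * dy
    else:                                     # substantially diagonal
        window += gain * dx
        level += gain * dy
    return window, level
```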
In a further example, the processor may be configured to receive data to carry out a movement interaction defined by a single-handed, multiple-finger contact and dragging in the direction of another object, and interpret the data to direct a software application to perform an action with respect to the other object, such as by moving media presented on the corresponding display to another device or apparatus, software application or display, or directing an action with respect to another device or apparatus, software application or display. Additionally or alternatively, for example, the processor may be configured to receive data to carry out a movement interaction defined by a single or two-handed, multiple-finger contact and release. In this instance, the processor may be configured to interpret the data to direct a software application to open a menu of the software application, the menu being navigable by a user via single-finger contact and release relative to one of a number of options presented in the menu.
In addition to or in lieu of the foregoing, the processor may be further configured to receive data representative of points on the touch-sensitive surface with which a given object comes into contact to carry out an interaction with media presented on the corresponding display. The given object may comprise the same or a different object than that which comes into contact to initiate or carry out the trace or movement interaction. In this regard, the given object may be a first object (e.g., stylus) for effectuating a first type of interaction with the media, a second object (e.g., rectangular object) for effectuating a second type of interaction with the media, or a third object (e.g., closed-shaped object) for effectuating a third type of interaction with the media. The processor may be configured to determine if the given object is the first, second or third object based on the data representative of points on the touch-sensitive surface with which the given object comes into contact, and independent of separate user input. The processor may then be configured to enter a mode for interacting with the media based on the determination if the given object is the first, second or third object.
According to other aspects of exemplary embodiments of the present invention, a method and computer-readable storage medium are provided. Exemplary embodiments of the present invention therefore provide an improved apparatus, method and computer-readable storage medium for interacting with media presented on a display, or otherwise directing operation of a software application. As indicated above, and explained below, exemplary embodiments of the present invention may solve problems identified by prior techniques and provide additional advantages.
Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
FIGS. 2a and 2b are schematic block diagrams of a touch-sensitive surface and a number of objects that may come into contact with that surface to effectuate a trace or movement interaction, according to exemplary embodiments of the present invention;
FIGS. 3a-3h illustrate various exemplary traces that may be interpreted by the apparatus of exemplary embodiments of the present invention;
FIGS. 4a-4g illustrate various exemplary movements that may be interpreted by the apparatus of exemplary embodiments of the present invention; and
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. For example, references may be made herein to directions and orientations including vertical, horizontal, diagonal, right and left; it should be understood, however, that any direction and orientation references are simply examples and that any particular direction or orientation may depend on the particular object, and/or the orientation of the particular object, with which the direction or orientation reference is made. Like numbers refer to like elements throughout.
Referring to
Generally, the apparatus of exemplary embodiments of the present invention may comprise, include or be embodied in one or more fixed electronic devices, such as one or more of a laptop computer, desktop computer, workstation computer, server computer or the like. Additionally or alternatively, the apparatus may comprise, include or be embodied in one or more portable electronic devices, such as one or more of a mobile telephone, portable digital assistant (PDA), pager or the like.
As shown in
In addition to the memory 14, the processor 12 may also be connected to at least one interface or other means for displaying, transmitting and/or receiving data, content or the like. In this regard, the interface(s) may include at least one communication interface 18 or other means for transmitting and/or receiving data, content or the like, such as to and/or from other devices and/or networks coupled to the apparatus. In addition to the communication interface(s), the interface(s) may also include at least one user interface that may include one or more wireline and/or wireless (e.g., Bluetooth) earphones and/or speakers, a display 20, and/or a user input interface 22. The user input interface, in turn, may comprise any of a number of wireline and/or wireless devices allowing the apparatus to receive data from a user, such as a microphone, an image or video capture device, a keyboard or keypad, a joystick, or other input device.
According to a more particular exemplary embodiment, the user input interface 22 may include one or more biometric sensors, and/or a touch-sensitive surface (integral with or separate from a display 20). The biometric sensor(s) may include any apparatus (e.g., image capture device) configured to capture one or more intrinsic physical or behavioral traits of a user of the apparatus such as to enable access control to the apparatus, provide presence information of the user relative to the apparatus, or the like.
Referring to
The touch-sensitive surface 24 may be configured to detect and provide data representative of points on the surface with which one or more objects come into contact (points of contact 26), as well as the size of each point of contact (e.g., through the area of the contact point, the shadow size of the contact point, etc.). These objects may include one or more fingers 28 of one or both hands 30 of a user (or more generally one or more appendages of a user), as well as one or more objects representing instruments otherwise designed for use in paper-based systems. Objects representing instruments may include, for example, a stylus 32, pen or other similarly-shaped object (e.g., felt-tipped cone-shaped object) representing a writing instrument (e.g., grease pencil), a rectangular object 34 representing a ruler, a closed-shaped (e.g., rectangular, circular, etc.) object 36 representing a magnifying glass, or the like.
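Purely as an illustration, the per-contact data reported by such a surface might be organized as in the following sketch; the field names and units are assumptions rather than a prescribed data format.

```python
from dataclasses import dataclass

@dataclass
class ContactPoint:
    """One detected point of contact, as the surface might report it.
    Field names and units are illustrative assumptions."""
    x: float           # position on the surface (e.g., in pixels or mm)
    y: float
    size: float        # contact area or shadow size; helps distinguish a
                       # fingertip from grouped fingers or a larger object
    timestamp_ms: int  # sample time, used for hold/drag detection
```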
In accordance with exemplary embodiments of the present invention, the touch-sensitive surface 24 may be configured to detect points of contact 26 of one or more objects (fingers 28, stylus 32, rectangular object 34, closed-shaped object 36, etc.) with the surface. An accompanying gesture-recognition engine (software application 16), then, may be configured to receive data representative of those points of contact, and interpret those points of contact (including concatenated points of contact representative of a trace 38 as in
Generally, the apparatus 10, including the touch-sensitive surface 24 and gesture-recognition engine (software application 16), is capable of distinguishing between a trace 38 (e.g., drawing the letter G) and a movement 40 or other interaction (e.g., an interaction interpreted similarly to a mouse-click and/or mouse-click-drag). In this regard, the user may touch the surface with a single finger (the surface detecting a point of contact 26), and hold that finger substantially in place for a period of time (e.g., 100 ms) (this interaction may be referred to herein as a "delay-to-gesture" interaction). The gesture-recognition engine, then, may be configured to interpret the point of contact and the holding in position of that point of contact as notification of a forthcoming single-finger gesture trace. The gesture-recognition engine may respond to the notification by directing removal or hiding of a cursor by a graphical user interface (GUI) presented on the display 20 of the apparatus. This, then, may indicate that the apparatus is ready to accept a single-finger trace. The next point of contact or consecutive points of contact, then, may be interpreted by the gesture-recognition engine as a trace instead of a movement interaction.
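An illustrative sketch of such a delay-to-gesture check follows, reusing the hypothetical ContactPoint record sketched above. The 100 ms hold time matches the example given here, while the jitter radius and function name are assumptions.

```python
def classify_initial_contact(samples, hold_ms: float = 100.0,
                             jitter_radius: float = 5.0) -> str:
    """Hypothetical delay-to-gesture check.  `samples` is a time-ordered
    list of ContactPoint objects for a single finger.  If the finger stays
    within `jitter_radius` of its first position for at least `hold_ms`,
    subsequent contact is interpreted as a trace; otherwise the contact is
    treated as a movement interaction.  Both thresholds are assumptions."""
    if not samples:
        return "movement"
    x0, y0, t0 = samples[0].x, samples[0].y, samples[0].timestamp_ms
    for s in samples:
        if ((s.x - x0) ** 2 + (s.y - y0) ** 2) ** 0.5 > jitter_radius:
            return "movement"            # finger moved too far, too soon
        if s.timestamp_ms - t0 >= hold_ms:
            return "trace"               # held substantially in place
    return "movement"
```

In such a scheme the cursor-hiding response described above would simply be triggered whenever this check returns "trace".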
During a trace 38, the gesture-recognition engine may respond by drawing a faint outline of the trace on the display 20 as it is performed, such as to indicate to the user the trace being performed, and that a trace is being performed. During a movement 40, the gesture-recognition engine may respond by drawing a faint symbol on the display near the touch point(s) to indicate to the user the movement being performed, and that a particular movement is being performed (e.g., a faint bullseye symbol may appear under the stationary finger during a window/level gesture, providing feedback to the user that the window/level gesture is being performed).
Reference will now be made to
As shown in
(a) An S-shaped trace directing the medical-imaging software or other appropriate software to launch a study-worklist application (e.g., to recall the list of patients' images to analyze);
(b) An F-shaped trace directing the medical-imaging software or other appropriate software to launch a patient finder/search application;
(c) A G-shaped trace directing the medical-imaging software or other appropriate software to direct an Internet browser to an Internet-based search engine;
(d) A K-shaped trace directing the medical-imaging software or other appropriate software to launch a virtual keypad or keyboard;
(e) Annotation-directed traces directing the medical-imaging software or other appropriate software to annotate an opened image or other document in one or more manners. For example, a trace associated with a particular annotation may direct the software to set a displayed annotations dialog to a particular mode; when one instance of the particular annotation is desired, the user may (after setting the dialog to the respective mode) contact the touch-sensitive surface to form the particular annotation, or when more than one instance is desired, the user may keep one finger in contact with the displayed annotations dialog;
(f) A checkmark-, J-, V- or other similarly-shaped trace directing the medical-imaging software or other appropriate software to mark a study including the presented media with a status indicating that interaction with the study has been completed (e.g., "reported");
(g) An M-shaped trace directing the medical-imaging software or other appropriate software to launch a measurement tool; or
(h) A D-shaped trace directing the medical-imaging software or other appropriate software to launch a dictation application (with which the user may interact, at least in part, via a microphone of the apparatus's user input interface 22).
Similar to single-finger traces 38, single or multiple-finger (from one hand 30 or both hands 30a, 30b) movement 40 interactions may also be interpreted by the gesture-recognition engine (software application 16) into commands or other instructions for directing performance of one or more functions of the apparatus 10 associated with the respective movements. Movement interactions may be considered "interactive" in the sense that they direct performance of functions during the interaction, and/or "command-based" in the sense that they direct performance of function(s) following the interaction (similar to single-finger trace commands); a simplified classification sketch follows the list below. Referring now to
a) A single-finger touching (or other touch resulting in a similar-sized point of contact 26) and dragging in a horizontal or vertical direction within a particular area (e.g., along the right side of the touch-sensitive surface 24) to direct medical-imaging software or other appropriate software to scroll through or within one or more displayed images, documents or other windows in the respective direction;
b) A two-handed, multiple-finger touching (fingers on each hand held together, resulting in points of contact 26′a, 26′b larger than those of a single-finger touching) beginning at one (e.g., right) side of the touch-sensitive surface 24 and wiping to the other (e.g., left) side of the surface, such as for a distance of at least half the width of the surface, to direct the medical-imaging software to close an open study;
c) A single or two-handed, multiple-finger touching (fingers apart from one another, resulting in single-finger-sized points of contact) and dragging apart or together to direct medical-imaging software or other appropriate software to interactively zoom in or out, respectively, within one or more displayed images, documents or other windows;
d) A single-handed, multiple-finger touching (fingers held together) and dragging in any direction to direct medical-imaging software or other appropriate software to interactively pan within one or more displayed images, documents or other windows in the respective direction;
e) A two-handed, single-finger touching whereby the user anchors the finger of one hand substantially in place while dragging the finger of the other hand toward or away from the anchored finger in a horizontal and/or vertical direction, horizontal movement directing medical-imaging software or other appropriate software to interactively adjust the contrast (or, more particularly, the "window" in the context of medical imaging) of one or more displayed images, vertical movement directing the software to interactively adjust the brightness (or "level") of the displayed image(s), and diagonal movement directing the software to interactively adjust both;
f) A single or two-handed, multiple-finger touch (fingers apart from one another resulting in single-finger-sized points of contact) and release (from contact with the touch-sensitive surface 24) to direct medical-imaging software or other appropriate software to open a particular menu, the menu being navigable by the user via single-finger touch and release relative to one of a number of options presented in the menu;
g) A single-handed, multiple-finger touching (fingers held together) and dragging in the direction of another object (including a shortcut or other representation, e.g., an icon, of the other object) to move or "throw" one or more displayed or otherwise active images, documents, software applications, actions or the like to the respective other object; or
h) A single-handed, multiple-finger touching (fingers held together) and dragging in any direction to direct medical-imaging software or other appropriate software to interactively rotate a three-dimensional volume or image in the respective direction, which rotation may or may not continue following the user's dragging of their fingers; this movement is similar to that of the interactive pan, but may be distinguished by the system in the images to which the respective movements are applicable, based on the relative speed of movement, or in a number of other manners.
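As noted above, a simplified and purely illustrative heuristic for distinguishing a few of the listed movements, based only on the number, spacing and relative motion of the points of contact, might look as follows; the thresholds and labels are assumptions and do not represent the disclosed recognition logic.

```python
def _distance(a, b) -> float:
    """Euclidean distance between two contact points (objects with x, y)."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def classify_movement(contacts_start, contacts_end,
                      spread_threshold: float = 40.0) -> str:
    """Hypothetical heuristic distinguishing a few of the movements above
    from the per-finger start and end positions.  Thresholds and labels
    are assumptions, not the disclosed recognition logic."""
    n = len(contacts_start)
    if n == 1:
        return "scroll-or-drag"
    if n == 2:
        d0 = _distance(contacts_start[0], contacts_start[1])
        d1 = _distance(contacts_end[0], contacts_end[1])
        if abs(d1 - d0) > spread_threshold:
            return "zoom-in" if d1 > d0 else "zoom-out"
        moves = [_distance(s, e) for s, e in zip(contacts_start, contacts_end)]
        if min(moves) < spread_threshold / 4 and max(moves) > spread_threshold:
            return "window-level"   # one finger anchored, the other dragged
    return "pan-or-throw"           # grouped multi-finger drag
```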
In the preceding description of “throwing” images, documents, software applications, actions or the like to another object such as a system, this other system may be, for example, a fixed or portable electronic device of another user (e.g., radiologist, cardiologist, technologist, physician, etc.), location or department (e.g., ER). In various instances, another system may be a communications system having the capability to email the images and/or connect to a communications device (e.g., mobile phone) using Voice over IP (VoIP), for example, to connect the user with another user. In either of these instances, the images, documents, software applications, actions or the like may be thrown to the other user such as for a consultation.
Further relative to the "throwing" feature, the object to which the images, documents, software applications, actions or the like are thrown may be predefined, or user-selected in any of a number of different manners. In one exemplary embodiment in which the object comprises the fixed or portable electronic device of another user, location or department, or a communications system configured to communicate with another user, location or department (or their electronic device), the software application may be preconfigured or configurable with one or more destinations where each destination may refer to a user, user type, location or department. In this regard, each destination may be configured into the software with one or more properties. These properties may include or otherwise identify, for example, the users, user types (e.g., "attending," "referring," etc. in the context of a physician user), locations or departments (e.g., "ER" in the context of a hospital department, etc.), as well as one or more contact numbers or addresses (e.g., telephone number, email address, hostname or IP address, etc.) for those users, user types, locations or departments. Additionally, these properties may include, for example, one or more actions such as email, text message, call, upload or download (or otherwise send/receive or transfer), or the like.
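As an illustration only, destination properties of the kind described above might be organized as in the following sketch; the field names and example values are placeholders, not actual destinations or contact details.

```python
from dataclasses import dataclass

@dataclass
class Destination:
    """Hypothetical destination record for the 'throwing' feature; the
    field names and example values are illustrative assumptions."""
    name: str            # e.g., "ER" or a referring physician
    user_type: str       # e.g., "attending", "referring", "department"
    address: str         # telephone number, email address, hostname/IP
    action: str          # e.g., "email", "text", "call", "upload"
    drag_direction: str  # optional compass direction, e.g., "northeast"

# Placeholder examples only.
DESTINATIONS = [
    Destination("ER", "department", "er@example-hospital.test", "email", "north"),
    Destination("Referring MD", "referring", "+1-555-0100", "call", "east"),
]
```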
In various instances, then, implementing the "throwing" feature may include the user performing the aforementioned single-handed, multiple-finger touching (fingers held together) and dragging in the direction of another object, which may correspond to one or more destinations. In instances in which the software is configured for multiple destinations, the other object may be a shortcut directing the medical-imaging software or other appropriate software to display shortcuts or other representations (e.g., icons) of those destinations. The user may then select a desired destination, such as by touching the respective shortcut or speaking a word or phrase associated with the desired destination into a microphone of the apparatus's user input interface 22 (the software in this instance implementing or otherwise communicating with voice-recognition software). Alternatively, the software may initially display shortcuts to the destinations, where the shortcuts may be in different directions relative to the user's multiple-finger touching such that the user may drag their fingers in the direction of a desired destination to thereby select that destination. Additionally or alternatively, the properties of the destinations may include a unique dragging direction (e.g., north, northeast, east, southeast, south, southwest, west, northwest) such that, even without displaying the shortcuts, the user may perform the multiple-finger touching (fingers held together) and dragging in the direction associated with the desired destination to thereby select that destination.
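The direction-based selection described above might be sketched as follows, reusing the hypothetical Destination record above; the eight-way compass mapping and the assumption that the y-axis points upward are illustrative choices.

```python
import math

COMPASS = ["east", "northeast", "north", "northwest",
           "west", "southwest", "south", "southeast"]

def drag_direction(dx: float, dy: float) -> str:
    """Map a drag vector to one of eight compass directions
    (coordinates assumed with +y pointing up)."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return COMPASS[int((angle + 22.5) // 45) % 8]

def select_destination(dx: float, dy: float, destinations):
    """Return the destination (if any) configured for the drag direction."""
    direction = drag_direction(dx, dy)
    for dest in destinations:
        if dest.drag_direction == direction:
            return dest
    return None
```

With this convention a drag up and to the right, for example, resolves to "northeast" and selects whichever destination (if any) was configured with that direction.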
On receiving selection of a destination, the software may perform the action configured for the selected destination, and may perform that action with respect to one or more displayed or otherwise active images, documents, software applications, or the like. For example, the software may email, text message or upload an active image, document, software application or the like to the selected destination; or may call the destination (via an appropriate communications system). In such instances, the destination, or rather the destination device, may be configured to open or otherwise display the received email, text message, image, document, software application or the like immediately, on demand or in response to a periodic polling for new information or data received by the device; or may notify any users in the vicinity of an incoming call.
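A minimal sketch of dispatching the configured action for a selected destination follows; the action names and printed messages are placeholders and do not correspond to a real messaging or telephony API.

```python
def perform_destination_action(dest, item: str) -> None:
    """Hypothetical dispatch of the action configured for the selected
    destination (a Destination record from the earlier sketch)."""
    if dest.action == "email":
        print(f"emailing {item} to {dest.address}")
    elif dest.action == "text":
        print(f"text-messaging a link to {item} to {dest.address}")
    elif dest.action == "call":
        print(f"placing a call to {dest.address}")
    elif dest.action == "upload":
        print(f"uploading {item} to {dest.address}")
```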
In addition to or in lieu of interpreting contact between the touch-sensitive surface 24 and the user's fingers, as indicated above, the apparatus may be configured to interpret contact between the touch-sensitive surface and one or more objects representing instruments otherwise designed for use in paper-based systems (e.g., stylus 32 representing a writing instrument, rectangular object 34 representing a ruler, closed-shaped object 36 representing a magnifying glass, etc.). More particularly, for example, points of contact between the touch-sensitive surface and one or more of these objects may be interpreted to direct medical-imaging software or other appropriate software into a respective mode of operation whereby the respective objects may function in a manner similar to their instrumental counterparts. In this regard, the apparatus may be configured to identify a particular object based on its points of contact (and/or size of those points of contact) with the touch-sensitive surface, and direct the respective application into the appropriate mode of operation. For example, placing the stylus into contact with the touch-sensitive surface may direct the medical-imaging software or other appropriate software into an annotation mode whereby subsequent strokes or traces made with the stylus may be interpreted as electronic-handwriting annotations to displayed images, documents or the like. Also, for example, placing the rectangular object (ruler) into contact with the touch-sensitive surface may direct the medical-imaging software or other appropriate software into a measurement mode whereby the user may touch and release (from contact with the touch-sensitive surface 24) on the ends of the object to be measured to thereby direct the software to present a measurement of the respective object. And placing the closed-shaped object (magnifying glass) into contact with the touch-sensitive surface may direct the medical-imaging software or other appropriate software into a magnification mode whereby an image or other display underlying the closed-shaped object may be magnified.
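The object identification described above might be sketched as follows; the footprint thresholds (in millimetres), the aspect-ratio test and the mode names are assumptions used only to illustrate the idea of classifying an object from its contact footprint.

```python
def identify_object_mode(contact_width: float, contact_height: float) -> str:
    """Hypothetical classification of the contacting object from its
    footprint on the surface, returning a software mode to enter.
    Size thresholds (in millimetres) are illustrative assumptions."""
    long_side = max(contact_width, contact_height)
    short_side = min(contact_width, contact_height)
    if long_side < 3.0:                   # small tip: stylus- or pen-like
        return "annotation"
    if long_side / short_side > 5.0:      # long, narrow footprint: ruler-like
        return "measurement"
    return "magnification"                # larger closed shape: magnifier-like
```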
To further illustrate exemplary embodiments of the present invention, consider the context of a user (e.g., radiologist, cardiologist, technologist, physician, etc.) interacting with a workstation (apparatus 10) operating medical-imaging software (software applications 16) to display and review medical images, and form a diagnosis based on those images. In this exemplary situation, the touch-sensitive surface 24 and display 20 of the workstation are configured to form a touch-sensitive display.
The user may begin interaction with the workstation by logging in, such as via one or more biometric sensors in accordance with an image-recognition technique. Once logged in, the user may perform a delay-to-gesture interaction to indicate a forthcoming trace, and thereafter touch the touch-sensitive display and trace an "S" character (S-shaped trace) to direct the software to recall the list of patients' images to analyze (the "worklist" and/or "studylist").
If the user touches an image on the touch-sensitive display with two fingers and slides those two fingers apart from one another, the software zooms in on the image; if the user slides those two fingers towards one another, the software zooms out on the image.
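As a simple illustration, this interactive zoom can be expressed as the ratio of the current two-finger separation to the separation at touch-down; the function below is a sketch under that assumption.

```python
def zoom_factor(start_separation: float, current_separation: float) -> float:
    """Hypothetical interactive zoom factor: the ratio of the current
    two-finger separation to the separation at touch-down.  Spreading the
    fingers gives a factor > 1 (zoom in); pinching gives < 1 (zoom out)."""
    if start_separation <= 0:
        return 1.0
    return current_separation / start_separation
```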
If the user touches an image on the touch-sensitive display with two fingers and keeps one finger stationary while the other finger moves in a horizontal and/or vertical (horizontal and vertical together forming a diagonal) direction relative to the stationary finger, the window and/or level of the image is interactively adjusted.
If the user touches the touch-sensitive display with a number of fingers and sweeps the user's hand up and to the right across the touch-sensitive display, the patient's images are "thrown" and/or passed to another system, such as a communications system having the capability to email the images and/or connect to a communications device to, in turn, connect the user with another user for a consultation.
Regardless of the interactions made between the user and the workstation, after the user has completed work on the respective patient's images, the user may touch the touch-sensitive display with one finger and trace a "checkmark" (following a delay-to-gesture interaction) so as to direct the software to mark the patient's images complete and "reported."
According to one aspect of the present invention, all or a portion of the apparatus of exemplary embodiments generally operates under control of a computer program. The computer program for performing the methods of exemplary embodiments of the present invention may include one or more computer-readable program code portions, such as a series of computer instructions, embodied or otherwise stored in a computer-readable storage medium, such as a non-volatile storage medium.
It will be understood that each step of a method according to exemplary embodiments of the present invention, and combinations of steps in the method, may be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the step(s) of the method. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement steps of the method. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing steps of the method.
Accordingly, exemplary embodiments of the present invention support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each step or function, and combinations of steps or functions, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
Many modifications and other embodiments of the invention will come to mind to one skilled in the art to which this invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. It should therefore be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
The present application claims priority to U.S. Provisional Patent Application No. 60/989,868, entitled: Touch-Based User Interface for a Computer System and Associated Gestures for Interacting with the Same, filed on Nov. 23, 2007, the content of which is incorporated herein by reference.