Many existing vendors offer software that creates and displays images, including color or grayscale images that appear to be three dimensional. These images are typically based on data describing a volume of material or human tissue. In the field of medical imaging, devices such as CT, MRI, PET, and ultrasound scanners can generate data describing a volume of human or animal tissue. It is common for caregivers to display these volumes in a manner such that one or more images appear to be three dimensional, using techniques such as volume rendering and surface shading. In addition, such software may enable the user to perform multi-planar reconstructions, produce maximum intensity pixel displays, or display grayscale or color slabs of various thicknesses and orientations.
When faced with a display of a three dimensional image, the user may want to rotate the image about any one of three axes or combinations thereof. Often, this rotation is controlled by an input device such as a mouse. For example, depression of a left mouse button combined with movement of the mouse from left to right might control rotation of the image from left to right (rotation about the y axis), similar movement of the mouse away from or toward the user might control tilting of the image away from or toward the perspective of the user (rotation about the x axis), and a sweeping movement of the mouse around the perimeter of the image in a near circular motion might control rotation of the image about the z axis. However, such mouse movements may be ambiguous, so that a user intending to rotate an image about the z axis may instead accidentally cause a rotation about the x axis (possibly in combination with movement about the z and/or y axes). Furthermore, a mouse may control other actions, such as cropping of the image in various planes, so that mouse movements intended to cause rotation may instead result in inadvertent cropping or other actions.
The systems and methods of the present disclosure may provide, among other features, easy-to-learn, efficient, and/or unambiguous methods for controlling rotation and/or other manipulation of multi-dimensional, for example, 2D (two-dimensional) and/or 3D (three-dimensional), images and/or objects. The systems and methods may be used for any type of image display/manipulation on a wide variety of computer systems and coupled displays, including personal computers with monitors, phones, tablets, and televisions. In general, a user may select a particular rotation plane (for example, rotation only about the x axis) by placement of a cursor, or touch of a finger, over a certain portion of the image, such that subsequent movements of the mouse (or other input device) result only in rotations in that particular plane, and unwanted rotations and/or other manipulations in other planes do not occur. In this way, the user can more precisely control rotations of the 3D image and/or object.
In an embodiment, a tangible computer readable medium is described that stores software instructions configured for execution by a computing system having one or more hardware processors in order to cause the computing system to perform operations comprising: displaying a 3D medical object on a display of the computing system; receiving a first input from a user of the computing system at a particular location of the display, the first input comprising a touch input or a mouse click at the particular location and indicating initiation of a rotation function; accessing rotation rules associated with the 3D medical object, the rotation rules indicating planes of rotation available for rotating the 3D medical object based on the particular location of the first input; in response to determining that the particular location is to a side of the display, limiting rotation of the 3D medical object to rotations about a horizontal axis of the 3D medical object, such that the computing system does not allow rotation of the 3D medical object about any other axis until the rotation function is released; in response to determining that the particular location is to a top or bottom of the display, limiting rotation of the 3D medical object to rotations about a vertical axis of the 3D medical object, such that the computing system does not allow rotation of the 3D medical object about any other axis until the rotation function is released; in response to determining that the particular location is near the center of the display, limiting rotation of the 3D medical object to rotations about both the horizontal and vertical axes, such that the computing system does not allow rotation of the 3D medical object about any other axis until the rotation function is released; and receiving a second input from the user in order to initiate rotation of the 3D medical object about one or more of the horizontal and vertical axes.
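As a rough illustration of this region-based axis locking, consider the following Python sketch. The disclosure does not provide an implementation, so the class names, the coordinate conventions, and the 20% edge margin used to define the "side," "top or bottom," and "center" regions are illustrative assumptions only:

```python
from dataclasses import dataclass
from enum import Flag, auto


class Axis(Flag):
    """Axes about which rotation may be permitted."""
    NONE = 0
    HORIZONTAL = auto()  # x axis: tilting toward/away from the viewer
    VERTICAL = auto()    # y axis: spinning left/right


@dataclass
class Display:
    width: int
    height: int


def allowed_axes(display: Display, x: float, y: float,
                 margin: float = 0.2) -> Axis:
    """Map the location of the first input (touch or click) to the set of
    rotation axes that remain available until the rotation function is
    released. The 20% edge margin is an illustrative assumption."""
    edge_w = display.width * margin
    edge_h = display.height * margin
    if x < edge_w or x > display.width - edge_w:
        return Axis.HORIZONTAL               # input near a side of the display
    if y < edge_h or y > display.height - edge_h:
        return Axis.VERTICAL                 # input near the top or bottom
    return Axis.HORIZONTAL | Axis.VERTICAL   # input near the center
```

Under these assumptions, a press near the left edge would return Axis.HORIZONTAL, locking out rotation about any other axis until the press is released.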
According to an aspect, the tangible computer readable medium may further comprise, in response to determining that the particular location is to a side of the display, displaying one or more horizontal guide lines on the display, wherein the horizontal guide lines correspond to the horizontal axis; in response to determining that the particular location is to the top or bottom of the display, displaying one or more vertical guide lines on the display, wherein the vertical guide lines correspond to the vertical axis; and in response to determining that the particular location is near the center of the display, displaying one or more horizontal and vertical guide lines on the display, wherein the horizontal guide lines correspond to the horizontal axis and the vertical guide lines correspond to the vertical axis.
According to another aspect, the tangible computer readable medium may further comprise adjusting characteristics of the 3D medical object based at least in part on the rotation function.
In another embodiment, a computer-implemented method of manipulating a multi-dimensional object in an electronic environment is described comprising, as implemented by one or more computer systems comprising computer hardware and memory, the one or more computer systems configured with specific executable instructions, providing to a user, on an electronic display, a multi-dimensional object; receiving, from the user, a rotation selection input; determining a restriction on the manipulation of the multi-dimensional object based at least in part on the rotation selection input; receiving, from the user, an object manipulation input; and manipulating the multi-dimensional object based at least in part on both the object manipulation input and the restriction on the manipulation.
According to an aspect, the computer-implemented method may further comprise, in response to an input from the user, displaying, on the electronic display, one or more guide lines indicating one or more available rotation selection inputs, wherein the received rotation selection input is selected from the one or more available rotation selection inputs.
According to another aspect, the input from the user comprises at least one of: movement of a cursor or touching of the electronic display near the multi-dimensional object, movement of a cursor or touching of the electronic display on the multi-dimensional object, movement of a cursor or touching of the electronic display near one or more of the guide lines, movement of a cursor or touching of the electronic display on one or more of the guide lines, and movement of a cursor to or touching of a particular portion of the electronic display.
According to yet another aspect, displaying the one or more guide lines comprises determining boundaries of the multi-dimensional object; and displaying the one or more guide lines based on the determined boundaries of the multi-dimensional object, wherein the one or more guide lines do not overlap with the multi-dimensional object.
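A minimal sketch of such boundary-aware guide placement, assuming the object's boundaries are reduced to an axis-aligned bounding box, might look as follows; the function name and the fixed 8-pixel padding are illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:
    left: float
    top: float
    right: float
    bottom: float


def guide_line_positions(obj_bounds: BoundingBox, display_w: float,
                         display_h: float, padding: float = 8.0):
    """Place vertical guide lines just outside the left/right edges of the
    object and horizontal guide lines just outside its top/bottom edges,
    clamped to the display, so the guides never overlap the object."""
    vertical_x = [
        max(0.0, obj_bounds.left - padding),
        min(display_w, obj_bounds.right + padding),
    ]
    horizontal_y = [
        max(0.0, obj_bounds.top - padding),
        min(display_h, obj_bounds.bottom + padding),
    ]
    return vertical_x, horizontal_y
```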
According to another aspect, the characteristics of the one or more guide lines are dynamically adjusted to allow the user to easily recognize the one or more guide lines, wherein the characteristics comprise at least one of a position, a spacing, and a thickness.
According to yet another aspect, the guide lines are removed from the electronic display in response to receiving the object manipulation input from the user.
According to another aspect, the rotation selection input comprises at least one of: placing a cursor or touching the electronic display at a particular portion of the multi-dimensional object, placing a cursor or touching the electronic display at one or more of the guide lines, placing a cursor or touching the electronic display at an arced icon, pressing a button on a mouse while placing a cursor in proximity to one or more of the guide lines, and touching a particular portion of the electronic display.
According to yet another aspect, the restriction on the manipulation of the multi-dimensional object comprises at least one of: allowing rotation of the multi-dimensional object on only one particular axis, and allowing rotation of the multi-dimensional object on only two particular axes.
According to another aspect, receiving an object manipulation input comprises the user sliding a finger from one location on the electronic display to another location on the electronic display.
According to yet another aspect, the multi-dimensional object comprises a 3D object, wherein manipulating the 3D object comprises, in response to the user touching the electronic display on a side of the display and sliding the finger vertically, rotating the 3D object on an x axis; in response to the user touching the electronic display on a top or bottom of the display and sliding the finger horizontally, rotating the 3D object on a y axis; and in response to the user touching the electronic display near a middle and sliding the finger in any direction, rotating the 3D object on at least one of both the x axis and the y axis, and a z axis, wherein the rotation of the 3D object is proportional to a distance the finger is slid.
According to another aspect, receiving an object manipulation input comprises the user moving an electronic indicator from one location on the electronic display to another location on the electronic display.
According to yet another aspect, the multi-dimensional object comprises a 3D object, wherein manipulating the 3D object comprises, in response to the user making a selection with the electronic indicator on a side of the display and moving the electronic indicator vertically, rotating the 3D object on an x axis; in response to the user making a selection with the electronic indicator on a top or bottom of the display and moving the electronic indicator horizontally, rotating the 3D object on a y axis; in response to the user making a selection with the electronic indicator near a middle and moving the electronic indicator in any direction, rotating the 3D object on at least one of both the x axis and the y axis, and a z axis; and in response to the user selecting an icon with the electronic indicator and moving the electronic indicator in any direction, rotating the 3D object on at least one of both the x axis and the y axis, and a z axis, wherein the rotation of the 3D object is proportional to a distance the electronic indicator is moved.
According to another aspect, the electronic indicator comprises at least one of a mouse pointer and a cursor.
According to yet another aspect, receiving an object manipulation input comprises the user touching the electronic display at two or more locations, and manipulating the multi-dimensional object comprises, in response to the user touching the electronic display in two places, and sliding the two places in a substantially same direction, translating the multi-dimensional object; in response to the user touching the electronic display in two places, and sliding the two places in substantially opposite directions, changing the size of the multi-dimensional object; and in response to the user touching the electronic display in two places, and rotating the two places, rotating the multi-dimensional object.
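One plausible way to discriminate among these three two-touch gestures is to compare the line segment joining the two contact points at the start and end of the gesture, as sketched below; the thresholds and the start/end simplification are illustrative assumptions rather than details from the disclosure:

```python
import math

Point = tuple[float, float]


def classify_two_touch(start: tuple[Point, Point],
                       end: tuple[Point, Point],
                       angle_thresh: float = 0.15,
                       scale_thresh: float = 0.1) -> str:
    """Classify a two-finger gesture as 'translate', 'scale', or 'rotate'.
    Thresholds (radians / relative length change) are illustrative."""
    (a0, b0), (a1, b1) = start, end

    def vec(p, q):
        return (q[0] - p[0], q[1] - p[1])

    v0, v1 = vec(a0, b0), vec(a1, b1)   # segment joining the two contacts
    d0 = math.hypot(*v0)
    d1 = math.hypot(*v1)
    # Angle swept by the line joining the two contact points.
    angle = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])
    angle = (angle + math.pi) % (2 * math.pi) - math.pi

    if abs(angle) > angle_thresh:
        return "rotate"      # the two contacts revolve around each other
    if d0 > 0 and abs(d1 - d0) / d0 > scale_thresh:
        return "scale"       # contacts move apart/together: resize
    return "translate"       # contacts move the same way: pan
```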
According to another aspect, manipulating the multi-dimensional object includes at least one of adjusting characteristics of the multi-dimensional object and adjusting viewing properties of the multi-dimensional object, wherein viewing properties include at least one of a window level and a window width.
According to yet another aspect, manipulating the multi-dimensional object includes rotating the multi-dimensional object along particular axes, wherein the particular axes are defined based on correlation with characteristics of the multi-dimensional object.
In yet another embodiment, a computer system is described comprising one or more hardware processors in communication with a computer readable medium storing software modules including instructions that are executable by the one or more hardware processors, the software modules including at least: a user interface module configured to display a multi-dimensional object on an electronic display; a user input module configured to receive a first input from a user at a particular location of the electronic display, the first input comprising a mouse click or touch input at the particular location and indicating initiation of a rotation function; and an object rotation module configured to access rotation rules associated with the multi-dimensional object, the rotation rules indicating planes of rotation available for rotating the multi-dimensional object based on the particular location of the first input, the object rotation module further configured to: in response to determining that the particular location is near a vertical guide line, limit rotation of the multi-dimensional object to rotations about a horizontal axis of the multi-dimensional object, such that the object rotation module does not allow rotation of the multi-dimensional object about any other axis until the rotation function is released; in response to determining that the particular location is near a horizontal guide line, limit rotation of the multi-dimensional object to rotations about a vertical axis of the multi-dimensional object, such that the object rotation module does not allow rotation of the multi-dimensional object about any other axis until the rotation function is released; in response to determining that the particular location is near a particular icon, limit rotation of the multi-dimensional object to rotations about an axis perpendicular to a surface of the electronic display, such that the object rotation module does not allow rotation of the multi-dimensional object about any other axis until the rotation function is released; and receive, via the user input module, a second input from the user in order to initiate rotation of the multi-dimensional object about one or more of the horizontal, vertical, and perpendicular axes.
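The object rotation module's rule lookup might be sketched as follows; the mapping of vertical guide lines, horizontal guide lines, and the icon to locked axes follows the text above, while the class shape and the 10-pixel proximity tolerance are illustrative assumptions:

```python
from enum import Enum


class RotationAxis(Enum):
    HORIZONTAL = "x"      # selected near a vertical guide line
    VERTICAL = "y"        # selected near a horizontal guide line
    PERPENDICULAR = "z"   # selected near the arced icon


class ObjectRotationModule:
    """Sketch of the rotation-rule lookup described above."""

    def __init__(self, tolerance: float = 10.0):
        self.tolerance = tolerance
        self.locked_axis: RotationAxis | None = None

    def begin_rotation(self, pos, vertical_guides, horizontal_guides,
                       icon_center) -> RotationAxis | None:
        x, y = pos
        if any(abs(x - gx) <= self.tolerance for gx in vertical_guides):
            self.locked_axis = RotationAxis.HORIZONTAL
        elif any(abs(y - gy) <= self.tolerance for gy in horizontal_guides):
            self.locked_axis = RotationAxis.VERTICAL
        elif icon_center is not None:
            ix, iy = icon_center
            if abs(x - ix) <= self.tolerance and abs(y - iy) <= self.tolerance:
                self.locked_axis = RotationAxis.PERPENDICULAR
        return self.locked_axis  # stays locked until the function is released

    def release(self):
        self.locked_axis = None
```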
The following aspects of the disclosure will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings.
The systems and methods of the present disclosure may provide, among other features, easy-to-learn, efficient, and/or unambiguous methods for controlling rotation and/or other manipulation of multi-dimensional (for example, 2D and/or 3D) images and/or objects. Although the description and examples discussed herein relate to medical images, the systems and methods could be used for any type of image display/manipulation on a wide variety of computer systems and coupled displays including personal computers with monitors, phones, tablets, and televisions. Particular input mechanisms are discussed herein with reference to various example embodiments. However, other input mechanisms are usable. For example, any examples discussed with reference to touch screen inputs may also be implemented using inputs from a mouse, voice commands, gestures, and/or any other type of input. Similarly, any examples discussed with reference to mouse-based inputs may also be implemented using touch inputs from a touch-sensitive device, voice commands, gestures, and/or any other type of input.
In one embodiment, the computing system 150 (discussed in detail below with reference to
As noted above, any other visual cues (for example, other than the illustrated guide lines 210, 310 and guide circle 410) may be used in other embodiments. Additionally, other forms of cues may be provided, such as audible or tactile feedback, to indicate a selected rotation axis.
In one embodiment, rotating the image does not require the user to precisely move the mouse along the path of the various vertical, horizontal, or circular guides. For example, in one embodiment, with rotation around the y axis selected (for example,
User preferences or system preferences may be set to determine when the guide lines appear relative to the position of the cursor. For example, moving the mouse to within 2, 5, or 7 pixels of a midline, or within some percentage of the image height or width, might cause the guide line to appear. In addition, there may be user preferences controlling the appearance of the guide lines (color, thickness, style, shape, arrows, etc.) or other icons that appear in relation to this invention.
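A sketch of such a proximity preference check follows; the parameter names and the 5-pixel default are illustrative assumptions:

```python
def should_show_guide(cursor_x: float, midline_x: float, image_width: float,
                      pixel_threshold: float = 5.0,
                      percent_threshold: float | None = None) -> bool:
    """Return True when the cursor is close enough to the midline that the
    guide line should pop up. The preference may be expressed in pixels
    (e.g., 2, 5, or 7) or as a percentage of the image width."""
    distance = abs(cursor_x - midline_x)
    if percent_threshold is not None:
        return distance <= image_width * percent_threshold / 100.0
    return distance <= pixel_threshold
```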
In one embodiment, if the mouse is moved to the intersection point or points that would activate the pop-up of two lines or a line plus circle, the user may control two axes of rotation at once.
Therefore, the systems and methods discussed herein provide a simple, intuitive and unambiguous method for rotating images along three axes while retaining quick access to any other tool(s) or function(s) previously in use.
In the embodiment illustrated in
In one embodiment, the user can control which axes are available for rotation of the 3D object based on a screen position from which rotation is initiated. In this embodiment, the screen position may be related to (or referenced from) a display frame displayed on the screen rather than the entire screen itself. For example, a screen might contain two or more display frames which display 3D images. By touching within a display frame the user may both indicate the active display frame and select the axis of rotation, as above, by first touching a position along the top, bottom, or side of the display frame.
In one embodiment, rather than choosing the restricted axis of rotation by first touching the top, bottom, or side of a display frame or screen, the user may choose the restricted axis of rotation by touching adjacent to an object. For example, touching the screen to the left or right of the object may indicate that rotation is to be restricted to the x axis, and touching above or below an object may indicate that rotation is to be restricted to the y axis.
While the systems and methods discussed herein refer to rotations about an x axis, y axis, and/or z axis, in other embodiments rotations may be about other axes. For example, in one embodiment a user can select an axis of rotation that is not directly aligned with an x, y, or z axis of the image or object. Similarly, in some embodiments the software may be configured to define axes that correlate with and/or are otherwise related to characteristics of an object to be rotated. For example, in one embodiment the available axes of rotation may vary based on the type of 3D volume containing the object or objects being viewed, such as a volume acquired by an MRI, CT, or ultrasound scanner. In another embodiment, the axes of rotation may be related to a structure within the 3D volume. For example, a radiologist viewing a 3D display of a patient's spine from a volumetric CT scan may prefer that rotations are performed about a long axis of the spine for the purpose of efficiently interpreting the exam. Due to asymmetric patient positioning or anatomical asymmetry within the patient, the patient's spine may be obliquely oriented with regard to the acquired imaging volume. Thus, in an embodiment, the imaging volume may be rotated so that the x, y, and z axes are aligned relative to the patient's anatomy, such as to allow the patient's spine to be aligned along the y axis. In other embodiments, the imaging volume may not be rotated, but a new rotational axis (e.g., not the x, y, or z axis) may be determined (either manually by the user or automatically by the computing system) in order to allow rotations about a different axis (such as the long axis of the patient's spine). In addition, in some embodiments, the x, y, and/or z axes may relate to a collection of objects, a single object, a single object within a collection of objects, and/or a camera angle.
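The disclosure does not specify how rotation about such an oblique axis would be computed. Rodrigues' rotation formula is one standard way to rotate about an arbitrary axis; the sketch below assumes NumPy, and the spine-axis direction is invented purely for illustration:

```python
import numpy as np


def rotation_matrix(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rodrigues' rotation formula: rotate by `angle` radians about an
    arbitrary unit `axis`, e.g., the long axis of a patient's spine when
    that axis is oblique to the acquired imaging volume."""
    axis = axis / np.linalg.norm(axis)
    k = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])  # cross-product matrix of axis
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)


# Example: a spine axis tilted slightly away from the volume's y axis.
spine_axis = np.array([0.1, 1.0, 0.05])
R = rotation_matrix(spine_axis, np.deg2rad(15))
rotated_voxel_coord = R @ np.array([10.0, 0.0, 0.0])
```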
In this embodiment, the image can be rotated in both the x and y axes by initiating rotation with a finger touch, or cursor movement (or other predefined input), within a central region of the user interface, such as is illustrated in
In other embodiments, selection of the axis-specific rotation functions may be performed in different manners. For example, a first gesture may be performed on a touch sensitive display in order to select rotation around only the x axis (for example,
In the embodiment of
In the example of
In some embodiments, all three operations described in reference to
In some embodiments, rather than adjusting the view of an object, such as discussed above (for example, changing a view rotation, magnification, translation, etc.), the user interface systems and methods discussed herein may also be used to adjust actual properties of objects in 3D space. For example, objects that are drawn in a computer aided drafting (CAD) application (or any other image generation/modification application) may be resized or rotated using the methods discussed above such that the actual properties of the objects in 3D space are modified.
In one embodiment, the methods discussed could be used to simultaneously select both the object to manipulate, and the rotation axis to lock. For example, touching the screen near an object may both select the object for manipulation and lock the rotation axis based on the touch position relative to the nearest object.
In one embodiment, touching the edges of the display frame may be used to select a mode in which the view of the entire collection of objects may be changed. In this embodiment, the position at which the screen is touched may be used to select how the scene should be rotated, using, for example, the method described with reference to
In addition to adjusting the rotation, position, zoom level, etc. of objects in the manner discussed above, in some embodiments the user interface functionality discussed above can be used to adjust other properties of images or objects, including viewing properties (for example, how stored objects are displayed without necessarily changing the actual stored object) and/or object properties (for example, actually changing characteristics of an image or 3D object). In general, the systems and methods discussed herein may be used to select and/or modify properties of images or objects that can be changed in response to characteristics of an initial touch (or other predefined command, such as a mouse click, voice command, gesture, etc.) with a user interface, while keeping other properties locked. For example,
In this example, touching the display at a side of the object (e.g., the head anatomy depicted in the medical image of
In
Finally, in
As noted above, the systems and methods described herein may provide, among other features, easy-to-learn, efficient, and/or unambiguous methods for controlling rotation and/or other manipulation of 2D and/or 3D images and/or objects. In addition, various embodiments of the systems and methods provided herein provide one or more of the following advantages:
1. Restricted Object Rotation
In some embodiments, when rotation is allowed only about a single axis (for example, rotation is restricted to be about a single axis) using any of the methods discussed above, movement of a cursor (or finger in a touchscreen embodiment) in only a single direction causes rotation and/or other manipulation about that selected axis. For example, with reference to
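As a rough sketch of this restriction, the cursor or finger motion can be projected onto the single direction that drives the locked rotation, discarding the orthogonal component; the 0.5 degrees-per-pixel gain and the axis naming are illustrative assumptions:

```python
def restricted_rotation_angle(dx: float, dy: float, locked_axis: str,
                              degrees_per_pixel: float = 0.5) -> float:
    """With rotation locked to one axis, only the motion component that
    drives that rotation contributes; the orthogonal component is ignored.
    Rotation is proportional to the distance moved."""
    if locked_axis == "y":        # y-axis spin: only horizontal motion counts
        return dx * degrees_per_pixel
    if locked_axis == "x":        # x-axis tilt: only vertical motion counts
        return dy * degrees_per_pixel
    raise ValueError(f"unsupported locked axis: {locked_axis!r}")
```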
2. Guide Line Display with Cursor Proximity to Object
In some embodiments, guide lines appear in response to positioning and/or movement of a cursor (or finger in a touchscreen embodiment) over or near (for example, within a predetermined number of pixels of) an object to be rotated or other portion of a display, such as the x and y axes (whether visually displayed or not) or border of a user interface. This is in contrast to embodiments having the guide lines constantly displayed once a 3D rotation tool is selected. As discussed above, the various guide lines may appear as the user moves a cursor (or a finger touch on a display screen) to different areas surrounding an image to be rotated. Additionally, guide lines that show only the currently available axis (or axes) of rotation may be displayed to avoid confusion regarding the currently selected rotation axis.
3. Guide Line Display with Cursor Proximity to Image Axis
In some embodiments, guide lines appear as a user approaches an image axis (for example, the x or y axis of an image). For example, when the cursor (or finger touch) approaches the x axis (for example, is within five, or some other number of, pixels of the x axis), a guide line may appear. In one embodiment, such rotation guide lines may appear and be available for rotation of an image when the user has another image manipulation tool selected (for example, the system does not require a 3D rotation tool to be selected in order to cause the guide lines to appear and rotations to be implemented).
In one embodiment, with a guide line displayed, the user activates a 3D rotation mode by providing another action, such as a left mouse click. With the 3D rotation mode selected, the user can initiate 3D rotations with movements, such as those discussed above. In one embodiment, once the user activates the 3D rotation mode, the selected guide line(s) disappears (or changes one or more display characteristics). In such embodiments, the 3D rotation mode may still be active (for example, until the user releases the left mouse button), but the guide line does not obfuscate the image. In other embodiments, the selected guide line disappears (or changes one or more display characteristics) in response to the cursor moving away from the guide line (for example, moving more than five pixels away from the x axis).
4. Guide Line Display Non-Overlapping with Object
In some embodiments, guide lines that indicate the currently available axis (or axes) of rotation are displayed outside of the actual image to be rotated. For example, with reference to
At block 1002, a 3D object is displayed to the user. The 3D object may be displayed on, for example, a touch screen display or any other type of display as described above. At block 1004, an input is received from the user that indicates the type of rotation initiated by the user. For example, in an embodiment, the user may touch a portion of the display indicating that the user wishes to limit rotations to horizontal rotations, or the user may touch another portion of the display indicating that the user wishes to limit rotations to vertical rotations, as described in reference to
At block 1006, guide lines related to the axis or axes of rotation, or other type of movement and/or manipulation, may optionally be displayed to the user. Display of the guide lines may be accomplished as generally described above. For example, in an embodiment a guide line may be displayed allowing the user to select a particular axis of rotation by, for example, touching, clicking on, and/or rolling over the guide line.
At block 1008, the axis or axes of rotation are determined based on the user input. For example, rotation about a y axis may be determined based on a user touching the bottom or top of the display. In another example, rotation about a z axis may be determined based on the user clicking on, or rolling over, an arced icon with a mouse pointer. Various other axes of rotation may be determined as described above.
At block 1010, rotation input is received from the user. For example, the user may slide a finger across a section of the display, and/or move a mouse pointer along a guide line. At block 1012, the 3D object is rotated based on the received rotation input. For example, the 3D object may be rotated about a horizontal or x axis as the user slides a finger vertically up or down the display. Alternatively, the movement of a mouse pointer may be received, causing rotation of the 3D object. Rotation of the 3D object may be limited or restricted based on the determined axis or axes of rotation of block 1008.
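Tying blocks 1002 through 1012 together, a hypothetical event-handler sketch follows, reusing the allowed_axes helper and Axis flags sketched earlier; the Renderer interface and the rotation gain are illustrative stand-ins, not part of the disclosure:

```python
class RotationSession:
    """Sketch of the flow in blocks 1002-1012: display the object, lock the
    axes from the first input, optionally show guides, then apply rotation
    input. `renderer` is a hypothetical drawing back end."""

    def __init__(self, renderer, display):
        self.renderer = renderer
        self.display = display
        self.axes = None

    def on_press(self, x, y):                    # blocks 1004-1008
        self.axes = allowed_axes(self.display, x, y)
        self.renderer.show_guides(self.axes)     # block 1006 (optional)

    def on_drag(self, dx, dy):                   # blocks 1010-1012
        if self.axes is None:
            return
        self.renderer.hide_guides()              # avoid obscuring the object
        if Axis.VERTICAL in self.axes:
            self.renderer.rotate_y(dx * 0.5)     # horizontal drag -> y spin
        if Axis.HORIZONTAL in self.axes:
            self.renderer.rotate_x(dy * 0.5)     # vertical drag -> x tilt

    def on_release(self):
        self.axes = None                         # unlock for the next gesture
```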
As described above, in an embodiment, guide lines may be displayed that indicate to the user the axis of rotation. In this embodiment, guide lines may be displayed, for example, concurrently with block 1010. In an embodiment, guide lines may be removed from the display once rotation input is received from the user so as to not obscure the 3D object as it is rotated.
In various embodiments, and as described above, other actions may be taken at, for example, block 1012. For example, in an embodiment characteristics of the 3D object may be altered at block 1012 rather than, or in addition to, rotation of the 3D object. Various other embodiments of the present disclosure may likewise be accomplished in corresponding blocks of
The computing device 150 may take various forms. In one embodiment, the information display computing device 150 may be a computer workstation having modules 151, such as software modules that provide the functionality described above with reference to
In an embodiment, the user interface module may be configured to display a multi-dimensional object on an electronic display, such as a display device 155 described below. In an embodiment, the user input module may be configured to receive inputs from a user at particular locations of the electronic display. The inputs may comprise, for example, mouse clicks or touch inputs at particular locations. The input may further indicate, for example, the initiation of a rotation function. In an embodiment, the object rotation module may be configured to access rotation rules associated with the multi-dimensional object. The rotation rules may indicate, for example, planes of rotation available for rotating the multi-dimensional object based on the particular locations of the inputs, as described in the various embodiments of the present description.
In one embodiment, the information display computing device 150 comprises a server, a desktop computer, a workstation, a Picture Archiving and Communication System (PACS) workstation, a laptop computer, a mobile computer, a smartphone, a tablet computer, a cell phone, a personal digital assistant, a gaming system, a kiosk, an audio player, any other device that utilizes a graphical user interface, including office equipment, automobiles, airplane cockpits, household appliances, automated teller machines, self-service checkouts at stores, information and other kiosks, ticketing kiosks, vending machines, industrial equipment, and/or a television, for example.
The information display computing device 150 may run an off-the-shelf operating system 154 such as Windows, Linux, MacOS, Android, or iOS, or mobile versions of such operating systems. The information display computing device 150 may also run a more specialized operating system designed for the specific tasks performed by the computing device 150, or any other available operating system.
The information display computing device 150 may include one or more computing processors 152. The computer processors 152 may include central processing units (CPUs), and may further include dedicated processors such as graphics processor chips, or other specialized processors. The processors generally are used to execute computer instructions based on the information display software modules 151 to cause the computing device to perform operations as specified by the modules 151. The modules 151 may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. For example, modules may include software code written in a programming language, such as, for example, Java, JavaScript, ActionScript, Visual Basic, HTML, Lua, C, C++, or C#. While “modules” are generally discussed herein with reference to software, any modules may alternatively be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
The information display computing device 150 may also include memory 153. The memory 153 may include volatile data storage such as RAM or SDRAM. The memory 153 may also include more permanent forms of storage such as a hard disk drive, a flash disk, flash memory, a solid state drive, or some other type of non-volatile storage.
The information display computing device 150 may also include or be interfaced to one or more display devices 155 that provide information to the users. Display devices 155 may include a video display, such as one or more high-resolution computer monitors, or a display device integrated into or attached to a laptop computer, handheld computer, smartphone, computer tablet device, or medical scanner. In other embodiments, the display device 155 may include an LCD, OLED, or other thin screen display surface, a monitor, television, projector, a display integrated into wearable glasses, or any other device that visually depicts user interfaces and data to viewers.
The information display computing device 150 may also include or be interfaced to one or more input devices 156 which receive input from users, such as a keyboard, trackball, mouse, 3D mouse, drawing tablet, joystick, game controller, touch screen (for example, capacitive or resistive touch screen), touchpad, accelerometer, video camera and/or microphone.
The information display computing device 150 may also include one or more interfaces 157 which allow information exchange between information display computing device 150 and other computers and input/output devices using systems such as Ethernet, Wi-Fi, Bluetooth, as well as other wired and wireless data communications techniques.
The modules of the information display computing device 150 may be connected using a standard based bus system. In different embodiments, the standard based bus system could be Peripheral Component Interconnect (“PCI”), PCI Express, Accelerated Graphics Port (“AGP”), Micro channel, Small Computer System Interface (“SCSI”), Industrial Standard Architecture (“ISA”) and Extended ISA (“EISA”) architectures, for example. In addition, the functionality provided for in the components and modules of information display computing device 150 may be combined into fewer components and modules or further separated into additional components and modules.
The computing device 150 may communicate and/or interface with other systems and/or devices. In one or more embodiments, the computing device 150 may be connected to a computer network 190. The computer network 190 may take various forms. It may include a wired network or a wireless network, or it may be some combination of both. The computer network 190 may be a single computer network, or it may be a combination or collection of different networks and network protocols. For example, the computer network 190 may include one or more local area networks (LAN), wide area networks (WAN), personal area networks (PAN), cellular or data networks, and/or the Internet.
The computing device 150 may be configured to interface with various networked computing devices via the network 190 in order to provide efficient and useful review of data. The server 210 may include any computing device, such as image acquisition and/or storage devices from which the computing device 150 accesses image data that is usable to generate 3D images for display in the manner discussed above.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
All of the methods and processes described above may be embodied in, and partially or fully automated via, software code modules executed by one or more general purpose computers. For example, the methods described herein may be performed by an Information Display Computing Device and/or any other suitable computing device. The methods may be executed on the computing devices in response to execution of software instructions or other executable code read from a tangible computer readable medium. A tangible computer readable medium is a data storage device that can store data that is readable by a computer system. Examples of computer readable mediums include read-only memory, random-access memory, other volatile or non-volatile memory devices, CD-ROMs, magnetic tape, flash drives, and optical data storage devices.
Many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated.
This application is a continuation of U.S. patent application Ser. No. 13/872,920, filed Apr. 29, 2013, which application claims the benefit of priority under 35 U.S.C. §119(e) from U.S. Provisional Application No. 61/640,553, filed Apr. 30, 2012, titled “DISPLAY OF 3D IMAGES,” the disclosures of each of which are hereby incorporated by reference in their entireties.