The technology disclosed relates generally to interaction with three-dimensional (3D) data, and in particular to providing a system and method to navigate 3D data on mobile and desktop platforms.
The subject matter discussed in this section should not be assumed to be prior art merely as a result of its mention in this section. Similarly, a problem mentioned in this section or associated with the subject matter provided as background should not be assumed to have been previously recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.
Multidimensional data representations can be very useful in conveying information to a data consumer. But high-dimensional data sets can be difficult to comprehend. A visualization of the data can help with communication of the embedded information, and can be realized through visual representations such as statistical graphics, plots, information graphics, tables, and charts. Data visualization can be defined as the communication of abstract data through interactive visual interfaces, which can be used to present and explore the data. Data represented in two dimensions is relatively easy to consume and explore. Data represented in three dimensions, especially as represented on a two dimensional surface such as a computer display, adds a level of complexity to consumption and exploration. Data in four or more dimensions can be vastly more difficult to understand and manipulate. The technology disclosed describes a plurality of hand gestures that facilitate how a data consumer can traverse high-dimensional data sets constructed by a data source.
The included drawings are for illustrative purposes and serve only to provide examples of structures and process operations for one or more implementations of this disclosure. These drawings in no way limit any changes in form and detail that can be made by one skilled in the art without departing from the spirit and scope of this disclosure. A more complete understanding of the subject matter can be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
The following detailed description is made with reference to the figures. Sample implementations are described to illustrate the technology disclosed, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows.
High-dimensional data sets are common in many fields such as healthcare, information technology, and ecology. For example, data on the health status of patients can include dimensions such as blood analysis over time, genetic background, surgical history, diseases, and pharmacology. The data can be linked so that a viewer can traverse the data based on interest. For example, a viewer might begin a review of a patient's information with their blood analyses, and then choose to review disease information to find an association.
With information technology, high-dimensional data can include planning for hardware and software implementations, utilization of existing systems, and operational support. Operational support can include a call center, with call tracking for hardware failures, hacking incidents, and natural disasters, where each of these categories has its own list of subsets. The technology disclosed can assist with the traversal of a high-dimensional data set, such as one generated by a Salesforce Analytics Cloud, using a device such as a stereoscopic 3D viewer with a smartphone, a pair of 3D glasses with a desktop, Oculus Rift, or Google Cardboard. Navigation is possible in all three dimensions (x, y, z) of a 3D representation of data, as well as forward and backward through a high-dimensional data set. The technology disclosed can include visual feedback as well as haptic feedback through special purpose gloves such as GloveOne.
The technology disclosed comprises three gestures: point and dive to drill into a data set, grab and pluck to initiate a 3D visualization change, and de-clutter to focus the visualization on two of three dimensions on a display. The point and dive gesture allows a viewer to choose a data point within a virtualized 3D representation, and invoke a menu identifying a plurality of options. The grab and pluck gesture allows a viewer to pick a data point from a plurality of data points, and expose details about the data point. The de-clutter gesture allows a viewer to simplify a 3D representation of data to a 2D representation.
As used herein, a given signal, event or value is “based on” a predecessor signal, event or value if the predecessor signal, event or value influenced the given signal, event or value. If there is an intervening processing element, step or time period, the given signal, event or value can still be “based on” the predecessor signal, event or value. If the intervening processing element or step combines more than one signal, event or value, the signal output of the processing element or step is considered “based on” each of the signal, event or value inputs. If the given signal, event or value is the same as the predecessor signal, event or value, this is merely a degenerate case in which the given signal, event or value is still considered to be “based on” the predecessor signal, event or value. “Responsiveness” or “dependency” of a given signal, event or value upon another signal, event or value is defined similarly.
As used herein, the “identification” of an item of information does not necessarily require the direct specification of that item of information. Information can be “identified” in a field by simply referring to the actual information through one or more layers of indirection, or by identifying one or more items of different information which are together sufficient to determine the actual item of information. In addition, the term “specify” is used herein to mean the same as “identify.”
Referring first to
Cameras 102, 104 are preferably capable of capturing video images (i.e., successive image frames at a constant rate of at least 15 frames per second), although no particular frame rate is required. The capabilities of cameras 102, 104 are not critical to the technology disclosed, and the cameras can vary as to frame rate, image resolution (e.g., pixels per image), color or intensity resolution (e.g., number of bits of intensity data per pixel), focal length of lenses, depth of field, etc. In general, for a particular application, any cameras capable of focusing on objects within a spatial volume of interest can be used. For instance, to capture motion of the hand of an otherwise stationary person, the volume of interest can be defined as a cube approximately one meter on a side.
In some implementations, the illustrated system 100A includes one or more sources 108, 110, which can be disposed to either side of cameras 102, 104, and are controlled by sensory-analysis system 106. In one implementation, the sources 108, 110 are light sources. For example, the light sources can be infrared light sources, e.g., infrared light-emitting diodes (LEDs), and cameras 102, 104 can be sensitive to infrared light. Use of infrared light can allow the sensory analysis system 100A to operate under a broad range of lighting conditions and can avoid various inconveniences or distractions that may be associated with directing visible light into the region where the person is moving. However, a particular wavelength or region of the electromagnetic spectrum can be required. In one implementation, filters 120, 122 are placed in front of cameras 102, 104 to filter out visible light so that only infrared light is registered in the images captured by cameras 102, 104. In another implementation, the sources 108, 110 are sonic sources providing sonic energy appropriate to one or more sonic sensors (not shown in
It should be stressed that the arrangement shown in
In operation, light sources 108, 110 are arranged to illuminate a region of interest 112 that includes a control object portion 114 that can optionally hold a tool or other object of interest and cameras 102, 104 are oriented toward the region 112 to capture video images of the hands 114.
The computing environment can also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, a hard disk drive can read or write to non-removable, nonvolatile magnetic media. A magnetic disk drive can read from or write to a removable, nonvolatile magnetic disk, and an optical disk drive can read from or write to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The storage media are typically connected to the system bus through a removable or non-removable memory interface.
Processor 132 can be a general-purpose microprocessor, but depending on implementation can alternatively be a microcontroller, peripheral integrated circuit element, a CSIC (customer-specific integrated circuit), an ASIC (application-specific integrated circuit), a logic circuit, a digital signal processor, a programmable logic device such as an FPGA (field-programmable gate array), a PLD (programmable logic device), a PLA (programmable logic array), an RFID processor, smart chip, or any other device or arrangement of devices that is capable of implementing the actions of the processes of the technology disclosed.
Sensor interface 136 can include hardware and/or software that enables communication between computer system 100B and cameras such as cameras 102, 104 shown in
Sensor interface 136 can also include controllers 147, 149, to which light sources (e.g., light sources 108, 110) can be connected. In some implementations, controllers 147, 149 provide operating current to the light sources, e.g., in response to instructions from processor 132 executing mocap program 144. In other implementations, the light sources can draw operating current from an external power supply, and controllers 147, 149 can generate control signals for the light sources, e.g., instructing the light sources to be turned on or off or changing the brightness. In some implementations, a single controller can be used to control multiple light sources.
Instructions defining mocap program 144 are stored in memory 134, and these instructions, when executed, perform motion-capture analysis on images supplied from cameras connected to sensor interface 136. In one implementation, mocap program 144 includes various modules, such as an object detection module 152, an object analysis module 154, and a gesture recognition module 156. Object detection module 152 can analyze images (e.g., images captured via sensor interface 136) to detect edges of an object therein and/or other information about the object's location. Object analysis module 154 can analyze the object information provided by object detection module 152 to determine the 3D position and/or motion of the object (e.g., a user's hand). Examples of operations that can be implemented in code modules of mocap program 144 are described below. Memory 134 can also include other information and/or code modules used by mocap program 144 such as an application platform 166 that allows a user to interact with the mocap program 144 using different applications like application 1 (App1), application 2 (App2), and application N (AppN).
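The modular pipeline described above can be sketched in code. The following is a minimal, hypothetical illustration of how an object detection module, an object analysis module, and a per-frame driver might be chained; the function names and the toy disparity-based triangulation are assumptions for illustration, not the actual implementation of mocap program 144.

```python
def detect_object(edge_points):
    """Object detection stage: collapse a camera's detected edge points
    to a single 2D centroid (a stand-in for richer location information)."""
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def analyze_object(centroid_a, centroid_b, baseline=0.1):
    """Object analysis stage: estimate a rough 3D position from two camera
    centroids using a toy disparity model (depth = baseline / disparity)."""
    disparity = max(abs(centroid_a[0] - centroid_b[0]), 1e-6)
    return (centroid_a[0], centroid_a[1], baseline / disparity)

def process_frame(edges_cam1, edges_cam2):
    """One pass of the pipeline: detect in each camera, then triangulate."""
    return analyze_object(detect_object(edges_cam1), detect_object(edges_cam2))
```

A gesture recognition module would then consume the stream of 3D positions that `process_frame` produces across successive frames.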
Display 138, speakers 139, keyboard 140, and mouse 141 can be used to facilitate user interaction with computer system 100B. In some implementations, results of gesture capture using sensor interface 136 and mocap program 144 can be interpreted as user input. For example, a user can perform hand gestures that are analyzed using mocap program 144, and the results of this analysis can be interpreted as an instruction to some other program executing on processor 132 (e.g., a web browser, word processor, or other application). Thus, by way of illustration, a user might use upward or downward swiping gestures to “scroll” a webpage currently displayed on display 138, to use rotating gestures to increase or decrease the volume of audio output from speakers 139, and so on.
It will be appreciated that computer system 100B is illustrative and that variations and modifications are possible. Computer systems can be implemented in a variety of form factors, including server systems, desktop systems, laptop systems, tablets, smart phones or personal digital assistants, wearable devices, e.g., goggles, head mounted displays (HMDs), wrist computers, and so on. A particular implementation can include other functionality not described herein, e.g., wired and/or wireless network interfaces, media playing and/or recording capability, etc. In some implementations, one or more cameras can be built into the computer or other device into which the sensor is imbedded rather than being supplied as separate components. Further, an image analyzer can be implemented using only a subset of computer system components (e.g., as a processor executing program code, an ASIC, or a fixed-function digital signal processor, with suitable I/O interfaces to receive image data and output analysis results).
While computer system 100B is described herein with reference to particular blocks, it is to be understood that the blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. Further, the blocks need not correspond to physically distinct components. To the extent that physically distinct components are used, connections between components (e.g., for data communication) can be wired and/or wireless as desired.
With reference to
In one implementation, the gesture-recognition module 156 compares the detected gesture to a library of gestures electronically stored as records in a database, which is implemented in the sensory-analysis system 106, the electronic device, or on an external storage system. (As used herein, the term “electronically stored” includes storage in volatile or nonvolatile storage, the latter including disks, Flash memory, etc., and extends to any computationally addressable storage media (including, for example, optical storage).) For example, gestures can be stored as vectors, i.e., mathematically specified spatial trajectories, and the gesture record can have a field specifying the relevant part of the user's body making the gesture; thus, similar trajectories executed by a user's hand and head can be stored in the database as different gestures so that an application can interpret them differently.
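The record structure described above — a trajectory vector plus a body-part field, so that identical paths traced by a hand and a head resolve to different gestures — can be sketched as follows. The library contents, distance metric, and threshold are hypothetical illustrations, not the disclosed system's stored gesture database.

```python
import math

# Hypothetical gesture library: each record stores a normalized spatial
# trajectory and the body part expected to perform it, so the same path
# traced by a hand and a head can map to different gestures.
GESTURE_LIBRARY = [
    {"name": "swipe_up", "body_part": "hand", "trajectory": [(0, 0), (0, 1), (0, 2)]},
    {"name": "nod",      "body_part": "head", "trajectory": [(0, 0), (0, 1), (0, 2)]},
]

def trajectory_distance(a, b):
    """Mean point-to-point Euclidean distance between equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def match_gesture(observed, body_part, threshold=0.5):
    """Return the best-matching library record for the observed trajectory,
    restricted to records whose body-part field matches."""
    candidates = [g for g in GESTURE_LIBRARY if g["body_part"] == body_part]
    if not candidates:
        return None
    best = min(candidates, key=lambda g: trajectory_distance(observed, g["trajectory"]))
    if trajectory_distance(observed, best["trajectory"]) <= threshold:
        return best["name"]
    return None
```

An application receiving "swipe_up" versus "nod" can then interpret the same spatial trajectory differently depending on which body part performed it.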
The gesture recognition system 100B can be used to recognize the gestures described by the technology disclosed to navigate through, and take action on, 3D data visualizations. In one implementation,
The so-called point and dive gesture selects a particular data point from a 3D presentation and causes display of more information about the selected data point, in a context of other data that may not have been present in the first 3D presentation, as explained below in the context of
The point and dive gesture begins with an index finger 225 of a user pointing at a data point 215. The sensory analysis system 100A recognizes that the user is pointing an index finger at the data point 215. The gesture recognition system 100B responds by identifying the selected data point 215. In this example, a circle 220 is rendered around the data point 215 by the gesture recognition system 100B. In other implementations, other techniques for identifying the selection can be used, such as variations in color, variations in shading, or other visual cues. Once the desired data point 215 has been identified, the palms of the user's right and left hands are brought together, with fingers pointing toward the selected data point 215. In one implementation, the fingers are below a horizontal plane. In another implementation, the fingers are below a plane between the joined hands and the display. The system then renders a predefined menu on the display as an acknowledgement of the gesture, the menu being a list of views or actions to be taken on the underlying data within the high-dimensional data structure. If the menu contains a single item, the system can automatically choose that item. If there are two or more menu options, the user can point to her selection on the menu with an index finger.
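The staged sequence just described — a point selects a data point, palm convergence opens a menu, and a second point (or an automatic choice for a one-item menu) completes the dive — can be sketched as a small state machine. The class below is a hypothetical illustration; all names and states are assumptions, not the disclosed system's actual implementation.

```python
class PointAndDive:
    """Hypothetical recognizer for the point-and-dive gesture sequence."""

    def __init__(self, menu_items):
        self.menu_items = list(menu_items)
        self.state = "idle"
        self.selection = None

    def on_point(self, target):
        """An index-finger point: selects a data point, or a menu item."""
        if self.state == "idle":
            self.state = "pointed"       # e.g. render a circle around the point
            self.selection = target
        elif self.state == "menu":
            self.state = "done"          # target is a menu index here
            self.selection = target

    def on_palms_together(self):
        """Palm convergence: acknowledge the dive by presenting the menu."""
        if self.state == "pointed":
            if len(self.menu_items) == 1:
                self.state = "done"      # single option: chosen automatically
                self.selection = 0
            else:
                self.state = "menu"
```

In use, the gesture recognition module would feed recognized point and palm-convergence events into `on_point` and `on_palms_together` as they are detected in the 3D sensory space.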
The grab and pluck gesture is directed to a selectable feature, such as a flag, in the 3D data display.
In one implementation, the grab and pluck gesture begins with an open hand 321 of a user reaching toward a data point 311. The sensory analysis system 100A recognizes that the user is reaching toward the data point 311. The gesture recognition system 100B responds by identifying the data point 311 selected. In this example, a circle 320 is rendered around the data point 311 by the gesture recognition system 100B. In other implementations, other techniques for identifying the selection can be used such as variations in color, variations in shading, or other visual cues. In another implementation, the grab and pluck gesture begins with a pointing gesture that causes selection of a particular data point (like data point 311).
Once the data point of interest has been selected, the user begins to close the fingers 327 of the open hand in on each other. In one implementation, a visual cue is rendered on the display indicating the step of the gesture, whereas in another implementation, no visual cue is rendered. The sensory analysis system 100A recognizes the grab gesture when a thumb and one or more fingers of the reaching hand touch, or when the hand is closed into a fist 331. To complete the grab and pluck gesture, the user can begin moving the closed hand away from the selected object 337. This initiates a visualization change, the change being dependent on the flag that was grabbed. An example of a result of the grab and pluck gesture is illustrated in
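The completion criteria just described — a grab registered when thumb and finger touch or the hand closes, then a pluck when the closed hand moves away — can be sketched as a detector over a sequence of hand measurements. The frame representation and thresholds below are hypothetical assumptions for illustration.

```python
def recognize_grab_and_pluck(frames, contact_threshold=0.02, pull_distance=0.15):
    """Hypothetical grab-and-pluck detector. Each frame is a tuple
    (thumb_to_finger_gap_m, hand_to_target_distance_m). A grab registers
    when the gap closes below contact_threshold; the pluck completes when
    the closed hand then moves pull_distance away from the target."""
    grabbed_at = None
    for gap, dist in frames:
        if grabbed_at is None:
            if gap <= contact_threshold:
                grabbed_at = dist        # grab: fingers closed on the flag
        elif dist - grabbed_at >= pull_distance:
            return "plucked"             # closed hand pulled away: pluck complete
    return "grabbed" if grabbed_at is not None else "none"
```

A "plucked" result would then trigger the visualization change that depends on which flag was grabbed.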
The de-clutter gesture applies to a 3D display with labeled and unlabeled data points, and allows the user to remove the unlabeled points from the 3D display, leaving a simpler display of the labeled points. In one implementation,
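The de-clutter transformation — dropping unlabeled points and collapsing the remaining labeled points from three axes to two — can be sketched as a simple filter-and-project step. The dictionary point representation below is a hypothetical assumption for illustration.

```python
def declutter(points):
    """Hypothetical de-clutter transform: keep only labeled points and
    project them from three axes onto two by dropping the z coordinate.
    Each point is a dict like {"x": ..., "y": ..., "z": ..., "label": str or None}."""
    return [
        {"x": p["x"], "y": p["y"], "label": p["label"]}
        for p in points
        if p.get("label") is not None
    ]
```

The display update would then animate each surviving point from its initial position along the three axes to its final position along the remaining two.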
In one implementation of a high-dimensional data set that can be used to illustrate the three gestures disclosed, a high-dimensional data set is created that summarizes operational support statistics. Operational support can include a call center, with call tracking for hardware failures, hacking incidents, and natural disasters, where each of these categories has its own list of subsets. These subsets can include the following:
Hardware failures
Hacking incidents
Natural disasters
These operational support issues are illustrated in
Flag B 527 could be a natural disaster such as a tsunami warning that caused support staff to leave the facility, with a low number of cases 563, a high resolution time 567, but a high customer satisfaction score 505 due to a perceived execution of a good process during a natural disaster.
Flag C 525 could be a phishing scam (hack), where there are a medium number of cases 563, a medium resolution time 567, but a low customer satisfaction score 505, as it could be assumed by the customers that phishing scams are annoying at any level.
Flag D 535 could be lightning striking a data center four times. This example would show a large number of cases 563, with high resolution times 567 due to multiple effects on power. This example still shows high customer satisfaction 505 due to an understanding that there is a serious natural disaster.
Flag E 523 could be a low impact social engineering hack such as a virus hoax that results in a large number of cases 563 that are quickly resolved, but that aggravate customers so that they leave relatively low customer satisfaction scores.
As illustrated in
The 3D data display can be generated using a variety of techniques, e.g. holographic projection systems, wearable goggles or other head mounted displays (HMDs), heads up displays (HUDs). In one implementation, the 3D data is projected onto a conference room table for a meeting so it can be explored like an architect's building model. In such an implementation, attendees can view the 3D data using a head mounted display like Oculus Rift™ or wearable goggles like Google Glass™.
In other implementations, voice commands can be used in conjunction with the gestures discussed above to manipulate the 3D data display. In one implementation, a voice command can precede or follow a hand gesture such as a point gesture, a point and dive gesture, a grab and pluck gesture, or a de-clutter gesture. In some implementations, a voice command is issued to change an existing mode or share content. For instance, if there are multiple modes in a 3D data display, such as an explore mode, a filter mode, an exit mode, or an annotate mode, a voice command can cause a transition from one mode to another, or from a mode generated by a gesture to a new mode generated by the voice command. In other implementations, a voice command can perform data manipulations such as grouping or sorting data based on one or more parameters or criteria. In a further implementation, a voice command can be used to share content across an online social network or between users in an intra-organization network. Examples of such networks include Facebook™, Twitter™, YouTube™, Chatter™, and the like.
In yet another implementation, voice commands can be used to verbally issue natural language processing (NLP) queries such as “who are my top 10 sales representatives who sold the most last quarter in the North America region?” Such a verbal NLP query is responded to by generating the resulting visualizations, without requiring the user to employ one or more touch commands that usually require issuing multiple sub-commands such as grouping, sorting, and listing.
In one implementation, the technology disclosed utilizes haptic feedback to allow interaction with the 3D data display. In such an implementation, a haptic projector is used that is receptive to different haptic properties of the virtual 3D objects (e.g. flag, pin, or any other data point), including surface texture and relative elevation. The haptic projector communicates different haptic properties of the 3D objects through vibration frequencies of varying magnitudes that apply pressure on a specialized device worn by a user, e.g. a haptic-enabled glove or feeler. Such a specialized device simulates tactile sensations with application of mechanical force on the user's hand. Thus, in one anomaly detection implementation, the haptic-enabled glove detects a data point on the 3D data display that is a certain standard deviation away from a moving average. The haptic-enabled glove translates the peak of the anomalous data point into corresponding pressure applied on a user's hand. In some implementations, sensory information such as pressure or vibration is communicated to the user when the user's hand traverses the anomalous data point on the 3D data display.
In yet other implementations, a degree of pressure applied on the user's hand via the haptic-enabled glove is responsive to the degree of deviation of the anomalous data point, such that the greater the peak of the anomalous data point, the higher the applied pressure. In other implementations, the degree of pressure applied on the user's hand via the haptic-enabled glove is responsive to the magnitude of the data point's value.
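The deviation-to-pressure mapping just described can be sketched as a simple saturating function. The function name, scale factor, and saturation behavior below are hypothetical assumptions for illustration.

```python
def haptic_pressure(value, moving_avg, std_dev, max_pressure=1.0, scale=3.0):
    """Hypothetical mapping from a data point's deviation to glove pressure:
    pressure grows with the number of standard deviations from the moving
    average, saturating at max_pressure (scale sets where saturation occurs)."""
    if std_dev <= 0:
        return 0.0
    deviations = abs(value - moving_avg) / std_dev
    return min(max_pressure, max_pressure * deviations / scale)
```

With these defaults, a point three or more standard deviations from the moving average produces full pressure, while points at the average produce none.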
In a further implementation, detection of different data points is distinguished by applying pressure or other sensory information on different portions of a user's hand. For instance, a first type of flag causes vibration to be applied to an index finger of a user's hand, and a second type of flag causes vibration to be applied to a thumb of the user's hand.
In one implementation, described is a method of supporting navigation through three-dimensional (3D) data presented stereoscopically to a viewer. The method includes causing stereoscopic display of three-dimensional (3D) data to a viewer and receiving gesture data from a 3D sensor that is monitoring a 3D sensory space, the gesture data indicating a user performing a point and dive gesture sequence. The gesture sequence observed in the 3D sensory space includes an index finger pointing to a point on a surface of the 3D data display, followed by convergence of right and left hand palms. The method further causes updating of the 3D data display, based on recognition of the pointing to the surface, to graphically depict selection of data responsive to the pointing, updating of the 3D data display, based on recognition of the convergence of right and left hand palms, to include a menu and updating of the 3D data display, based on recognition of a pointing to an item on the menu, to graphically depict a selection among the menu choices.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in previous sections of the application.
In one implementation, the convergence of right and left hand palms further includes fingers pointing towards the 3D data display below a horizon. In another implementation, the pointing to the item on the menu includes using an index finger.
The method further includes receiving gesture data from the 3D sensor specifying detection of a dive gesture, wherein the received gesture data includes a signal to drill down into data selected by the pointing and causing updating of the 3D data display to graphically depict drilling down into the selected data.
The method also includes causing updating of the 3D data display to graphically depict a menu in response to the drilling down and causing updating of the 3D data display to graphically select a menu item from the menu in response to a pointing gesture performed after the drilling down.
Other implementations may include a computer implemented system to perform any of the methods described above, the system including a processor, memory coupled to the processor, and computer instructions loaded into the memory. Yet another implementation may include a tangible non-transitory computer readable storage medium impressed with computer program instructions that cause a computer to implement any of the methods described above. The tangible computer readable storage medium does not include transitory signals.
In another implementation, described is a method of supporting navigation through three-dimensional (3D) data presented stereoscopically to a viewer. The method includes causing stereoscopic display of three-dimensional (3D) data that includes selectable flags and receiving gesture data from a 3D sensor that is monitoring a 3D sensory space, the gesture data indicating a gesture sequence of a user plucking one of the selectable flags. The gesture sequence includes one or more fingers pointing at one of the selectable flags to select at least one flag, followed by one or more fingertips of a hand directed to plucking the selected flag, which is in turn followed by a hand or two or more fingers of the hand closing on and plucking the selected flag. The method further includes updating of the 3D data display, based on recognition of the finger pointing, to graphically depict selectable flags responsive to the finger pointing, updating of the 3D data display, based on recognition of the fingertips of the hand directed to plucking, to graphically identify the selected flag and updating of the 3D data display, based on recognition of the hand or two or more fingers of the hand closing on and plucking the selected flag, to graphically depict plucking of the selected flag.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in previous sections of the application.
The method also includes causing updating of the 3D data display with a change in visualization, the change dependent on the flag plucked.
Other implementations may include a computer implemented system to perform any of the methods described above, the system including a processor, memory coupled to the processor, and computer instructions loaded into the memory. Yet another implementation may include a tangible non-transitory computer readable storage medium impressed with computer program instructions that cause a computer to implement any of the methods described above. The tangible computer readable storage medium does not include transitory signals.
In a further implementation, described is a method of supporting navigation through three-dimensional (3D) data presented stereoscopically to a viewer. The method includes causing stereoscopic display of three-dimensional (3D) data that includes a plurality of data points graphically arranged along three axes of the 3D data display and receiving gesture data from a 3D sensor that is monitoring a 3D sensory space, the gesture data indicating a de-clutter gesture sequence performed by a user. The de-clutter gesture sequence includes backs of right and left hands of the user initially together and followed by, backs of the hands moving apart. The method further includes updating of the 3D data display, based on recognition of the de-clutter gesture, to graphically transition, from the plurality of data points along the three axes, to a set of points selected from the plurality and arranged along two axes of the 3D data display. The graphical transition causes updating of the 3D data display to rearrange the set of points from respective initial positions along the three axes to respective final positions along the two axes.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in previous sections of the application.
Other implementations may include a computer implemented system to perform any of the methods described above, the system including a processor, memory coupled to the processor, and computer instructions loaded into the memory. Yet another implementation may include a tangible non-transitory computer readable storage medium impressed with computer program instructions that cause a computer to implement any of the methods described above. The tangible computer readable storage medium does not include transitory signals.
The present Application for Patent is a continuation of U.S. patent application Ser. No. 15/015,010 by Kodali et al., entitled “SYSTEM AND METHOD TO NAVIGATE 3D DATA ON MOBILE AND DESKTOP” filed Feb. 3, 2016, assigned to the assignee hereof.
Prior Publication Data

Number | Date | Country
---|---|---
20200019296 A1 | Jan 2020 | US
Related U.S. Application Data

Number | Date | Country
---|---|---
Parent 15015010 | Feb 2016 | US
Child 16579695 | | US