The disclosed embodiments relate generally to the field of electronic devices, and in particular to user interfaces for electronic devices.
A graphical user interface (GUI) of a computer application often provides numerous menu options for a user to interact with the computer application and invoke commands. A user typically accesses a menu option by providing a user input at the menu option via a user input device (e.g., a mouse, a touchpad, a touch screen, a spatial operating interface, and so on). An example of a traditional menu is a linear menu that provides a sequential selection of menu options. Typically, a linear menu is arranged in a hierarchical tree and displayed at the top of a GUI. For example, a user moves a cursor to a top-level menu item of the linear menu and selects the menu item to invoke a command or to display a submenu with additional selections that include command options or further submenus. The user may select successive submenu items to invoke a command.
Another example of a traditional menu is a ribbon menu where a set of toolbars are placed on tabs in a tab-bar, typically displayed along the top of a GUI. A user selects an option on the ribbon menu to invoke a command or to display a set of additional options, typically represented by icons, along the width of the application and below the top-level set of options. However, the positions of these traditional menus are generally fixed and require the user to move the cursor across a particular distance to a position within the menu to access menu selections.
The long hierarchical structure of these traditional menus creates a number of problems. A portion of the menu may disappear from the GUI view due to space constraints. A user has to provide a user input via a user input device multiple times (e.g., multiple cursor clicks or touch taps) to reach a desired command on the menu, decreasing the user's operational efficiency. It is difficult for a user to remember the location of a command that resides somewhere in the menu hierarchy, so the user requires time to locate the command. Further, the menu is at a fixed location, so a user has to move the cursor a considerable distance to access the menu from the user's current position on the GUI. Because of these problems, users may devote their limited cognitive resources and attention to navigating and searching for targeted functions rather than to using those functions for specific tasks. These problems are amplified when users work on image-heavy applications, where users continuously switch between various functions to interact with or alter images across multiple user interfaces.
Some embodiments are illustrated by way of example and not limitation in the FIGS. of the accompanying drawings, in which:
Like reference numerals refer to corresponding parts throughout the drawings.
The present disclosure describes methods, systems, and computer program products for enhancing operational efficiency through development of muscle memory and spatial cognition. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the various aspects of different embodiments. It will be evident, however, to one skilled in the art, that any particular embodiment may be practiced without all of the specific details and/or with variations, permutations, and combinations of the various features and elements described herein.
A system for enhancing operational efficiency through development of muscle memory and spatial cognition is disclosed. Radial menus are used to develop a user's familiarity with the actions needed to cause a particular action to occur on a computer device. At first, the radial menu acts as a visual guide, updating its appearance as the user navigates through the wedges of the menu. As a user repeats certain frequently used commands, however, the speed and accuracy with which the user can navigate the radial menu to a desired menu item or icon increase, and the need for the visual guide may decrease. The computer device can detect when the user becomes sufficiently familiar with the actions needed to activate a certain command, and may forgo displaying the radial menu if it determines that the user is entering the actions needed to activate that command. However, if the user makes an unexpected action or pauses for too long, the computer device can redisplay the radial menu for ease of use. In this way the user may learn to perform certain actions quickly and efficiently without needing the radial menu.
A radial menu works as follows: in response to an initiation input (e.g., a tap gesture), an initial section of the radial menu is displayed. For example, the initial display includes a center circle and two or more options displayed around the center circle as a series of wedges. The user then indicates one of the two or more options. The computer device monitors the user input to identify user selection of one of the two or more options. In response, the user interface is updated to include an additional set of menu items based on the selected wedge. For example, if the user selects the “Edit” wedge of the radial menu, the radial menu is updated to display four additional menu items: “Copy,” “Paste,” “Cut,” and “Select All.” The additional options are displayed such that they are positioned near or adjacent to the wedge with which they are associated.
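The layered menu described above can be sketched as a tree of wedges, where each selection either reveals a further level of options or terminates at a command. The following Python sketch is illustrative only; the wedge labels, tree shape, and helper name are hypothetical, not part of the disclosure.

```python
# Hypothetical radial-menu tree: each key is a wedge label; a nested dict
# is a further level of options, and None marks a terminal command.
RADIAL_MENU = {
    "Edit": {"Copy": None, "Paste": None, "Cut": None, "Select All": None},
    "File": {"Open": None, "Save": None},
    "View": {"Zoom In": None, "Zoom Out": None},
}

def submenu_for(selection_path):
    """Walk the menu tree along the selected wedges; return the next level
    of options to display, or None if the path ends at a command."""
    node = RADIAL_MENU
    for wedge in selection_path:
        node = node[wedge]
    return list(node) if node else None
```

For instance, selecting the “Edit” wedge yields the four second-level items, while selecting “Edit” and then “Copy” yields no further level, indicating a complete command.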
Thus, selecting specific actions or commands involves navigating multiple layers of a radial menu. A respective command (or action) will always appear in the same position of the default radial menu, and thus the specific input (e.g., a series of finger gestures) used to access the respective command may remain constant. Over time, users will begin to internalize the specific input needed to access commonly used commands (e.g., a specific sequence of swipe gestures). Once the computer device determines that the user has become sufficiently familiar with the input needed for a particular command, the computer device adds that command to a list of reference commands (i.e., commands that are well known to a specific user). For example, if the computer device determines that the user is able to complete the specific user input for a particular command without waiting for the radial menu to visually update, the computer device determines that the user input is well known to the user and adds the command to the list of reference commands. In other examples, the computer device determines that an action is well known based on the average number of times a day the user performs the user input for the command.
Once a command is on the list of reference commands, the computer device does not need to display the radial menu as the user performs the user input associated with the command. Thus, when a user begins entering a user input (e.g., a gesture), the computer device compares each component of the user input (e.g., each component of a multi-component gesture) to the input components associated with the reference commands. For example, if a reference command has an associated input with three components (e.g., down, right, up), the computer device then determines whether the first component for an input is down. If the first received component does not match the first component of the reference command, the computer device is able to determine that the input does not match the input for the reference command.
If the first received component does match the first component of the reference command, the computer device then continues to monitor further input components. As each input component is received, the computer device compares it to the next component in the reference command input component list.
In accordance with a determination that any input component does not match the related input component of a reference command, the computer device then displays the appropriate radial menu. However, in accordance with a determination that the current series of input components fully matches an input for a particular reference command, the computer device does not display a radial menu to the user.
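The component-by-component matching described above amounts to a prefix test: the radial menu stays hidden only while the components received so far remain a prefix of at least one reference command's input sequence. A minimal sketch follows; the command names and component labels are hypothetical.

```python
# Hypothetical reference list: command name -> ordered input components.
REFERENCE = {
    "copy": ["down", "right", "up"],
    "save": ["down", "left"],
}

def matches_some_reference(components, reference_commands):
    """Return True if the components received so far are a prefix of the
    input sequence of at least one reference command; the device keeps
    the radial menu hidden only while this holds."""
    return any(
        seq[: len(components)] == components
        for seq in reference_commands.values()
    )
```

Here a first component of “down” matches both stored commands, so no menu is shown; a first component of “up” matches neither, so the radial menu would be displayed.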
In some example embodiments, the computer device stores one or more gesture macros, wherein a gesture macro is a simple gesture, well known to the user, that is attached to a more complicated gesture or command. The user can then enter the simpler gesture macro to activate the more complicated gesture or series of gestures.
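A gesture macro can be modeled as a lookup that expands a short, well-known gesture into the longer input it stands for. The sketch below is a hypothetical illustration; the macro table and gesture labels are invented for the example.

```python
# Hypothetical macro table: a short gesture expands into the longer
# multi-component input it is attached to.
GESTURE_MACROS = {
    ("circle",): ["down", "right", "up"],  # e.g., could stand for "copy"
}

def expand(components):
    """If the received components form a stored macro, replace them with
    the full gesture sequence; otherwise pass them through unchanged."""
    return GESTURE_MACROS.get(tuple(components), list(components))
```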
In some example embodiments, the computer device, when detecting user input of one or more components of a reference command, displays a likely pattern for the reference command (e.g., a visual gesture path on a touch screen). In some example embodiments, this pattern is displayed only while the user is learning a new command and is eventually no longer needed.
In some example embodiments, the computer device has the technology to recognize brain patterns as input (e.g., a head mounted sensor device). The computer device then senses neural activity to detect components of a multi-component command.
In some example embodiments, the computer device enables a user to validate the user's identity through a reference command. For example, a user-specific validation command (e.g., similar to a gesture password) has a specific combination of speed, motion, size, and so on that the user uses to log into the computer device.
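One way to sketch such gesture-based validation is to compare the observed gesture's measured features against a stored per-user template within a tolerance. The feature names, tolerance value, and function below are hypothetical assumptions, not details from the disclosure.

```python
TOLERANCE = 0.2  # hypothetical fractional tolerance per feature

def validate_user(observed, template):
    """Accept the gesture only if every measured feature (speed, size,
    and so on) is within TOLERANCE of the stored per-user template."""
    return all(
        abs(observed[k] - template[k]) <= TOLERANCE * template[k]
        for k in template
    )
```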
In some embodiments, as shown in
As shown in
As shown in
In some embodiments, the user profile data 130 includes data associated with the user, including but not limited to user name, user age, user location, user activity data (e.g., applications and commands used by the user), and other data related to and obtained from the user.
The command gesture data 132 includes data related to the radial menu and the gestures associated with a plurality of commands that may be executed by the computer device 120. The command gesture data 132 also includes one or more reference commands, wherein a reference command is a command that the user is sufficiently familiar with that the user can execute the gesture associated with the command without needing the radial menu to be displayed (e.g., it is well-known to the user).
The computer device 120 provides a broad range of other applications and services that allow users the opportunity to share and receive information, often customized to the interests of the users.
In some embodiments, the application logic layer includes various application server modules, which, in conjunction with the user interface module(s) 122, generate various user interfaces to receive input from and deliver output to a user. In some embodiments, individual application modules are used to implement the functionality associated with various applications, services, and features of the computer device 120. For instance, a messaging application, such as an email application, an instant messaging application, or some hybrid or variation of the two, may be implemented with one or more application modules. Similarly, a web browser enabling members to view web pages may be implemented with one or more application modules. Of course, other applications or services that utilize a radial menu module 124 and an input analysis module 126 may be separately implemented in their own application modules.
In addition to the various application server modules, the application logic layer includes a radial menu module 124 and an input analysis module 126. As illustrated in
Generally, the radial menu module 124 displays and updates a radial menu as a user navigates through it. In some example embodiments, the radial menu module 124 only displays a radial menu in response to a specific initiation input from a user. An initiation input is any input that lets the computer device 120 know that the next input will be command input, as opposed to a regular user input. For example, a tap and hold gesture on a specific section of a touch screen display, pressing a specific button on a smart phone, or pressing a specific keyboard key combination may all alert the computer device 120 that the user wishes to input a command.
Once the initiation input is received by the computer device 120, the radial menu module 124 causes the basic radial menu to be displayed in the user interface. The basic radial menu includes one or more high-level menu options, each of which is positioned around a central area (e.g., a circle that is positioned where the initiation input was detected). The computer device 120 then detects further input from the user to select one of the high-level menu options (e.g., the high-level options may include Edit, File, View, Input, and so on). In some example embodiments, the further input includes an input component (e.g., a gesture component) from the original input position to the section of the radial menu that represents one of the high-level menu options.
In response to input showing user selection of a respective high-level menu option in the plurality of displayed high-level menu options (e.g., movement of a finger into the area of the display representing the respective high-level menu option), the radial menu module 124 then updates the radial menu to include a second level of options. The second level of options is determined by the selected high-level option and is displayed proximate to the selected high-level option. For example, if the selected high-level option was “View”, the second-level options may include “Zoom in”, “Zoom out”, “Full Screen”, “Minimize”, and so on. These second-level options are then displayed adjacent to the “View” high-level option.
The radial menu module 124 then detects a second command component input from the user. The second command component represents selection of one of the displayed second-level options (e.g., a gesture to the second-level options). If the selected second-level option represents a completed command, the computer device 120 then executes the selected command. However, the second-level option may represent a further group of options (e.g., if the user selects “Zoom In”, there are many different zoom amounts that the user can select). In response, the radial menu module 124 would display yet another level of command options (e.g., third-level options). Indeed, the radial menu module 124 can display an arbitrary number of option levels.
Generally, the input analysis module 126 analyzes input received from a user to determine whether the user has learned specific command inputs and to determine whether a specific set of gesture components is part of a reference command input.
Each time a user uses the radial menu to select a particular command (e.g., through a plurality of input components), the input analysis module 126 determines whether that command should be added to the list of reference commands. In some example embodiments, the input analysis module 126 determines that a given command should be added to the list of reference commands if the user executes the command within a predefined amount of time (e.g., a user who executes a multi-component command very quickly likely knows the command). In some example embodiments, the input analysis module 126 determines that a given command is well known to the user, and should be added as a reference command, if the user executes the command such that at least some components of the multi-component command are received before the radial menu has been updated to display the associated options (e.g., the user is entering the full multi-component command faster than the radial menu can update).
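The timing-based promotion heuristic just described can be sketched as follows: a command joins the reference list after several consecutive fast executions. The threshold, streak length, and function names are hypothetical assumptions chosen for illustration.

```python
from collections import defaultdict, deque

FAST_THRESHOLD_S = 0.8   # hypothetical: "fast enough" completion time
REQUIRED_STREAK = 3      # hypothetical: consecutive fast runs needed

_history = defaultdict(lambda: deque(maxlen=REQUIRED_STREAK))

def record_execution(command, elapsed_s, reference_commands, command_input):
    """Record how long the user took to complete the command's input;
    promote the command to the reference list once the last several
    executions were all within the threshold."""
    _history[command].append(elapsed_s)
    streak = _history[command]
    if len(streak) == REQUIRED_STREAK and all(
        t <= FAST_THRESHOLD_S for t in streak
    ):
        reference_commands[command] = command_input
```

Requiring a streak rather than a single fast execution is one way to avoid promoting a command the user completed quickly by accident; the disclosure also contemplates other signals, such as daily usage frequency.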
The input analysis module 126 builds a list of reference commands that the user is able to enter without needing the radial menu for reference. The input analysis module 126 then analyzes each input component to determine whether to display the radial menu or not.
The input analysis module 126 detects a first component of a command input. The input analysis module 126 then determines whether the detected first component matches the first component of any of the stored list of reference commands. In accordance with a determination that the first component does not match the first component of any reference command, the input analysis module 126 causes the radial menu to be displayed.
In accordance with a determination that the first component matches at least one of the reference commands, the input analysis module 126 prevents the radial menu from being displayed.
This process repeats, with the input analysis module 126 analyzing each new input component to determine whether the combined already received components match, as a group and in order, the corresponding components of at least one reference command. If at any time the combined components no longer match a reference command, the input analysis module 126 causes the radial menu module 124 to display the radial menu at the correct depth level. Thus, if the computer device 120 has already received two components of a multi-component input command, the radial menu is displayed with the extra option levels already visible based on the previously received components.
Similarly, if the user pauses and fails to enter another input component for a particular amount of time, the input analysis module 126 causes the radial menu to be displayed. In this way, a user can enter the components that the user is comfortable with, and if the user forgets the next step the input analysis module 126 will cause the radial menu to be displayed at the appropriate level.
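The prefix-matching and pause-timeout behavior described in the preceding paragraphs can be combined in a small tracker: the menu stays hidden while the received components still match some reference command and the user has not paused too long; otherwise the menu is shown at the depth of the components already received. This is a minimal sketch under assumed names and a hypothetical timeout value.

```python
import time

PAUSE_TIMEOUT_S = 1.5  # hypothetical redisplay timeout

class InputTracker:
    """Track received input components and report when, and at what
    depth, the radial menu should be displayed."""

    def __init__(self, reference_commands):
        self.reference = reference_commands
        self.components = []
        self.last_input = time.monotonic()

    def add(self, component):
        self.components.append(component)
        self.last_input = time.monotonic()

    def menu_depth_to_show(self, now=None):
        """Return None while the components remain a prefix of some
        reference command and the user has not paused; otherwise return
        the depth (components already received) at which to display
        the radial menu."""
        now = time.monotonic() if now is None else now
        prefix_ok = any(
            seq[: len(self.components)] == self.components
            for seq in self.reference.values()
        )
        if prefix_ok and (now - self.last_input) <= PAUSE_TIMEOUT_S:
            return None
        return len(self.components)
```

Showing the menu at `len(self.components)` reflects the behavior above: levels corresponding to components already entered are treated as already navigated.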
In some example embodiments, the input analysis module 126 receives a component that completes a full multi-component command. In response, the input analysis module 126 then executes the command.
In some example embodiments, a third party server 150 stores user data 152. This user data 152 can incorporate any information about the user, including, but not limited to, user preferences, user history, user location, user demographic information, and command gesture data 132 for the user. In some example embodiments, the user can switch from one computer device 120 to a different computer device and import all the relevant user profile data from the user data 152 stored at the third party server 150. In this way, the user's reference multi-component command data will be available at the new device and the user's muscle memory can be utilized.
Memory 212 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double data rate random-access memory (DDR RAM), or other random-access solid-state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 212 may optionally include one or more storage devices remotely located from the CPU(s) 202. Memory 212, or alternatively, the non-volatile memory device(s) within memory 212, comprise(s) a non-transitory computer readable storage medium.
In some embodiments, memory 212 or the computer readable storage medium of memory 212 stores the following programs, modules, and data structures, or a subset thereof:
The radial menu 300 includes four wedges (wedges 302, 304, 306, and 308). Each wedge represents a group of menu items.
The user may further continue to navigate to select a menu item from the secondary level of menu items 310 to 316. In some example embodiments, a menu item is selected by placing a cursor over the menu item for a predetermined amount of time. According to one embodiment, the predetermined amount of time for hovering over a particular wedge or menu item is customizable by the user. For example, the hovering time may be one half of a second (0.5 seconds). The hovering time may be the same across the menu or may be set differently for each wedge or menu item.
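The dwell-based selection above can be sketched as a default hover time with optional per-item overrides; the item labels, override table, and function name here are hypothetical.

```python
DEFAULT_HOVER_S = 0.5  # default dwell time from the example above

# Hypothetical per-wedge/per-item overrides of the default dwell time.
hover_times = {"Edit": 0.5, "Select All": 0.75}

def is_selected(item, hover_duration_s):
    """An item is selected once the cursor has hovered over it for its
    configured dwell time (falling back to the default)."""
    return hover_duration_s >= hover_times.get(item, DEFAULT_HOVER_S)
```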
While
In some example embodiments, the computer device 120 provides for selection of a menu item on a radial menu with a single continuous user movement (e.g., a gesture) via various user input devices (e.g., a mouse, a touchpad, a touch screen, or a spatial gesture interface). For instance, the movement can be based on dragging a mouse, a finger gesture across a touchpad or a touch screen, or a spatial hand gesture. In this case, a user does not need to read or comprehend the menu before selecting the menu item using a series of selections. Instead, the user can rely on muscle memory to perform a single motion. The present system interprets the movement to determine and actuate the menu item. According to one embodiment, a user (e.g., an expert user) may perform a single continuous movement without waiting for the radial menu to be displayed on a display screen.
The multi-component user input shown in
The multi-component user input shown in
The multi-component user input shown in
The multi-component user inputs for each of the three reference commands are stored by the computer device (e.g., the computer device 120 in FIG. 1) such that the computer device knows the direction, length, and time of each component in the multi-component user input.
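A stored component carrying the direction, length, and time mentioned above could be represented as a small record; the field names and units below are illustrative assumptions rather than details from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class InputComponent:
    """One component of a stored multi-component user input, mirroring
    the direction, length, and time recorded for each component."""
    direction: str   # e.g., "down", "right" (hypothetical labels)
    length: float    # travel distance, e.g., in pixels (assumed unit)
    time_s: float    # duration of the component in seconds
```

A reference command would then be stored as an ordered list of such records, one per component.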
In accordance with a determination that the first two components received (the input components 502 and 504) do not match any of the multi-component commands stored as reference commands, the input analysis module (e.g., the input analysis module 126 of
The input analysis module (e.g., the input analysis module 126 of
In some embodiments, the method 600 is performed at a computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
In accordance with a determination that the list of input components does not match any reference command, the computer device (e.g., the computer device 120 in
In accordance with a determination that the list of input components does match at least one reference command, the computer device (e.g., the computer device 120 in
In accordance with a determination that the multi-component input list does not correspond to a complete command, the computer device (e.g., the computer device 120 in
In some embodiments, the method is performed at a computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
In accordance with a determination that the command input is received within a predetermined time window, the computer device (e.g., the computer device 120 in
In some example embodiments, the computer device (e.g., the computer device 120 in
The computer device (e.g., the computer device 120 in
In some example embodiments, input components are finger gestures on a touch screen display. For example, input components may be tap gestures, hold gestures, tap and hold gestures, multi-finger gestures, swipe gestures, and so on.
In some example embodiments, each reference command in the list of reference commands includes one or more components. In some example embodiments, prior to receiving command input, the computer device receives a command initiation input.
The computer device (e.g., the computer device 120 in
In some embodiments, the method is performed at a computer device (e.g., the computer device 120 in
In accordance with a determination that the first input component matches a first input component of at least one reference command in the list of reference commands (716), the computer device (e.g., the computer device 120 in
In some example embodiments, the computer device (e.g., the computer device 120 in
In accordance with a determination that the amount of time that has passed since the last input component was received exceeds a predetermined amount of time (722), the computer device (e.g., the computer device 120 in
In some example embodiments, the computer device (e.g., the computer device 120 in
In accordance with a determination that the first component does not match a first component of at least one reference command in the list of reference commands, the computer device (e.g., the computer device 120 in
The operating system 802 may manage hardware resources and provide common services. The operating system 802 may include, for example, a kernel 820, services 822, and drivers 824. The kernel 820 may act as an abstraction layer between the hardware and the other software layers. For example, the kernel 820 may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services 822 may provide other common services for the other software layers. The drivers 824 may be responsible for controlling and/or interfacing with the underlying hardware. For instance, the drivers 824 may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth.
The libraries 804 may provide a low-level common infrastructure that may be utilized by the applications 808. The libraries 804 may include system libraries (e.g., C standard library) 830 that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 804 may include API libraries 832 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, or PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries 804 may also include a wide variety of other libraries 834 to provide many other APIs to the applications 808.
The frameworks 806 may provide a high-level common infrastructure that may be utilized by the applications 808. For example, the frameworks 806 may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks 806 may provide a broad spectrum of other APIs that may be utilized by the applications 808, some of which may be specific to a particular operating system or platform.
The applications 808 include a home application 850, a contacts application 852, a browser application 854, a book reader application 856, a location application 858, a media application 860, a messaging application 862, a game application 864, and a broad assortment of other applications, such as a third party application 866. In a specific example, the third party application 866 (e.g., an application developed using the Android™ or iOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or other mobile operating systems. In this example, the third party application 866 may invoke the API calls 810 provided by the operating system 802 to facilitate functionality described herein.
The machine 900 may include processors 910, memory 930, and I/O components 950, which may be configured to communicate with each other via a bus 905. In an example embodiment, the processors 910 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 915 and a processor 920 that may execute instructions 925. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (also referred to as “cores”) that may execute instructions contemporaneously. Although
The memory 930 may include a main memory 918, a static memory 940, and a storage unit 945 accessible to the processors 910 via the bus 905. The storage unit 945 may include a machine-readable medium 947 on which are stored the instructions 925 embodying any one or more of the methodologies or functions described herein. The instructions 925 may also reside, completely or at least partially, within the main memory 918, within the static memory 940, within at least one of the processors 910 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900. Accordingly, the main memory 918, the static memory 940, and the processors 910 may be considered machine-readable media 947.
As used herein, the term “memory” refers to a machine-readable medium 947 able to store data temporarily or permanently, and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 947 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions 925. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., the instructions 925) for execution by a machine (e.g., the machine 900), such that the instructions, when executed by one or more processors of the machine (e.g., the processors 910), cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., Erasable Programmable Read-Only Memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se.
The I/O components 950 may include a wide variety of components to receive input, provide and/or produce output, transmit information, exchange information, capture measurements, and so on. It will be appreciated that the I/O components 950 may include many other components that are not shown in
In further example embodiments, the I/O components 950 may include biometric components 956, motion components 958, environmental components 960, and/or position components 962, among a wide array of other components. For example, the biometric components 956 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, finger print identification, or electroencephalogram based identification), and the like. The motion components 958 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 960 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), and/or other components that may provide indications, measurements, and/or signals corresponding to a surrounding physical environment. The position components 962 may include location sensor components (e.g., a Global Position System (GPS) receiver component), altitude sensor components (e.g., altimeters and/or barometers that detect air pressure, from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication may be implemented using a wide variety of technologies. The I/O components 950 may include communication components 964 operable to couple the machine 900 to a network 980 and/or to devices 970 via a coupling 982 and a coupling 992, respectively. For example, the communication components 964 may include a network interface component or another suitable device to interface with the network 980. In further examples, communication components 964 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 970 may be another machine and/or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)).
Moreover, the communication components 964 may detect identifiers and/or include components operable to detect identifiers. For example, the communication components 964 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), and so on. In addition, a variety of information may be derived via the communication components 964, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
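As one concrete illustration of the optical codes listed above, the final digit of a UPC-A bar code is a checksum that an optical reader component can verify after decoding. The sketch below shows the standard UPC-A check-digit calculation; the function names are illustrative and not part of this disclosure.

```python
def upc_a_check_digit(digits11):
    """Compute the UPC-A check digit for the first 11 digits.
    Digits in odd positions (1st, 3rd, ...) are weighted by 3,
    digits in even positions by 1."""
    odd = sum(int(d) for d in digits11[0::2]) * 3
    even = sum(int(d) for d in digits11[1::2])
    return (10 - (odd + even) % 10) % 10

def is_valid_upc_a(digits12):
    """Validate a full 12-digit UPC-A code against its check digit."""
    return upc_a_check_digit(digits12[:11]) == int(digits12[11])
```

For instance, the code "036000291452" validates, while altering its last digit causes validation to fail — which is how a reader component can reject a misread scan.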
In various example embodiments, one or more portions of the network 980 may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 980 or a portion of the network 980 may include a wireless or cellular network and the coupling 982 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 982 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology.
The instructions 925 may be transmitted and/or received over the network 980 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 964) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 925 may be transmitted and/or received using a transmission medium via the coupling 992 (e.g., a peer-to-peer coupling) to the devices 970. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 925 for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
Furthermore, the machine-readable medium 947 is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium 947 “non-transitory” should not be construed to mean that the medium is incapable of movement; the medium should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium 947 is tangible, the medium may be considered to be a machine-readable device.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The foregoing description has, for purposes of explanation, been provided with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the possible embodiments to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles involved and their practical applications, to thereby enable others skilled in the art to best utilize the various embodiments with various modifications as are suited to the particular use contemplated.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a “first contact” could be termed a “second contact,” and, similarly, a “second contact” could be termed a “first contact,” without departing from the scope of the present embodiments. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if (a stated condition or event) is detected” may be construed to mean “upon determining (the stated condition or event)” or “in response to determining (the stated condition or event)” or “upon detecting (the stated condition or event)” or “in response to detecting (the stated condition or event),” depending on the context.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.