This description relates to human input devices.
Many different input devices and associated techniques have been developed for the purpose of enabling users to interact with computer applications and related hardware/software. For example, the mouse, the keyboard, the stylus, and many other such devices and related techniques have long been used and are widely known to provide users with an ability to, e.g., input data, manipulate functionalities of software applications, and otherwise interact with such software applications and related computing platforms.
In related examples, touch screens have been developed which enable users to interact with software applications and related computing platforms in easy, intuitive manners, using finger-based interactions between the user and the related input device. For example, such touch screens may enable combinations or variations of finger motions, which are commonly referred to as gestures, and which are designed to result in specific, corresponding actions on the part of a corresponding application/operating system/platform. For example, such gestures may include a “pinching” motion using two fingers, to zoom out in a display setting, or, conversely, a “spreading” motion of expanding two fingers apart from one another to zoom in within such display settings. Further, devices exist which enable users to interact with software without requiring a touchscreen, while still supporting gesture-based interactions. For example, some devices include accelerometers, gyroscopes, and/or other motion-sensing or motion-related technologies to detect user motions in a three-dimensional space, and to translate such motions into software commands. Somewhat similarly, techniques exist for detecting user body movements, and for translating such body movements into software commands.
As referenced above, such input devices and related techniques may be well-suited to provide their intended functions in the specific context(s) in which they were created, and in which they have been developed and implemented. However, outside of these specific contexts, such input devices and related techniques may be highly limited in their ability to provide a desired function and/or obtain an intended result.
For example, many such input devices and related techniques developed at a particular time and/or for a particular computing platform may be unsuitable for use in a different context than the context in which they were developed. Moreover, many such input devices and related techniques may be highly proprietary, and/or may otherwise be difficult to configure or customize across multiple applications. Still further, such input devices and related techniques, for the above and related reasons, may be difficult or impossible to use in a collaborative fashion (e.g., between two or more collaborating users).
As a result, users may be unable to interact with computer applications and related platforms, and/or with one another, in a desired fashion. Consequently, an enjoyment and productivity of such users may be limited, and full benefits of the computer applications and related platforms may fail to be realized.
According to one general aspect, a computer system may include instructions recorded on a computer-readable storage medium and readable by at least one processor. The system may include an input handler configured to cause the at least one processor to receive first human input events from at least one human input device and from at least one user, associate the first human input events with a first identifier, receive second human input events from the at least one human input device from the at least one user, and associate the second human input events with a second identifier. The system may include a command instructor configured to cause the at least one processor to relate the first human input events and the second human input events to commands of at least one application, and instruct the at least one application to execute the commands including correlating each executed command with the first identifier or the second identifier.
According to another general aspect, a computer-implemented method for causing at least one processor to execute instructions recorded on a computer-readable storage medium may include receiving first human input events from at least one human input device and from at least one user, associating the first human input events with a first identifier, and receiving second human input events from the at least one human input device from the at least one user. The method may further include associating the second human input events with a second identifier, relating the first human input events and the second human input events to commands of at least one application, and instructing the at least one application to execute the commands including correlating each executed command with the first identifier or the second identifier.
According to another general aspect, a computer program product may be tangibly embodied on a computer-readable medium and may comprise instructions that, when executed, are configured to cause at least one processor to receive first human input events from at least one human input device and from at least one user, associate the first human input events with a first identifier, receive second human input events from the at least one human input device from the at least one user, and associate the second human input events with a second identifier. The instructions, when executed, may further cause the at least one processor to relate the first human input events and the second human input events to commands of at least one application, and to instruct the at least one application to execute the commands including correlating each executed command with the first identifier or the second identifier.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
More specifically, as described in detail below, the framework 102 may be configured to capture and/or otherwise utilize raw input data received from the at least one human input device 104 and representing input actions of the at least one user 106. The framework 102 may thereafter relate such received data to native functionalities of the application 108, so as to thereby provide resulting executions of application commands, as represented by executed commands 110, 112 of
Accordingly, the framework 102 may enable the at least one user 106 to obtain the executed commands 110, 112, in a manner which may not otherwise be supported or provided by the at least one human input device 104 and/or the at least one application 108. Moreover, as also described in detail below, the framework 102 may enable the at least one user 106 to configure and otherwise customize such functionalities in a desired fashion, and in a manner which is flexible and convenient.
In the example of
For example, as may be understood from the above description, the at least one human input device 104 may represent a multi-touch device (e.g., touch screen) which is designed to capture finger movements (and combinations thereof) of the at least one user 106. More specifically, for example, such multi-touch devices may include a capacitive touch screen which detects the various finger movements and combinations thereof, and captures such movements/combinations as positional data defining movements of the user's fingers (or other body parts or pointing elements) in a two-dimensional plane defined by the capacitive touch screen. In other words, the resulting raw data captured by the at least one human input device 104 may provide (X, Y) coordinates of the finger movements/combinations (or X, Y, Z coordinates in the case of 3-dimensional devices), with respect to a frame of reference defined by the at least one human input device 104 (i.e., the touch screen) itself.
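Purely for purposes of illustration, the following Python sketch shows one possible in-memory representation of such positional raw data; the field names and example values are assumptions made for the sketch, and are not mandated by any particular human input device or protocol.

from dataclasses import dataclass


@dataclass
class TouchPoint:
    contact_id: int    # distinguishes simultaneous fingers on the surface
    x: float           # X coordinate in the device's own frame of reference
    y: float           # Y coordinate in the device's own frame of reference
    timestamp_ms: int  # capture time, useful for ordering and gesture detection


# Example: two fingers detected at the same instant (e.g., the start of a pinch).
frame = [
    TouchPoint(contact_id=0, x=120.0, y=340.0, timestamp_ms=1000),
    TouchPoint(contact_id=1, x=480.0, y=350.0, timestamp_ms=1000),
]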
It will be appreciated that many other types and variations of human input devices may be used, as well. For example, the touchscreens just referenced may be configured to interact with other parts of the user besides the user's fingers. Moreover, motion-sensing devices may be used to detect/track user movements/gestures or other actions. Still further, some devices may use appropriate optical techniques to track current body movements of a user, irrespective of any device (or lack thereof) being held, touched, or otherwise directly accessed by the user. In these and various other implementations, detected body movements may be mapped to corresponding gestures and ultimately to corresponding application commands and associated functions/actions.
During normal or conventional operations of these and other human input devices, captured raw data may be encapsulated for transmission and/or storage, using a standard and/or proprietary protocol. More particularly, for example, the resulting encapsulated data may conventionally be processed in the context of an operating system used to implement one or more applications designed to interact with the conventional multi-touch touch screen and/or other human input device(s), such as those just referenced. For example, in such conventional contexts, such human input devices may be utilized to control movement of a cursor within and across a desktop environment and/or multiple applications executing in the same operating context as the desktop environment.
In the example of the system 100 of
Although the just-provided examples discuss implementations of the at least one human input device 104 including one or more multi-touch interactive touch screens, motion-sensing devices, or touchless interactive devices, or other human input devices (or combinations thereof), it may be appreciated that the at least one human input device 104 may represent virtually any such device designed to capture movements or other actions of the at least one user 106. For example, a touch pad or other touch-based device may be utilized as the at least one human input device 104. In still other examples, the at least one human input device 104 may represent hardware and/or software used to capture spoken words of the at least one user 106, and to perform voice recognition thereon in order to obtain raw data representing the spoken words of the at least one user 106. Still further, as referenced above, the at least one human input device 104 may represent a device designed to track three-dimensional movements of the at least one user 106 within a space surrounding the at least one user 106. Of course, the at least one human input device 104 also may represent various other conventional input devices, such as, e.g., a mouse, a keyboard, a stylus, or virtually any other known or not-yet-known human input device.
Nonetheless, for the sake of simplicity and conciseness of explanation, the following description is provided primarily in the context of implementations of the at least one human input device 104 which include a multi-touch interactive screen/surface. Consequently, the executed commands 110, 112 of the at least one application 108 are illustrated as being displayed within the context of a graphical user interface (GUI) 114 provided on a display 116 (which may itself represent virtually any known or not-yet-known display, including, e.g., an LCD or LED-based monitor of a laptop, netbook, notebook, tablet, or desktop computer, and/or a display of a Smartphone or other mobile computing device). Thus, in this context, the GUI 114 may be understood to represent, e.g., a proprietary or customized user interface of the at least one application 108, or, in other example embodiments, a more general or generic user interface, such as a conventional web browser.
Meanwhile, the at least one application 108 may be understood to represent, for example, an application running locally on at least one computing device 132 associated with the display 116 (as described in more detail below). In additional or alternative examples, the at least one application 108 may be understood to represent a remote application, e.g., a web-based or cloud application which is accessed by the at least one user 106 over the public internet or other appropriate local or wide area network (e.g., a corporate intranet).
Thus, although operations of any and all such applications may be understood to be represented and displayed within a graphical user interface such as the GUI 114 of the display 116, it may nonetheless be appreciated that the executed commands 110, 112 may be provided in other appropriate contexts, as well. For example, the at least one application 108 may be configured, at least in part, to provide audible, haptic, or other types of output, in addition to, or as an alternative to, the type of visual output provided within the display 116. For example, the executed command 110 may include or represent an audible sound provided within the context of the GUI 114 by the at least one application 108. In other examples, the executed command 110 may represent a haptic output (e.g., a vibration) of a controller device used by the at least one user 106 (including, possibly, the at least one human input device 104 as such a controller).
In the example of
To give specific, non-limiting examples, the at least one human input device 104 may be configured to communicate using Bluetooth or other wireless communication techniques with connected computing devices (e.g., with operating systems thereof). In such contexts, then, the interface 118 may represent hardware and/or software designed to intercept or otherwise obtain such wireless communications, to thereby obtain the raw data related to the human input events received from the at least one user 106, for use thereof by the framework 102. For example, the interface 118 may represent, or be included in, a hardware dongle which may be connected to an appropriate port of the at least one computing device 132.
In other specific examples, it may occur that the at least one human input device 104 is connected to the at least one computing device 132 by a wired connection (e.g., by way of a universal serial bus (USB) connection). In such contexts, the interface 118 may represent, or be included in or provided in conjunction with, an appropriate device driver for the at least one human input device 104 executing in the context of the at least one computing device 132.
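As a purely hypothetical sketch of such an interface in the wired case, the following Python code reads encapsulated packets from a device node; the device path and packet size are illustrative placeholders only, and a wireless (e.g., Bluetooth) implementation would instead obtain packets from the relevant wireless stack.

class WiredInterface:
    """Illustrative stand-in for the interface 118 in a wired (e.g., USB) scenario."""

    def __init__(self, device_path="/dev/hidraw0", packet_size=64):
        self.device_path = device_path  # hypothetical device node
        self.packet_size = packet_size  # assumed fixed packet length

    def packets(self):
        """Yield raw, still-encapsulated packets as they arrive from the device."""
        with open(self.device_path, "rb") as device:
            while True:
                packet = device.read(self.packet_size)
                if not packet:
                    break
                yield packet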
Thus, in these and other example scenarios, the interface 118 illustrates that the framework 102 may be configured to interact and communicate with the at least one human input device 104, or at least to receive and utilize communications therefrom. In some example implementations, the at least one human input device 104 may continue any standard or conventional communications with any computing devices connected thereto, including, potentially, the at least one computing device 132 itself, in parallel with the above-referenced communications of the at least one computing device 132 with the interface 118 of the framework 102. In other example embodiments, the communications of the at least one human input device 104 with the interface 118 may preempt or supersede any standard or conventional communications of the at least one human input device 104 (e.g., the interface 118 may be configured to block any such communications which may be undesired by the at least one user 106 and/or which may interfere with operations of the framework 102).
Thus, by way of the interface 118, an input handler 120 of the framework 102 may be configured to receive the output of the at least one human input device 104. For example, in the specific examples referenced above, the input handler 120 may receive Bluetooth or other wireless packets transmitted by the at least one human input device 104. Similarly, the input handler 120 may be configured to receive corresponding packets received via a hardwired connection with the interface 118, as would be appropriate, depending upon a nature and type of such a connection. Consequently, it may be appreciated that the input handler 120 is extensible to a variety of different devices, including devices not specifically mentioned here and/or future devices.
In example scenarios in which a plurality of distinguishable streams of raw data representing separate sets of human input events are received at the input handler 120, the input handler 120 may be configured to assign corresponding identifiers to the separate/distinguishable streams of human input events. For example, as described in more detail below, and as referenced above, the at least one human input device 104 may represent two or more such human input devices. For example, a plurality of such human input devices may be used by a single user (e.g., using a left and right hand of the user), and/or by two or more users, with each of the two or more users utilizing a corresponding human input device of the at least one human input device 104. Still further, it may occur that a plurality of users of the at least one user 106 wish to collaborate using the at least one application 108.
In these and other example scenarios, some of which are described in more detail below, the input handler 120 may be configured to assign a unique identifier to data received from the corresponding stream of human input events. For example, in various ones of the above examples, an individual identifier may be assigned to each of two or more users of the at least one user 106. Similarly, such an identifier may be assigned to the raw data representing two or more interactive touch surfaces (and/or representing a defined subset or subsets of such interactive touch surfaces).
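By way of a simplified, non-limiting sketch, the following Python code illustrates how an input handler might assign a stable identifier to each distinguishable stream of human input events; how a "source" is distinguished (per device, per user, or per region of a surface) is an assumption of the example.

from itertools import count


class InputHandler:
    def __init__(self):
        self._next_id = count(1)
        self._ids_by_source = {}

    def identifier_for(self, source_key):
        """Return the identifier already assigned to a source, or assign a new one."""
        if source_key not in self._ids_by_source:
            self._ids_by_source[source_key] = next(self._next_id)
        return self._ids_by_source[source_key]

    def handle(self, source_key, raw_event):
        """Tag an incoming raw event with the identifier of its originating stream."""
        return {"id": self.identifier_for(source_key), "event": raw_event}


handler = InputHandler()
tagged = handler.handle("touch-surface-A", {"x": 120, "y": 340})
# tagged -> {"id": 1, "event": {"x": 120, "y": 340}}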
A data extractor 122 may be configured to extract the raw data from such packets, and/or to otherwise obtain such raw data as included within the standard and/or proprietary protocol used by the at least one human input device 104 for outputting human input events received from the at least one user 106. As referenced above, such standard or conventional communications of the at least one human input device 104 utilizing a standard transmission protocol to encapsulate or otherwise transform raw data captured from the at least one user 106 may be configured to enable execution of the at least one human input device 104 as part of a process flow of an operating system of the at least one computing device 132. In other words, standard or conventional communications received from the at least one human input device 104 may already be configured to be processed by the operating system of the at least one computing device 132 across a plurality of supported applications. However, as also described in reference to the above, such communications and associated processing, by themselves, may limit a flexibility and configurability of the at least one human input device 104, particularly across two or more applications and/or in the context of collaborations among two or more users, and/or across two or more (same or different) implementations of the at least one human input device 104.
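For illustration only, the following Python sketch shows one way a data extractor might strip a (purely hypothetical) fixed-size header and decode a positional payload; actual devices use their own standard or proprietary encapsulation, which this example does not attempt to model.

import struct

HEADER_SIZE = 4          # assumed 4-byte header, for illustration only
POINT_FORMAT = "<Bff"    # assumed layout: contact id (1 byte) + x, y as 32-bit floats
POINT_SIZE = struct.calcsize(POINT_FORMAT)


def extract_points(packet: bytes):
    """Return the (contact_id, x, y) tuples carried in one encapsulated packet."""
    payload = packet[HEADER_SIZE:]
    points = []
    for offset in range(0, len(payload) - POINT_SIZE + 1, POINT_SIZE):
        points.append(struct.unpack_from(POINT_FORMAT, payload, offset))
    return points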
To give but a few specific examples, it may occur that the at least one application 108 was designed for use with a keyboard and mouse, and may have limited native functionality or ability to interact with a multi-touch surface as the at least one human input device 104 (e.g., may not recognize gestures). Conversely, the at least one application 108 may have been designed specifically for use in the context of such multi-touch interactive touch screens, and consequently may not be fully functional within a context in which only a keyboard or mouse is available as the at least one human input device 104.
Moreover, even when the at least one application 108 is, in fact, designed for use with a multi-touch interactive touch screen (or other desired type of human input device), the above-referenced and other conventional/standard implementations may be insufficient or otherwise unsatisfactory. For example, it may be observed in conventional settings that it may be difficult or impossible to enable multiple human input devices 104 in the context of an application such as the at least one application 108. For example, plugging two mouse devices and/or a mouse and touch pad into a single computing device and associated operating system typically results in a preempting of the desired functionality by only a single one of the two or more attached human input devices at a given time (for example, two mouse devices plugged into a single computer typically result in cursor control by only one of the devices, at least at a given time).
In contrast, as described herein, the framework 102 may utilize raw data captured by the at least one human input device 104 (as received by way of the interface 118, the input handler 120, and the data extractor 122, if necessary). Thus, the framework 102 may be configured to instruct the at least one application 108 to provide the desired executed commands 110, 112, in a manner which is independently configurable across a plurality of applications represented by the at least one application 108.
For example, in the example of
Of course, such gestures may include some standard gestures known to be associated with multi-touch interactive touch surfaces (e.g., including a “pinch” gesture and/or a “spread” gesture, which may be used to zoom out and zoom in, respectively, in the context of visual displays). However, as already described and referenced above, such standard usages of gestures may typically already be encapsulated and represented in a potentially proprietary format and in a manner designed to interact with an operating system of the at least one computing device 132.
Consequently, since the framework 102 utilizes raw data captured by the at least one human input device 104 and extracted by the data extractor 122, it may be necessary or desirable for the framework 102 to independently obtain or otherwise characterize corresponding gestures. Moreover, as described in more detail below, the framework 102 may be configured to provide user-selectable gesture mappings, which may not otherwise be available in standard or conventional uses of the at least one human input device 104.
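As one simplified, non-limiting sketch of such gesture characterization, the following Python code compares the distance between two contact points across consecutive frames to classify a "pinch" versus a "spread"; the threshold and the frame structure are assumptions made for the example.

import math


def _distance(p1, p2):
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])


def classify_two_finger_gesture(previous_frame, current_frame, threshold=10.0):
    """Return 'pinch', 'spread', or None given two frames of (x, y) contact pairs."""
    if len(previous_frame) != 2 or len(current_frame) != 2:
        return None
    before = _distance(*previous_frame)
    after = _distance(*current_frame)
    if after < before - threshold:
        return "pinch"    # fingers moved together, e.g., to zoom out
    if after > before + threshold:
        return "spread"   # fingers moved apart, e.g., to zoom in
    return None


# Example: fingers move from roughly 360 px apart to 200 px apart -> "pinch".
print(classify_two_finger_gesture([(120, 340), (480, 350)], [(200, 345), (400, 348)]))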
Using the resulting gestures, a command instructor 126 may be configured to instruct the at least one application 108 to provide the desired, executed commands 110, 112 corresponding to the finger motions/actions of the at least one user 106 at the at least one human input device 104. For example, in the example of
Of course, such native commands may vary considerably depending on a type and nature of the at least one application 108. For example, the at least one application 108 may include a map application designed to provide geographical maps. In such contexts, the commands 108b may include various map-related commands for interacting with the location map provided by the at least one application 108. In other example contexts, the at least one application 108 may include a video game, in which case the various commands 108b may be configured for specific actions of characters within the video game in question.
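The following Python sketch illustrates, in a hedged and simplified form, how a command instructor might relate recognized gestures to such native commands and invoke them through an application interface; the gesture-to-command mapping and the execute(...) call are assumptions of the example and stand in for whatever interface (e.g., the API 108a) a given application actually exposes.

class LoggingApplicationApi:
    """Illustrative stand-in for an application's command interface."""

    def execute(self, command, source_id):
        print(f"executing {command} for source {source_id}")
        return command


class CommandInstructor:
    def __init__(self, application_api, gesture_to_command):
        self.api = application_api
        self.gesture_to_command = gesture_to_command  # e.g., {"pinch": "zoom_out"}

    def instruct(self, gesture, identifier):
        command = self.gesture_to_command.get(gesture)
        if command is None:
            return None
        # Each executed command remains correlated with the identifier of its source.
        return self.api.execute(command, source_id=identifier)


instructor = CommandInstructor(LoggingApplicationApi(), {"pinch": "zoom_out", "spread": "zoom_in"})
instructor.instruct("spread", identifier=1)  # prints: executing zoom_in for source 1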
Thus, in
As referenced above with respect to the input handler 120, it may occur that the raw data obtained from the at least one human input device 104 may include a plurality of data streams representing corresponding sets of human input events. In such scenarios, the command instructor 126 may be configured to instruct the at least one application 108 (e.g., using the API 108a in
Consequently, since the command instructor 126 may be provided with gestures from the gesture mapper 124 in a manner which retains the associated identifiers provided therewith, the command instructor 126 may instruct the at least one application 108 to provide the executed commands 110, 112 in a manner which also reflects the corresponding identifiers. For example, in a simple scenario, it may occur that the at least one user 106 represents two collaborating users, using the at least one human input device 104. In such scenarios, the executed command 110 may represent a command desired by a first user, while the executed command 112 may represent a command desired by a second user.
In such scenarios, the command instructor 126 may further instruct the at least one application 108 to display the executed commands 110, 112 in a manner which reflects the correspondence thereof to the associated identifiers, and thus to the users in question. Specifically, for example, the first user may be associated with a first color, while the second user may be associated with a second color, so that the executed commands 110, 112 may be provided in the colors which correspond to the pair of hypothetical users.
Additional examples of manners in which the assigned identifiers may be utilized by the command instructor 126 are provided below, e.g., with respect to
As referenced above with respect to the gesture mapper 124, the command instructor 126 may be flexibly configured so as to provide specific instructions to the at least one application 108 in a manner desired by the at least one user 106 (or other user or administrator). Specifically, as shown, a configuration manager 128 of, or associated with, the framework 102 may be accessible by the at least one user 106 or other user/administrator. Thus, the configuration manager 128 may be configured to store, update, and otherwise manage configuration data 130 which may be utilized to configure a manner and extent to which the raw data from the data extractor 122 is mapped to specific gestures by the gesture mapper 124, and/or a manner in which the command instructor 126 translates specific gestures received from the gesture mapper 124 into instructions to the at least one application 108 to provide the executed commands 110, 112 as instances of corresponding ones of the commands 108b.
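As a minimal, non-limiting sketch, the following Python code shows one possible form of such configuration data, together with a configuration manager that reads and updates it on a per-application basis; the file name, application names, and command names are assumptions chosen for the example.

import copy
import json

DEFAULT_CONFIGURATION = {
    "map_application": {
        "pinch": "zoom_out",
        "spread": "zoom_in",
        "two_finger_drag": "pan",
    },
    "drawing_application": {
        "pinch": "decrease_brush_size",
        "spread": "increase_brush_size",
    },
}


class ConfigurationManager:
    def __init__(self, path="gesture_config.json"):
        self.path = path
        try:
            with open(self.path) as config_file:
                self.data = json.load(config_file)
        except FileNotFoundError:
            self.data = copy.deepcopy(DEFAULT_CONFIGURATION)

    def command_for(self, application, gesture):
        """Look up the command a gesture maps to for a given application, if any."""
        return self.data.get(application, {}).get(gesture)

    def update(self, application, gesture, command):
        """Let a user or administrator remap a gesture for a given application."""
        self.data.setdefault(application, {})[gesture] = command
        with open(self.path, "w") as config_file:
            json.dump(self.data, config_file, indent=2)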
Of course, such configuration options may extend across the various types of human input devices which are compatible with the system 100, as referenced above. For example, it may occur that the at least one human input device 104 outputs un-encapsulated raw data, so that the input handler 120 may directly output such raw data to the gesture mapper 124, so that, in these examples, the data extractor 122 is not required.
Somewhat similarly, it is referenced above that the example of
Although the above explanation illustrates a manner(s) in which conventional human input devices execute as part of a process flow of an operating system, and provides explanation and discussion with respect to interactions with the at least one application 108 that are independent of an operating system, it may be appreciated that the framework 102 also may, if desired, interact with an operating system of the device 132. For example, the command instructor 126 may instruct the at least one application 108 including an operating system thereof, to thereby provide the desired executed command 110. Thus, it may be observed that the framework 102 may be configured to interact with the operating system of the at least one computing device 132, if desired, but that such operations may themselves be independent or separable from command instructions provided to other applications and/or operating systems.
As shown in
Thus, for example, it may be appreciated that two or more subsets of components of the framework 102 may be executed using two or more computing devices of the at least one computing device 132. For example, portions of the framework 102 may be implemented locally to the at least one user 106, while other portions and components of the framework 102 may be implemented remotely, e.g., at a corresponding web server.
Somewhat similarly, it may be appreciated that any two or more of the components of the framework 102 may be combined for execution as a single component. Conversely, any single component of the framework 102 may be executed using two or more subcomponents thereof. For example, the interface 118 may be implemented on a different machine than the remaining components of the framework 102. In other implementations, the interface 118 and the input handler 120 may be executed on a different machine than the data extractor 122, the gesture mapper 124, the command instructor 126, and the configuration manager 128. More generally, similar comments apply to remaining components (122-126) as well, so that, if desired, a completely distributed environment may be implemented.
In the example of
The first human input events may be associated with a first identifier (204). For example, the input handler 120 may be configured to associate a first identifier with all input events received from a first user of the at least one user 106, and/or from a first human input device (or defined portion or aspect thereof) of the at least one input device 104.
Similarly, second human input events may be received from the at least one human input device and from the at least one user (206), and the second human input events may be associated with a second identifier (208). For example, the input handler 120 may receive raw data from the at least one human input device 104 that is associated with a second user of the at least one user 106, and may associate a second identifier therewith. In other example embodiments, the first and second human input events may be received from a single user, e.g., such as when the input events are received from a left hand and a right hand of the same user (e.g., such as when each hand is used in conjunction with a separate input device), or from a finger of the user in conjunction with voice-recognized input events and/or 3-dimensional movements of the user. Further, although the examples just given refer to reception of raw data by the input handler 120, it may be appreciated that, as described above, the input handler 120 may receive encapsulated data from the at least one human input device 104 by way of the interface 118, and may require the data extractor 122 to extract the raw data from its encapsulation within packets defined by a relevant transmission protocol.
Further with respect to
That is, for example, in examples described herein in which the at least one input device 104 includes a multi-touch surface, the received human input events may initially be mapped to specific gestures, where such mappings also may be configured utilizing the configuration manager 128. Then, defined gestures may be related by the command instructor 126 to corresponding, individual ones of the commands 108b. However, in other example implementations, such definitions and related mappings of gestures may not be relevant. For example, where the at least one human input device 104 includes voice recognition, recognized words or phrases from the at least one user 106 may be related directly to individual, corresponding ones of the commands 108b, without need for an intermediate gesture mapping by the gesture mapper 124.
The at least one application may be instructed to execute the commands including correlating each executed command with the first identifier or the second identifier (212). For example, the command instructor 126 may be configured to communicate with the API 108a of the at least one application 108, as in
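The operations just described (202-212) may be illustrated with the following compact, self-contained Python sketch, in which two streams of already-recognized gestures are related to commands and each resulting command remains correlated with its identifier; the gesture and command names are illustrative assumptions.

GESTURE_COMMANDS = {"pinch": "zoom_out", "spread": "zoom_in"}


def process(streams):
    """streams: mapping of identifier -> list of recognized gestures."""
    executed = []
    for identifier, gestures in streams.items():
        for gesture in gestures:
            command = GESTURE_COMMANDS.get(gesture)
            if command is not None:
                # Correlate each executed command with the identifier of its stream.
                executed.append({"command": command, "source_id": identifier})
    return executed


# First stream (identifier 1) spreads; second stream (identifier 2) pinches.
print(process({1: ["spread"], 2: ["pinch"]}))
# -> [{'command': 'zoom_in', 'source_id': 1}, {'command': 'zoom_out', 'source_id': 2}]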
For example, as described in more detail herein, in the example of
In the example of
In other examples, as also described, the at least one user 106 may utilize the configuration manager 128 to modify, create, or otherwise provide the configuration data 130. For example, the configuration manager 128 may provide a graphical user interface (not explicitly illustrated in the example of
Similarly, the configuration manager 128 may utilize the above-referenced graphical user interface to provide a list of potential gestures, so that the at least one user 106 may input specific finger motions or other actions and then correlate such actions with specific ones of the provided gestures, or with newly defined and named gestures provided by the at least one user 106. For example, the configuration manager 128 may include a recording function which is configured to detect and store specified finger motions of the at least one user 106, whereupon the at least one user 106 may relate the recorded finger motion to specific existing or new gestures. Although the examples of
Further, as may be appreciated from the above description, the resulting gestures may be related to specific ones of the commands 108b (or configurable subsets thereof). For example, the commands 108b may include keyboard shortcuts or other specific functions of the application 108, so that the at least one user 106 may simply highlight or otherwise select such commands in conjunction with highlighting or otherwise selecting specific gestures which the user wishes to relate to the selected commands (or vice-versa).
Thus, in the example of
Packets may be captured from the devices in parallel, and raw data may be extracted therefrom (304) representing the various human input events associated with usage of the multiple devices. For example, the input handler 120 may capture Bluetooth packets, or packets formatted according to any relevant protocol in use, so that the data extractor 122 may proceed with the extraction of the raw data from any encapsulation or other formatting thereof which may be used by the relevant protocol. As described, such data capture and extraction may proceed with respect to the multiple input devices in parallel, so that, for example, two or more of the at least one user 106 may collaborate with one another or otherwise utilize and interact with the at least one application within the same or overlapping time periods.
Identifiers may be assigned to the captured raw data (306). For example, the input handler 120 may utilize an identifier which is uniquely associated with a corresponding one of the multiple input devices. Then, any communications received from the corresponding input device may be associated with the pre-designated identifier. In some examples, the identifier may be associated with the incoming data at a time of receipt thereof, and/or may not be associated with the raw data until after the raw data is extracted by the data extractor 122.
The raw data may be sorted, e.g., by the associated identifier, by time of receipt, and/or by spatial context (308). For example, the input handler 120 and/or the data extractor 122 may be provided with overlapping input events received as part of corresponding streams of human input events. To give a simplified example, it may occur that human input events from two users may alternate with respect to one another, so that the input handler 120 and/or the data extractor 122 may be provided with an alternating sequence of human input events, whereupon the input handler 120 and/or the data extractor 122 may be configured to separate the alternating sequence into two distinct data streams corresponding to inputs of the two users.
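A simplified Python sketch of such demultiplexing, assuming each event has already been tagged with its identifier as described above, might sort an interleaved sequence into per-identifier streams as follows; the event structure is an assumption of the example, and sorting by time of receipt or spatial context could be layered on in a similar manner.

from collections import defaultdict


def sort_by_identifier(events):
    """Split one interleaved sequence of tagged events into per-identifier streams."""
    streams = defaultdict(list)
    for event in events:
        streams[event["id"]].append(event)
    return dict(streams)


interleaved = [
    {"id": 1, "x": 10, "y": 20}, {"id": 2, "x": 300, "y": 40},
    {"id": 1, "x": 12, "y": 21}, {"id": 2, "x": 298, "y": 41},
]
print(sort_by_identifier(interleaved))
# -> {1: [two events from stream 1], 2: [two events from stream 2]}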
Similarly, various other criteria and associated techniques may be utilized to sort the received data. For example, as referenced, data received at a certain time or within a certain time window, and/or data received from within a certain area of one or more of the connected multi-touch interactive touch surfaces may be associated with particular input data streams, or subsets thereof. For example, a first user may be associated with a certain role and/or usage rights within a particular time window (e.g., may be designated in a presenter role at the beginning of a presentation). At the end of the designated time window, the same user may be provided with different access rights. Thus, in these and other examples, the gesture mapper 124 may be configured to process received data streams of human input events in a desired and highly configurable fashion.
Gestures may be recognized from the sorted raw data (310). For example, the configuration data 130 may be consulted by the gesture mapper 124 in conjunction with the received human input events, to thereby determine corresponding gestures.
The recognized gestures may thereafter be related to application commands, based on the identifiers (312). For example, the command instructor 126 may receive the recognized gestures, and may again consult the configuration data 130 to relate the recognized gestures to corresponding ones of the commands 108b. As described herein, the identifier associated with a particular data stream may dictate, to some degree, the determined relationship between the recognized gestures and the command 108b.
For example, as described, the first user may be associated with a first role and associated access/usage rights, so that gestures recognized as being received from the first user may (or may not) be related to the commands 108b differently than gestures recognized as being received from a second user. For example, the first user may have insufficient access rights to cause execution of a particular command of the commands 108b, so that the gesture mapper 124 and/or the command instructor 126 may relate a recognized gesture from the first user with a command stating “access denied,” while the same recognized gesture for the second user may result in the desired command execution on the part of the application 108. Additional examples are provided below with respect to
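A hedged, non-limiting sketch of such identifier-dependent command resolution is shown below in Python; the role names, the per-role command sets, and the "access denied" handling are assumptions made purely for illustration.

USER_ROLES = {1: "presenter", 2: "viewer"}          # identifier -> assumed role
ALLOWED_COMMANDS = {
    "presenter": {"zoom_in", "zoom_out", "annotate"},
    "viewer": {"zoom_in", "zoom_out"},
}


def resolve_command(identifier, requested_command):
    """Map the same requested command to different outcomes based on the identifier's rights."""
    role = USER_ROLES.get(identifier, "viewer")
    if requested_command in ALLOWED_COMMANDS.get(role, set()):
        return requested_command
    return "access_denied"   # e.g., surfaced to the requesting user as a notice


print(resolve_command(1, "annotate"))  # -> "annotate"
print(resolve_command(2, "annotate"))  # -> "access_denied"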
The application may be instructed to execute the related commands, and to display a correlation of corresponding identifiers therewith (314). For example, the command instructor 126 may be configured to instruct the application 108, via the API 108a, to execute corresponding commands of the commands 108b, and thereby obtain the executed commands 110, 112. In the example, the executed command 110 may be entered or requested by the first user, while the executed command 112 may be requested by the second user. In the example, the executed commands 110, 112 may therefore be displayed within the GUI 114 in different colors, or may otherwise be visually, audibly, or otherwise provided in a manner which demonstrates correspondence of the executed commands 110, 112 to the corresponding identifiers and associated input devices/users. In other examples, as described with respect to
Of course, if desired, the framework 102 may be configured to provide the executed commands 110, 112 from corresponding first and second users in an identical manner to one another, so that the executed commands 110, 112 may be indistinguishable from the perspective of an observing party. Nonetheless, it may be appreciated that even in such scenarios, the framework 102 may retain data associating the executed commands 110, 112 with corresponding identifiers/devices/users, e.g., for display thereof upon request of one or both of the users, or by any other authorized party.
However, in the example of
In
In contrast,
In the example of
Meanwhile,
That is, as referenced above with respect to
A data extractor 122a may be configured to sort the data based on the assigned user IDs. The gesture mapper 124a may determine whether a given user is authorized to execute the received gesture. If not, corresponding notice may be provided to the user. If so, then the gestures X, Y may be passed to the command instructor 126a, which may determine whether each user is permitted to perform the related command. If so, then the command instructor 126a may provide commands A (for gesture X) and B (for gesture Y) to the application 410.
In
Thus, as referenced above,
Many other implementations are possible. For example, the system 100 of
Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the embodiments.