Inputting text to a computing device without using a physical keyboard or a soft keyboard (e.g., where keys on a touch-sensitive display can be selected) can be challenging. For example, relatively recently, accessory devices for televisions, such as video game consoles, set top boxes, media streaming devices, and the like, have been configured to receive textual input and perform a processing operation based upon such textual input. In an example, an accessory device that streams media can receive a textual query, perform a search over available media based upon the query, and output search results located during the search.
To provide such a query, however, a user typically employs a control device, such as a remote control, a video game controller, or the like, and selects characters one at a time by scrolling through a menu. Thus, if a user desires to set forth the query “movies,” the user individually selects each character from a list of characters presented on the display screen. While this may not be problematic for a relatively small amount of text, provision of a sequence of words may require a significant amount of time, causing the user frustration and decreasing usability of the accessory. Some accessories have been configured to receive and recognize voice input from the user. In noisy environments, however, such voice recognition may be suboptimal. In other examples, conventional remote controls are configured with a plurality of buttons, where each button represents multiple characters. The user can select a particular character by tapping a button an appropriate number of times. Again, however, provision of a relatively long sequence of characters can require pressing several buttons, wherein at least some of such buttons must be pressed numerous times.
Furthermore, accessory devices to televisions have been configured to transmit messages to and receive messages from other computing devices. Users are unlikely to employ a messaging application, however, if entry of characters takes a relatively large amount of time or is otherwise cumbersome.
The following is a brief summary of subject matter that is described in greater detail herein. This summary is not intended to be limiting as to the scope of the claims.
Described herein are various technologies pertaining to identifying a word that is desirably set forth by a user through recognition of a continuous trace set forth by the user in the air. In an example, a user may be viewing a television screen and may, therefore, be positioned at a distance from such television screen. A sensor is configured to capture movement of at least one portion of a body of the user, wherein the portion of the body of the user, for example, may be an arm, a hand, a finger, a head, or the like. The user can move the portion of her body to form a continuous trace. For instance, the user may extend her arm towards the display screen and pivot her arm to form a continuous trace, wherein the continuous trace may be in a user-defined plane (e.g., which is substantially parallel to the display screen). This continuous trace is analogous to a user setting forth strokes over a canvas. A word or words may correspond to the continuous trace, and such word or words can be recognized based at least in part upon the continuous trace. Accordingly, a user can enter text by way of gestures made in the air.
In an exemplary embodiment, a keyboard can be presented on the display screen, wherein the keyboard can be invoked responsive to an invocation gesture. For example, various sensors can monitor action of a user, and an invocation gesture can be identified based upon data output by such sensors. Accordingly, an invocation gesture may be the user positioning herself at a particular location, the user making a gesture with her hand, the user setting forth a voice command, etc. Responsive to detecting the invocation gesture, a keyboard can be presented on the display screen, wherein the keyboard comprises a plurality of character keys, each character key being representative of at least one respective character. In an exemplary embodiment, a user can define size of the keyboard based upon at least one gesture. For instance, the user may draw a rectangle in the air, and the keyboard can be displayed on the display screen in accordance with the size of the rectangle drawn by the user. In another embodiment, the keyboard can be displayed at a standard size.
The user may then move the portion of her body relative to the keyboard, and can employ a continuous sequence of gestures to generate text. In a non-limiting example, the user may desire to set forth the text “hello.” The user can point her finger at a key on the keyboard that is representative of the character “h,” and may thereafter move her arm, hand, and/or finger to form a continuous trace that passes over keys in the keyboard that are representative of the characters “e,” “l,” and “o.” In an example, graphical data can be displayed on the display screen that provides feedback to the user regarding the location of her continuous trace over the keyboard. The continuous trace can then be decoded, such that the word “hello” is identified as being desirably set forth by the user. At least one processing function can be undertaken responsive to the word being identified, including, but not limited to, display of the word to the user, provision of the word to a computer-executable application, transmittal of the word as a portion of a message to another computing device, etc.
The above summary presents a simplified summary in order to provide a basic understanding of some aspects of the systems and/or methods discussed herein. This summary is not an extensive overview of the systems and/or methods discussed herein. It is not intended to identify key/critical elements or to delineate the scope of such systems and/or methods. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
Various technologies pertaining to identifying continuous traces undertaken relative to keys of a keyboard and recognizing words based upon such continuous traces are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it is to be understood that functionality that is described as being carried out by certain system components may be performed by multiple components. Similarly, for instance, a component may be configured to perform functionality that is described as being carried out by multiple components.
Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
Further, as used herein, the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices. Further, as used herein, the term “exemplary” is intended to mean serving as an illustration or example of something, and is not intended to indicate a preference.
With reference now to
In the example shown in
In the example shown in
In an exemplary embodiment, the user 102 may wish to generate text for provision to an application, for transmittal to a contact of the user 102, for performance of a search, etc. As will be described in greater detail herein, the user 102 can invoke the keyboard 108 by performing a predefined action, which can cause the keyboard 108 to be displayed on the display screen 104. Thereafter, the user 102 can move a particular portion of her body relative to keys on the keyboard 108 that are representative of characters included in a word desirably set forth by the user 102. For example, if the user 102 wishes to set forth the word “hello,” the user 102 can move her arm/hand to form a continuous trace that connects a key that is representative of the character “h” to a key that is representative of the character “e,” from the key that is representative of the character “e” to a key that is representative of the character “l,” and from the key that is representative of the character “l” to a key that is representative of the character “o.” It is to be understood that the continuous trace 110 may pass over other keys that are representative of characters not included in the word desirably set forth by the user 102. The continuous trace 110, however, can be decoded to decipher the word that is desirably set forth by the user 102, and such word can be displayed on the display screen 104.
Pursuant to an example, visual feedback can be provided to the user 102, wherein a graphical trail is shown over the keyboard 108 that is representative of the continuous trace 110 performed by the user 102. In summary then, the user 102 can perform natural, continuous gestures in the air, and words desirably set forth by the user 102 can be determined based upon such natural gestures.
With reference now to
The system 200 further includes an invocation recognizer component 204 that is in communication with the receiver component 202. The invocation recognizer component 204 can recognize an invocation command set forth by the user 102 based upon data output by the sensor 106. The user 102 can set forth such invocation command when she desires to generate text. The invocation recognizer component 204 can be configured to recognize at least one of a variety of different types of invocation commands. For instance, the invocation recognizer component 204 can be configured to recognize a spoken command set forth by the user 102, which indicates that the user 102 desires to set forth text. In another example, the invocation recognizer component 204 can recognize positioning of a body of the user 102 in a certain region relative to the sensor 106 as an invocation command. Still further, the invocation recognizer component 204 can recognize a particular gesture set forth by the user 102 as the invocation command. Exemplary types of invocation commands that can be recognized by the invocation recognizer component 204 are set forth below.
The system 200 also includes a display component 206 that is in communication with the invocation recognizer component 204. The display component 206 causes a keyboard to be displayed on the display screen 104 responsive to the invocation recognizer component 204 recognizing an invocation command set forth by the user 102. In an exemplary embodiment, the display component 206 can display the keyboard with a size and/or at a position on the display screen 104 based upon the invocation command determined by the invocation recognizer component 204.
Once the user 102 sees the keyboard on the display screen 104, the user 102 can set forth a continuous trace, which is a movement of at least a portion of the body of the user 102 relative to the keyboard shown on the display screen 104. In an exemplary embodiment, the keyboard shown by the display component 206 includes a plurality of character keys, wherein each character key is representative of a single respective letter. Such keyboard may appear similar to a conventional physical keyboard. In another example, the keyboard shown by the display component 206 may be a compressed keyboard that includes a plurality of character keys, wherein each character key is representative of a respective plurality of characters. Thus, for instance, a first key may be representative of the characters “Q,” “W,” and “E,” while a second key may be representative of the characters “R,” “T,” and “Y.” The keyboard may also include other keys, including a “Spacebar” key, an “Enter” key, a numerical keypad, etc.
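By way of illustration only, one possible in-memory representation of the two keyboard variants described above is sketched below in Python; the key identifiers and character groupings in the compressed layout are illustrative assumptions, not prescribed by this description:

```python
# Full layout: each character key represents a single respective letter.
FULL_LAYOUT = {ch: ch for ch in "qwertyuiopasdfghjklzxcvbnm"}

# Compressed layout: each character key represents several characters.
# The groupings below are illustrative, not prescribed by the text.
COMPRESSED_LAYOUT = {
    "key_qwe": "qwe", "key_rty": "rty", "key_uio": "uio", "key_p": "p",
    "key_asd": "asd", "key_fgh": "fgh", "key_jkl": "jkl",
    "key_zxc": "zxc", "key_vbn": "vbn", "key_m": "m",
}

def characters_for_key(layout, key):
    """Return the character(s) that a given character key represents."""
    return layout[key]
```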
The system 200 further comprises a trace identifier component 208 that is in communication with the receiver component 202, wherein the trace identifier component 208 identifies a continuous trace set forth by the user 102 based upon the movement of the portion of the body of the user 102 captured in the data output by the sensor 106. Thus, for example, the user 102 can move her hand in a continuous manner relative to keys of the keyboard shown on the display screen 104, and such continuous trace can be recognized by the trace identifier component 208. Additionally, to assist the user 102 in setting forth the continuous trace over appropriate keys of the keyboard, the display component 206 can provide visual feedback to the user 102 in the form of a graphical trail, which depicts the continuous trace over the keyboard. Thus, for example, the user 102 can initially position the portion of her body to correspond to a first key on the keyboard, the first key representing a first character in a word desirably set forth by the user 102. The user 102 can then move the portion of her body, and the display component 206 can graphically display the continuous trace set forth by the user 102 on the display screen 104, such that the user 102 can see which keys of the keyboard are being passed over when the user 102 is performing the continuous trace.
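A minimal sketch of hit-testing that a trace identifier such as the trace identifier component 208 might perform is set forth below, assuming the trace has already been mapped to display-screen pixel coordinates; collapsing consecutive hits on the same key yields the ordered sequence of keys that the trace passes over:

```python
def key_at(point, key_rects):
    """Return the key (if any) under a single trace point.

    point:     (x, y) trace position on the display, in pixels.
    key_rects: mapping from key name to its (x, y, width, height) rect.
    """
    px, py = point
    for key, (x, y, w, h) in key_rects.items():
        if x <= px < x + w and y <= py < y + h:
            return key
    return None  # the trace point lies outside the keyboard

def keys_under_trace(trace, key_rects):
    """Collapse a trace into the ordered sequence of keys it passes over."""
    keys = []
    for point in trace:
        key = key_at(point, key_rects)
        if key is not None and (not keys or keys[-1] != key):
            keys.append(key)
    return keys
```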
The trace identifier component 208 can be configured to identify beginning and ending points of a continuous trace set forth by the user 102. In an exemplary embodiment, the trace identifier component 208 can detect a gesture set forth by the user 102 that indicates that the continuous trace has started and/or stopped. For instance, the user 102 can open her hand when setting forth the continuous trace and may close her hand in a fist when the continuous trace is completed. The trace identifier component 208 can recognize such gesture, such that the beginning and ending points of the continuous trace can be identified. In another example, the trace identifier component 208 can recognize voice commands set forth by the user 102 that indicate the start and/or stop of a continuous trace. In still yet another example, the user 102 can employ a first portion of her body to perform the continuous trace and may use a second portion of her body to indicate the start and/or stop of the continuous trace. For instance, the user 102 can use her right hand to perform the continuous trace and can use a gesture with her left hand to identify when the continuous trace is to start and/or stop.
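The open-hand/closed-fist convention described above can be sketched as a simple segmentation over sensor frames; the assumption that the sensor's hand tracker exposes an open/closed boolean per frame is illustrative:

```python
def segment_traces(frames):
    """Split a stream of sensor frames into continuous traces.

    frames: iterable of (position, hand_is_open) tuples, where
            hand_is_open is an assumed per-frame boolean from the
            sensor's hand tracker (open hand = tracing; closed
            fist = the trace is complete).
    Yields one list of positions per completed continuous trace.
    """
    current = []
    for position, hand_is_open in frames:
        if hand_is_open:
            current.append(position)
        elif current:
            yield current  # the fist closed: the trace has ended
            current = []
    if current:  # the sensor stream ended mid-trace
        yield current
```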
Further, in another exemplary embodiment, the trace identifier component 208 can identify a continuous trace set forth by the user 102 based upon an entity to which the user 102 is pointing. In other words, the continuous trace is defined by the entity to which the user 102 is pointing instead of or in addition to the movement of the portion of the body of the user 102.
The system 200 further comprises a decoder component 210 that receives the trace identified by the trace identifier component 208 and decodes such trace to identify a word that is desirably set forth by the user 102. In an exemplary embodiment, the decoder component 210 can comprise a statistical decoder that probabilistically selects a word based upon the continuous trace set forth by the user 102. For instance, a continuous trace set forth by the user 102 can be converted to her intended word or sequence of words, wherein the statistical decoder takes into account both how likely it is that those strokes were produced by a user intending such words (e.g., how well the strokes match the intended word), and how likely those words are, in fact, the words intended by the user (e.g., “chewing gum” is more likely than “chewing gun”).
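A minimal sketch of such a statistical decoder follows. It scores each vocabulary word by combining a Gaussian shape likelihood (how well the trace matches the polyline through the word's key centers, after both are resampled to a fixed number of points) with a unigram log-prior (how likely the word itself is). The vocabulary, the priors, and the value of sigma are illustrative assumptions; a production decoder would likely use richer models:

```python
import math

def resample(path, n=32):
    """Resample a polyline to n evenly spaced points (linear interp.)."""
    if len(path) < 2:
        return [path[0]] * n
    d = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        t = total * i / (n - 1)
        while j < len(d) - 2 and d[j + 1] < t:
            j += 1
        seg = (d[j + 1] - d[j]) or 1.0
        a = (t - d[j]) / seg
        (x0, y0), (x1, y1) = path[j], path[j + 1]
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

def decode(trace, vocabulary, key_centers, word_log_prior, sigma=40.0):
    """Probabilistically select the word intended by a continuous trace.

    Combines (i) how well the trace matches the polyline through the
    word's key centers with (ii) how likely the word itself is.
    """
    best_word, best_score = None, -math.inf
    t = resample(trace)
    for word in vocabulary:
        template = resample([key_centers[ch] for ch in word])
        mse = sum((tx - wx) ** 2 + (ty - wy) ** 2
                  for (tx, ty), (wx, wy) in zip(t, template)) / len(t)
        score = -mse / (2 * sigma ** 2) + word_log_prior.get(word, -12.0)
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

The prior term is what lets the decoder prefer “chewing gum” over “chewing gun” even when the traces match both words about equally well.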
A plurality of applications 212-214 can be in communication with the system 200. Such applications 212-214 may include, for example, a word processing application, a text messaging application, a search application (that receives a word or set of words set forth by the user 102 and executes a search over contents of a data repository based upon such word(s)), etc. The system 200 can additionally comprise an output component 216 that outputs a word output by the decoder component 210 to at least one of the applications 212-214. Additionally, the display component 206 can cause a word output by the decoder component 210 to be displayed on the display screen 104, wherein the user 102 can confirm that the decoder component 210 has correctly decoded the continuous trace or can indicate that the decoder component 210 has incorrectly decoded the continuous trace.
The system 200 can further comprise a feedback component 218 that provides the user 102 with additional feedback pertaining to operation of the decoder component 210 and/or the trace identifier component 208. For example, the feedback component 218 can cause a speaker (not shown) to output audio data that is indicative of aspects of the continuous trace identified by the trace identifier component 208. For instance, the feedback component 218 can output data that is indicative of a velocity of movement of the portion of the body of the user 102, acceleration of the movement of the portion of the body of the user 102, direction of movement of the portion of the body of the user 102, angular velocity/acceleration of the portion of the body of the user 102, etc. The feedback component 218 can provide such feedback to assist the user 102 in connection with developing muscle memory when setting forth continuous traces corresponding to words. Types of feedback that can be provided via the feedback component 218 include auditory feedback, such as pitch, volume, certain sounds, etc. Accordingly, the user 102 can be provided with both visual and auditory feedback pertaining to a continuous trace set forth by the user 102 to assist the user 102 in developing muscle memory for continuous traces.
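As an illustration of such auditory feedback, the mapping below turns the speed of the traced movement into a pitch and a volume; the specific mapping constants are assumptions, chosen only to make the idea concrete:

```python
import math

def audio_feedback(prev_point, point, dt,
                   base_pitch_hz=220.0, hz_per_speed=0.5):
    """Map trace kinematics to (pitch_hz, volume) for auditory feedback.

    Faster movement raises the pitch, and volume scales with speed, so
    the user can 'hear' the shape and tempo of the trace being formed.
    dt is the time between the two sensor frames, in seconds.
    """
    dx, dy = point[0] - prev_point[0], point[1] - prev_point[1]
    speed = math.hypot(dx, dy) / dt  # pixels per second
    pitch_hz = base_pitch_hz + hz_per_speed * speed
    volume = min(1.0, speed / 2000.0)
    return pitch_hz, volume
```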
Actions that can be undertaken by the invocation recognizer component 204 are now set forth in greater detail. The invocation recognizer component 204 can be configured to recognize certain gestures and/or voice commands performed/output by the user 102 that indicate when the user 102 wishes to set forth a continuous trace. In an exemplary embodiment, the user 102 can set forth a command that defines a particular location relative to the sensor 106, wherein when the user 102 is at such position, the user 102 wishes to set forth a continuous trace to generate text. Accordingly, when the invocation recognizer component 204 receives data output by the sensor 106 that indicates that the user 102 is in the predefined location, the invocation recognizer component 204 can recognize that the user 102 desires to generate text through continuous strokes.
In another example, the user 102 can define a virtual input region. For example, the user 102 can set forth a command (e.g., voice, gesture, or the like) that indicates a desire to begin generating text by way of a continuous sequence of gestures (e.g., in the air). The user 102 may then define a virtual input region, for instance, by drawing a square input region in the air with a particular finger. The sensor 106 can output data that is indicative of the position of the virtual input region, and the boundaries of the input region can be recognized by the invocation recognizer component 204. The display component 206 can cause the keyboard to be displayed such that it corresponds with the boundaries of the input region defined by the user 102. Thus, the keyboard is shown on the display screen 104 to fit the size of the input region defined by the user 102.
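As a non-limiting illustration, fitting the keyboard to the boundaries of the user-defined input region might be sketched as follows; the meters-to-pixels scale, minimum keyboard size, screen resolution, and placement along the bottom of the display are assumed values, not prescribed herein:

```python
def keyboard_rect_from_region(region_points, meters_to_pixels=1500.0,
                              min_size=(240, 90), screen=(1920, 1080)):
    """Map the boundaries of a user-defined input region (drawn in the
    air) to on-screen keyboard bounds.

    region_points: 2D points (input-plane coordinates, in meters)
                   sampled while the user drew the region.
    Returns (x, y, width, height) in pixels, clamped to the screen and
    to a minimum usable size, centered along the bottom of the display.
    """
    xs = [p[0] for p in region_points]
    ys = [p[1] for p in region_points]
    w = min(max((max(xs) - min(xs)) * meters_to_pixels, min_size[0]), screen[0])
    h = min(max((max(ys) - min(ys)) * meters_to_pixels, min_size[1]), screen[1])
    return ((screen[0] - w) / 2, screen[1] - h, w, h)
```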
The depth of the plane defined by the input region can be utilized by the trace identifier component 208 to identify when the user 102 desires to set forth a continuous trace. For instance, when the finger of the user is within some threshold distance from such plane (and inside the boundaries of the input region), the trace identifier component 208 can recognize a movement as a portion of a continuous trace. In yet another exemplary embodiment, the user 102 may desire to use position of her head to set forth continuous traces. In such an embodiment, the user 102 can define a square input region near her head (based upon movement of her head, definition of the input region via hands or a finger, etc.). When the head of the user 102 is in such input region, the invocation recognizer component 204 can recognize such action as being an invocation, causing the trace identifier component 208 to interpret movements of the head of the user 102 as a portion of a continuous trace.
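A sketch of the depth test described above is given below, assuming the input region is represented by an origin point and two orthonormal spanning vectors, with the depth tolerance as an assumed constant:

```python
import numpy as np

def is_tracing(finger_pos, plane_origin, plane_u, plane_v,
               bounds, depth_threshold=0.08):
    """Return True if the finger is within the depth threshold of the
    user-defined input plane and inside the region's boundaries.

    plane_u/v: orthonormal vectors spanning the input region's plane.
    bounds:    ((u_min, u_max), (v_min, v_max)) extents of the region.
    depth_threshold: tolerance in meters (an assumed value).
    """
    rel = np.asarray(finger_pos, dtype=float) - plane_origin
    normal = np.cross(plane_u, plane_v)
    if abs(rel @ normal) > depth_threshold:
        return False  # too far in front of or behind the plane
    u, v = rel @ plane_u, rel @ plane_v
    (u_min, u_max), (v_min, v_max) = bounds
    return u_min <= u <= u_max and v_min <= v <= v_max
```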
In still yet another exemplary embodiment, the user 102 can define an input region near her head, and the invocation recognizer component 204 can recognize that the user 102 desires to set forth a continuous trace when the user 102 enters the input region. Thereafter, the trace identifier component 208 can be configured to identify direction of gaze of the eyes of the user 102, such that the user 102 can employ eye gaze to generate continuous traces (e.g., where a blink can indicate a start and stop of the trace). Further, the trace identifier component 208 can identify when the continuous trace has completed based upon depth data output by the sensor 106. For instance, the user 102 can position her hand near the input region noted above when performing the continuous trace, and can move her hand out of the input region when the continuous trace has completed (e.g., move her hand closer to or further away from the display screen 104 and/or the sensor 106).
With reference now to
Furthermore, the decoder component 210 can optionally include a language model 304 for a particular language, such as English, Japanese, German, or the like. The language model 304 can be employed to probabilistically disambiguate between potential words based upon previous words set forth by the user and/or the language modeled by the language model 304.
The speech recognizer component 306 can be configured to receive spoken utterances of the user 102 and recognize words therein. In an exemplary embodiment, the user 102 can verbally output words while performing a continuous trace relative to the keyboard shown on the display screen 104, such that the spoken words supplement the continuous trace and vice versa. Thus, for example, the gesture model 302 can receive an indication of a most probable word output by the speech recognizer component 306 (where the spoken word was initially received from a microphone) and can utilize such output to further assist in decoding a continuous trace set forth in the air by the user 102. In another embodiment, the speech recognizer component 306 can receive a most probable word output by the gesture model 302 based upon a continuous trace identified by the trace identifier component 208, and can utilize such output as a feature for decoding the spoken word. The utilization of the speech recognizer component 306, the gesture model 302, and the language model 304 can enhance accuracy of decoding continuous traces.
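One simple way to combine the two modalities is a log-linear fusion of per-word scores from the gesture model 302 and the speech recognizer component 306; the weighting and the floor assigned to words that one model did not propose are illustrative assumptions:

```python
def fuse_word_scores(gesture_scores, speech_scores, gesture_weight=0.6):
    """Log-linear fusion of per-word scores from the two decoders.

    gesture_scores / speech_scores: dicts mapping candidate words to
    log-probabilities. The weight and the floor for words that one
    model did not propose are illustrative choices.
    """
    floor = -20.0
    candidates = set(gesture_scores) | set(speech_scores)
    fused = {w: gesture_weight * gesture_scores.get(w, floor)
                + (1.0 - gesture_weight) * speech_scores.get(w, floor)
             for w in candidates}
    return max(fused, key=fused.get)

# "gum" wins over "gun" because both modalities support it.
print(fuse_word_scores({"gum": -1.2, "gun": -1.4},
                       {"gum": -0.8, "gun": -3.0}))
```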
Now referring to
The user 102 may then continuously move the portion of her body from the key 432 to the key 406, which is representative of the character “e.” Without pausing at the key 406, the user 102 can cause the portion of her body to move such that the portion of her body transitions to correspond to the key 438, which is representative of the character “l.” Again, without pausing, the user 102 can move the portion of her body such that it corresponds with the key 418, which is representative of the character “o.” This movement of the body of the user 102 creates a continuous trace 454, which begins at the key 432, reaches the key 406, turns to reach the key 438, and then completes upon reaching the key 418. The trace identifier component 208 can recognize the continuous trace 454 based upon data output by the sensor 106. The decoder component 210 can decode the continuous trace 454 and identify the word “hello” that is desirably set forth by the user 102. The output component 216 can then output the word to at least one of the applications 212-214. While the keyboard 400 is shown as including only character keys, it is to be understood that the keyboard 400 may include other keys, such as a “Spacebar” key, an “Enter” key, a numerical keypad, etc.
With reference now to
Continuing with the example set forth above, the user 102 may desire to generate the word “hello” through a continuous trace. For instance, the invocation recognizer component 204 can recognize that the user 102 desires to generate text by setting forth a sequence of strokes with the body of the user 102. The user 102 may then position an appropriate portion of her body (e.g., an arm/hand), such that the portion of her body corresponds with the key 512, which is representative of the character “h.” For instance, the display component 206 can provide a visual indication that the arm of the user 102 corresponds with the key 512. The user 102 may then move her arm from the key 512 to the key 502, which is representative of the character “e.” The user 102 may then move her arm, without pausing on the key 502, back to the key 512, which is representative of the character “l.” The user 102 may then pivot her arm upward such that it reaches the key 506, which is representative of the character “o.” By way of a gesture, moving out of the invocation region, etc., the user 102 can indicate that the continuous trace ceases at the key 506. The trace identifier component 208 can recognize a continuous trace 518 and the decoder component 210 can decode the continuous trace 518 to identify the word “hello.” The output component 216 may then output the word “hello” to at least one of the applications 212-214.
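Decoding on such a compressed keyboard can be sketched as matching the observed key sequence against each vocabulary word's key sequence, with consecutive repeats collapsed (since, as in the “hello” example above, a continuous trace need not revisit a key for a doubled letter); the key-to-character groupings below are hypothetical, chosen only to mirror the example:

```python
def key_sequence_for_word(word, char_to_key):
    """Map a word to its key sequence, collapsing consecutive repeats,
    since a continuous trace need not revisit a key for doubled letters
    (or for adjacent letters sharing one compressed key)."""
    keys = [char_to_key[ch] for ch in word]
    collapsed = [keys[0]]
    for k in keys[1:]:
        if k != collapsed[-1]:
            collapsed.append(k)
    return collapsed

def candidate_words(observed_keys, vocabulary, char_to_key):
    """Vocabulary words whose collapsed key sequence matches the trace."""
    return [w for w in vocabulary
            if key_sequence_for_word(w, char_to_key) == list(observed_keys)]

# Hypothetical groupings mirroring the example: key 512 carries "h",
# "i", and "l"; key 502 carries "e"; key 506 carries "o".
char_to_key = {ch: key
               for key, chars in {"512": "hil", "502": "e", "506": "o"}.items()
               for ch in chars}
print(candidate_words(["512", "502", "512", "506"],
                      ["hello", "hill"], char_to_key))  # -> ['hello']
```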
With reference now to
As movement of the user 102 may be imprecise, however, the decoder component 210 can be configured to cause the display component 206 to display a plurality of possible words corresponding to the continuous trace 602 set forth by the user 102. For instance, the decoder component 210 can identify the words “dog,” “dig,” “dug,” and “fog” as being the four most probable words that correspond to the continuous trace 602. The user 102 may then indicate, through voice command, gesture, or the like, that the word “dog” was the word desirably set forth by the user 102, thereby causing the output component 216 to output the word “dog” to at least one of the applications 212-214. Additionally, this information can be provided as feedback to the decoder component 210, such that operation of the decoder component 210 can improve as the user 102 continues to use the system 200.
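Presenting the several most probable words, rather than committing to a single one, can be sketched as a top-k selection over the decoder's word scores; the scores shown are invented solely to illustrate the “dog” example above:

```python
import heapq

def top_candidates(word_scores, k=4):
    """Return the k most probable words so the user can confirm the
    intended one by voice command, gesture, or the like."""
    return heapq.nlargest(k, word_scores, key=word_scores.get)

# Scores invented purely to illustrate the example in the text.
scores = {"dog": -1.1, "dig": -1.6, "dug": -1.9, "fog": -2.2, "dot": -4.0}
print(top_candidates(scores))  # -> ['dog', 'dig', 'dug', 'fog']
```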
While not shown, it is to be understood that marking menus can be utilized in connection with generation of text by way of gestures, wherein a marking menu refers to temporary presentation of a selectable key responsive to the user selecting a key on a virtual keyboard. For instance, a key on the keyboard 400 can represent a plurality of punctuation characters; when the user selects such key, a plurality of selectable keys can be displayed (e.g., as an overlay to the keyboard 400), wherein each key represents a respective punctuation character.
There are numerous techniques that can be employed to invoke a marking menu associated with a particular key. In an exemplary embodiment, the user can position the portion of her body such that it corresponds to (e.g., points at) the particular key for some threshold amount of time. This can indicate a selection of the particular key, which can cause several other selectable keys to overlay the keyboard 400. If the user chooses not to select one of such selectable keys (e.g., the user points to a different portion of the keyboard 400), then the marking menu can cease to be displayed. The user 102 can select one of the selectable keys of the marking menu by, for instance, pointing to such key for a threshold amount of time, moving the portion of her body such that a continuous trace corresponding to such movement passes over the key, using a voice command, etc. In another exemplary embodiment, the user 102 can invoke the marking menu with respect to a particular key by way of a voice command. For example, the user may be generating a word through a sequence of gestures, and may wish to cause a semicolon to follow the word. To invoke an appropriate marking menu, while performing the sequence of gestures, the user 102 can say “punctuation” (for example), which can cause a marking menu to be presented. The user 102 may then select a key corresponding to the semicolon by pointing to such key, performing a gesture over such key, etc. In yet another exemplary embodiment, eye gaze tracking techniques can be used to invoke marking menus, wherein if the user 102 continuously looks at a particular key for a threshold amount of time, the marking menu is invoked.
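The dwell-time technique described first might be sketched as a small state machine fed once per sensor frame; the threshold value is an assumption:

```python
import time

class DwellDetector:
    """Invokes a marking menu when the user points at the same key for
    a threshold amount of time (the threshold is an assumed value)."""

    def __init__(self, threshold_s=0.8):
        self.threshold_s = threshold_s
        self._key, self._since, self._fired = None, 0.0, False

    def update(self, key, now=None):
        """Feed the key currently pointed at (or None). Returns the key
        whose marking menu should open once the dwell threshold passes."""
        now = time.monotonic() if now is None else now
        if key != self._key:  # pointing moved: restart the dwell timer
            self._key, self._since, self._fired = key, now, False
            return None
        if (key is not None and not self._fired
                and now - self._since >= self.threshold_s):
            self._fired = True  # fire once per dwell
            return key
        return None
```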
Turning now to
Again, in the example shown in
Moreover, the acts described herein may be computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include a routine, a sub-routine, a program, a thread of execution, and/or the like. Still further, results of acts of the methodologies can be stored in a computer-readable medium, displayed on a display device, and/or the like.
With reference now to
At 806, responsive to receiving the data, a continuous trace is identified. At 808, a word is identified based at least in part upon the continuous trace, and at 810, at least one processing function is executed based at least in part upon the identifying of the word. For instance, the at least one processing function may be displaying the word on the display screen. In another example, the at least one processing function can be outputting the word to an application executing on a computing device.
As indicated above, prior to identifying the continuous trace, an invocation command can be detected. Responsive to the detection of the invocation command, a keyboard can be displayed on a portion of the display screen, wherein the keyboard comprises a plurality of character keys, each character key in the plurality of character keys being representative of at least one respective character. Accordingly, the continuous trace is performed relative to character keys in the keyboard. Specifically, it can be detected that the continuous trace corresponds to the portion of the display screen where the keyboard is displayed. The word desirably set forth by the user can be identified based at least in part upon identifying a first key over which the continuous trace passes and identifying a second key over which the continuous trace passes. Therefore, the word that is identified comprises a first character that is represented by the first key and a second character that is represented by the second key. The methodology 800 completes at 812.
Now referring to
If, however, an invocation gesture is detected at 908 based upon the first plurality of images and the first data received from the depth sensor, then the methodology 900 proceeds to 910, where, responsive to detecting the invocation gesture, a keyboard is displayed on the display screen, wherein the keyboard comprises a plurality of character keys, each character key being representative of at least one respective character.
At 912, a second plurality of images is received from the camera, wherein the second plurality of images captures movement of the user relative to the display screen. At 914, second data is received from the depth sensor, wherein the second plurality of images and the second data capture movement of an arm of the user relative to keys of the keyboard. This movement of the arm is continuous in nature in that the arm need not pause over keys that represent characters included in a word desirably set forth by the user.
At 916, a continuous trace is identified based upon the second plurality of images and the second data. At 918, a word is identified based upon the continuous trace, wherein the word includes a first character represented by a first character key over which the continuous trace passed and a second character represented by a second character key over which the continuous trace passed. The methodology 900 completes at 920.
Referring now to
The computing device 1000 additionally includes a data store 1008 that is accessible by the processor 1002 by way of the system bus 1006. The data store 1008 may include executable instructions, imagery, language models, etc. The computing device 1000 also includes an input interface 1010 that allows external devices to communicate with the computing device 1000. For instance, the input interface 1010 may be used to receive instructions from an external computer device, from a user, etc. The computing device 1000 also includes an output interface 1012 that interfaces the computing device 1000 with one or more external devices. For example, the computing device 1000 may display text, images, etc. by way of the output interface 1012.
It is contemplated that the external devices that communicate with the computing device 1000 via the input interface 1010 and the output interface 1012 can be included in an environment that provides substantially any type of user interface with which a user can interact. Examples of user interface types include graphical user interfaces, natural user interfaces, and so forth. For instance, a graphical user interface may accept input from a user employing input device(s) such as a keyboard, mouse, remote control, or the like and provide output on an output device such as a display. Further, a natural user interface may enable a user to interact with the computing device 1000 in a manner free from constraints imposed by input devices such as keyboards, mice, remote controls, and the like. Rather, a natural user interface can rely on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, vision, machine intelligence, and so forth.
Additionally, while illustrated as a single system, it is to be understood that the computing device 1000 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 1000.
Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer-readable storage media. A computer-readable storage medium can be any available storage medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media, including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methodologies for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.