Embodiments of the invention relate to systems and methods for integrating physical input devices with virtual input devices displayed on a touch-sensitive screen. In some implementations, a physical input device (e.g., a keyboard) is physically and communicatively paired with a computing device having a touch-sensitive display. The touch-sensitive display can display a virtual keyboard having a number of characters, emoji, predictive word selections, and more, along the edge of the touch-sensitive display nearest to the physical keyboard such that more input options are effectively at the user's disposal. That is, buttons from the physical keyboard and virtual buttons or keys on the touch-sensitive display are all at a user's fingertips for a more efficient and richer user interface (UI).
In certain embodiments, an input device includes a housing, a plurality of physical keys disposed in the housing, and a processor to generate an output signal in response to one of the plurality of physical keys being activated, and control an operation of one or more virtual keys displayed on a touch-sensitive display, the physical keys and the one or more virtual keys configured for concurrent operation. The processor can be further configured to communicate with a software application operating on the touch-sensitive display and control a selection of the one or more virtual keys based on features of the software application. In some embodiments, the virtual keys can be one or more emojis. The emojis can include a context-specific emoji corresponding to at least a portion of a recently input word or phrase on the plurality of physical keys, trending emojis associated with a social media platform, recently used emojis, as well as a second set of emojis dynamically generated in response to a selection of a first emoji, where the second set of emojis is contextually related to the first emoji.
In some embodiments, the one or more virtual buttons can include one or more function keys or predictive words or phrases corresponding to an input on the plurality of physical keys. The plurality of physical keys can include a key to enable and disable the one or more virtual keys. The processor can be configured to control an aesthetic presentation of the virtual keys including a color scheme.
In further embodiments, a method of operating an input device includes generating, by a processor, an output signal corresponding to an activation of one or more physical keys on the input device, sending, by the processor, the output signal to a second input device having a touch-sensitive display, sending a control signal, by the processor, to generate and display a virtual key on the touch-sensitive display, and controlling an operation of the virtual key displayed on a touch-sensitive display, where the physical key and the virtual key are concurrently operable.
In some implementations, the method includes associating an emoji with the virtual key. The emoji can be a context-specific emoji corresponding to at least a portion of a recently input word or phrase output by the input device. The method may include determining a context of the first emoji based on at least a portion of a recently input word or phrase output by the input device, generating a second control signal, by the processor, configured to generate and display a second virtual key on the touch-sensitive display, and associating a second emoji with the second virtual key, wherein the second emoji is related to the first emoji based on the context of the first emoji. The emoji can be a top trending emoji on a social media platform or a recently used emoji. The virtual key can include a predictive word or phrase corresponding to an output of the input device.
In some embodiments, the method includes receiving an input corresponding to an activation of an on/off key on the input device, generating, by the processor, a second output signal corresponding to the activation of an on/off key, and sending the second output signal to the second input device, wherein the second output signal successively enables and disables the virtual key. The method may further include controlling an aesthetic presentation of the virtual keys including a color scheme based on aspects and/or metadata of/from the input device.
In certain embodiments, a method includes displaying a virtual keyboard having a plurality of keys on a touch-sensitive display, receiving an input signal corresponding to a selection of at least one of the plurality of keys on the virtual keyboard, receiving an output signal from a peripheral input device having a plurality of keys, and generating and displaying an output on the touch-sensitive display corresponding to the input signal and output signal. The processor concurrently displays the virtual keyboard, processes and displays data associated with the input signal in the displayed output, and processes and displays data associated with the output signal in the displayed output.
In some embodiments, the method includes associating an emoji with the virtual keyboard. The emoji can be context specific and may correspond to at least a portion of a recently input word or phrase output by the peripheral input device. The emoji can be a top trending emoji on a social media platform or a recently used emoji. In some cases, the virtual keyboard includes a selectable predictive word or phrase corresponding to a recently input word or phrase output by the peripheral input device.
In further embodiments, the method includes receiving, by the processor, a second output signal corresponding to the activation of an on/off key on the peripheral input device, and enabling or disabling the virtual keyboard in response to receiving the second output signal. The method can further include controlling an aesthetic presentation of the virtual keyboard including a color scheme based on metadata received from the peripheral input device.
Embodiments of the invention relate to systems and methods for integrating physical input devices with virtual input devices displayed on a touch-sensitive screen. In some implementations, a physical input device (e.g., a keyboard) is physically and communicatively paired with a computing device having a touch-sensitive display. The touch-sensitive display can display a virtual keyboard having a number of characters, emoji, predictive word selections, and more, along the edge of the touch-sensitive display nearest to the physical keyboard such that more input options are effectively at the user's disposal. That is, buttons from the physical keyboard and virtual buttons or keys on the touch-sensitive display are all at a user's fingertips for a more efficient and richer user interface (UI), making for smart, adaptive, context-aware input devices with the fusion of virtual and physical components.
Some embodiments relate to an input device (e.g., keyboard) communicatively coupled to a tablet or smart device having a touch-sensitive display. The keyboard can include a housing, a plurality of physical keys disposed in the housing, and a processor to generate an output signal in response to one of the plurality of physical keys being activated, and control an operation of one or more virtual keys displayed on the touch-sensitive display, the physical keys and the one or more virtual keys configured for concurrent operation. The processor can be further configured to communicate with a software application operating on the touch-sensitive display and control a selection of the one or more virtual keys based on features of the software application. In some embodiments, the virtual keys can be one or more emojis. The emojis can include a context-specific emoji corresponding to at least a portion of a recently input word or phrase on the plurality of physical keys, trending emojis associated with a social media platform, recently used emojis, as well as a second set of emojis dynamically generated in response to a selection of a first emoji, where the second set of emojis is contextually related to the first emoji.
Certain embodiments relate to a computing device having a touch-sensitive display that can perform a method that includes displaying a virtual keyboard having a plurality of keys on a touch-sensitive display, receiving an input signal corresponding to a selection of at least one of the plurality of keys on the virtual keyboard, receiving an output signal from a peripheral input device having a plurality of keys, and generating and displaying an output on the touch-sensitive display corresponding to the input signal and output signal. The processor concurrently displays the virtual keyboard, processes and displays data associated with the input signal in the displayed output, and processes and displays data associated with the output signal in the displayed output.
It should be noted that the embodiments described herein are not all-inclusive. Any input device or computing device with a display can utilize the inventive concepts described herein. Although this document primarily focuses on keyboards (i.e., more broadly, input devices) and tablet computers, it should be understood that the inventive concepts can cover phablets, desktops, living room keyboards (i.e., all keyboards), smart phones, smart watches, and the like.
As illustrated in
Embodiments of the present invention assist users with entry of both text and other symbols. By utilizing both a physical keyboard, which provides a plurality of fixed, physical data entry keys, and the touch-screen capabilities of the tablet, a seamless flow of inputs from the physical keyboard 110 and the utility bar 120 can be achieved. As described herein, the elements of the utility bar 120 can be populated based on inputs received through the physical keyboard. Thus, embodiments provide a utility bar, which can be positioned on the screen of the tablet adjacent or close to the physical keyboard, and together with the physical keyboard, can be used for a wide variety of types of inputs that are richer and more intelligent than those provided using conventional techniques. As will be described in additional detail in subsequent sections, the utility bar is not limited to the display of emoji, but can be utilized to display letters, words, other types of text, richer emoji, stickers, graphic files such as GIFs, and the like. By providing the ability to display, and utilize as inputs, elements that are not available using conventional physical keyboards, the flexibility of the input process available to the user is increased. Additionally, software operating in conjunction with embodiments of the present invention can provide smart selections that enable the user to quickly select inputs from a pre-populated or pre-seeded list. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
Although not illustrated in
The media, including emojis, that are displayed in the utility bar can be presented in response to user input, web trends, or the like. Predictive emojis can be based on the output of the physical keyboard and draw on a word suggestion lexicon. The predictive emoji can be based on simple word pairs, word-category pairs, history for the user, and the like. For example, typing “piz” could result in a predictive emoji illustrating a slice of pizza. Typing “Christm” could result in emoji related to the Christmas holiday season, including snowmen with Santa hats, and the like.
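The prefix-driven predictive emoji behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the lexicon contents, function name, and prefix-matching rule are all assumptions for the example.

```python
# Hypothetical word-to-emoji lexicon; real systems would draw on a much
# larger word suggestion lexicon, user history, and word-category pairs.
EMOJI_LEXICON = {
    "pizza": ["🍕"],
    "christmas": ["🎄", "⛄", "🎅"],
    "rain": ["🌧️", "☔"],
}

def predictive_emojis(partial_word, lexicon=EMOJI_LEXICON, limit=5):
    """Return emojis for lexicon words that start with the typed prefix."""
    prefix = partial_word.lower()
    results = []
    for word, emojis in lexicon.items():
        if prefix and word.startswith(prefix):
            results.extend(emojis)
    return results[:limit]
```

With this sketch, typing "piz" surfaces the pizza emoji, and "christm" surfaces the holiday-related set, mirroring the examples in the text.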
Trending emoji can be displayed based on trend analysis for internet searches. The inventors have determined that emoji demonstrate trending behaviors and the trending emoji displayed on the utility bar could be based on knowledge of these trends and presented to the user based on global trends, app specific trends, friend trends, celebrity usage trends, or the like. A top trending emoji can include a current most popular emoji, a set of most popular emoji (i.e., more than one), most popular emoji of a particular type (e.g., smiley emojis, celebrity emojis, etc.), and the like. Those of ordinary skill in the art would understand the many variations and possibilities with respect to top trending emojis.
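One plausible way to rank trending emojis from usage data is a simple frequency count with an optional type filter; the event format and function name below are illustrative assumptions, standing in for real global, app-specific, or friend-based trend feeds.

```python
from collections import Counter

def top_trending(usage_events, n=3, category=None):
    """Rank emojis by usage count; optionally restrict to one category
    (e.g., 'smiley'). usage_events: iterable of (emoji, category) pairs."""
    counts = Counter(
        emoji for emoji, cat in usage_events
        if category is None or cat == category
    )
    # most_common preserves first-seen order among ties
    return [emoji for emoji, _ in counts.most_common(n)]
```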
Although emojis are illustrated in
As illustrated in
Referring to
In some embodiments, a software application creates the seamless integration of physical and virtual input devices illustrated in
As illustrated in
In conventional keyboard/tablet implementations, the process to insert elements other than text in communications is not seamless. Typically, after some text is entered, the user selects the emoji or graphic icon, which then modifies the screen to provide emoji or graphics. These emoji or graphics can be searched to find the desired element, the user selects/copies the desired element, and then pastes the desired element into the text window. In contrast with this lengthy and cumbersome process, embodiments of the present invention enable a user to type in text using the physical keyboard and then add the desired elements from the utility bar by simply clicking on the desired element, seamlessly providing for data entry from the physical keyboard and the OSK.
As an example, a user may enter the text: "It is raining." In response to this text entry, a set of emojis related to rain and inclement weather can be displayed on the utility bar. Additionally, a database of images/videos, including, for example, GIFs, stickers, or other images, either stored locally or remotely, can be searched so that a set of images can be displayed on the utility bar. The user can then select one of these elements to append to the sentence "It is raining." In order to generate advertising revenue, an app designer can provide seeded expressions and elements. For instance, if the user enters text related to the weather, an umbrella with a product logo could be displayed as an emoji, an image of a person wearing a hat with a product logo could be displayed as a graphic, or the like. Personalization of the elements on the utility bar can also be implemented, enabling a user to create graphics or emoji that can be used in a personalized manner. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
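Merging contextual suggestions with advertiser-seeded elements, as in the umbrella example above, could be sketched like this; the trigger-word campaign table and the placement-first policy are assumptions for illustration only.

```python
def populate_utility_bar(context_words, suggestions, seeded_campaigns, max_items=6):
    """Build the utility bar contents from typed context words.
    seeded_campaigns: {trigger_word: sponsored_element} supplied by an app
    designer; sponsored elements are placed ahead of generic suggestions."""
    bar = []
    for word in context_words:
        if word in seeded_campaigns:
            bar.append(seeded_campaigns[word])  # seeded/sponsored element
    bar.extend(s for s in suggestions if s not in bar)
    return bar[:max_items]
```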
In some embodiments, a keystroke on the physical keyboard can be utilized to not only modify the number of rows of icons presented by the OSK, but to rearrange the order in which the icons are presented, expand a given section of the utility bar, or the like. As an example, although
In some embodiments, a second input device, such as a computer mouse, can cause the virtual keyboard to amplify certain media (e.g., emojis). Amplification can be used for a wide variety of applications beyond modifying emojis. For instance, “amplifying” may cause a change to a font (e.g., size, capitals, color, style), or may include customizing or modifying certain images, GIFs, avatars (e.g., amplify facial expression/posture, amplify speech synthesis, etc.), stickers (e.g., each successive sticker being increasingly exclamatory), and the like. In some cases, amplifying can have several stages or modes of operation. For instance, a first instantiation of “amplify” may cause a first outcome, and successive instantiations may cause additional levels of “amplification.” For example, amplifying a word in a utility bar (i.e., on the virtual keyboard) can cause the word to be displayed in all-caps. Amplifying again may cause the capitalized word to be displayed in a bolded font. A third amplification may cause the bolded and capitalized word to be displayed in a red font.
In certain embodiments, word suggestions can be amplified. For instance, “angry” can be a first suggestion, followed by “really angry,” “furious”, etc. In another example, a string of increasingly amplified word or acronym suggestions can include LQTM (laugh quietly to myself), LOL (laugh out loud), ROFL (rolling on the floor laughing), etc. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
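The staged amplification described above, both the caps/bold/red presentation ladder and the word-suggestion ladder, can be sketched as follows. The styling tokens, ladder contents, and function names are illustrative assumptions rather than a prescribed implementation.

```python
# Hypothetical word-suggestion ladders, ordered weakest to strongest.
WORD_LADDERS = {
    "angry": ["angry", "really angry", "furious"],
    "lqtm": ["LQTM", "LOL", "ROFL"],
}

def amplify_style(word, level):
    """Apply successive presentation stages: caps, then bold, then red."""
    text = word.upper() if level >= 1 else word
    if level >= 2:
        text = f"**{text}**"          # stand-in for a bold font attribute
    if level >= 3:
        text = f"<red>{text}</red>"   # stand-in for a red font attribute
    return text

def amplify_word(word, level, ladders=WORD_LADDERS):
    """Step up a word-suggestion ladder, clamping at the strongest entry."""
    ladder = ladders.get(word.lower(), [word])
    return ladder[min(level, len(ladder) - 1)]
```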
Thus, as illustrated in
In some embodiments, in addition to information on the available and active hosts, the OSK can be utilized to display information on the status of the physical keyboard, including the battery status for the physical keyboard, low battery warnings, or the like. As illustrated in
According to some embodiments of the present invention, the look and feel of the physical and virtual keyboards are configured to provide a similar user experience. As an example, if the physical keyboard has a particular background color, for instance yellow, then the virtual keyboard could utilize the same background color to provide a substantially seamless look at the intersection of the physical keyboard and the virtual keyboard. The fonts/sizes that are utilized on the physical keyboard can be communicated to the tablet so that the tablet can use the same fonts/sizes for text displayed on the virtual keyboard. In addition, the key size and shape and other characteristics can be matched. In some embodiments, the background color for the media such as emoji displayed on the virtual keyboard can be matched to the background color of the physical keyboard for an integrated appearance. Thus, the user experience can be customized to provide a consistent look and feel as the user's eye transitions from the physical to virtual keyboards or vice versa.
In one implementation, the physical keyboard communicates information about the physical keyboard to the tablet to enable this matching of the color/texture and look of the physical and virtual components, thereby creating an integrated feel. The information about the physical keyboard can include color/shape/texture, layout/language, battery status, and the like. Given this information, an optimized/paired configuration is provided in which the virtual keyboard is “matched” to the physical keyboard. If, for example, the physical keyboard does not include a number row, the virtual keyboard can implement a number row as a bottom row of the utility bar, presenting the number row in the same color/font/etc. to enhance the functionality of the combined system integrating the physical and virtual keyboards. In some embodiments, the user experience is customized automatically to provide the same look and feel between the physical and virtual keyboards when the physical keyboard and the tablet are paired, whereas, in other embodiments, such customization is performed based on user inputs.
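The pairing-time matching described above could be implemented roughly as below; the metadata field names and the virtual number-row fallback are assumptions for the sketch, not a defined protocol.

```python
def build_virtual_theme(keyboard_info):
    """Derive the virtual keyboard's theme from metadata the physical
    keyboard reports on pairing (color, font, layout, etc.)."""
    theme = {
        "background": keyboard_info.get("color", "default"),
        "font": keyboard_info.get("font", "system"),
        "font_size": keyboard_info.get("font_size", 12),
    }
    # If the physical layout lacks a number row, surface one virtually
    # as a bottom row of the utility bar in the matching style.
    if not keyboard_info.get("has_number_row", True):
        theme["extra_rows"] = ["1234567890"]
    return theme
```

For a yellow keyboard without a number row, the sketch yields a yellow-backed virtual keyboard that adds the missing digits, matching the example in the text.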
In the example illustrated in
Given the multi-language capability provided by embodiments of the present invention, not only suggested words and special characters can be provided to the user, but also other media that is country or language specific. For example, emojis can be displayed that are customized to particular languages, for instance, presenting a combination of text and emojis through the utility bar, thereby providing for both text entry and emoji interaction optimized for multi-language (e.g., dual-language) input. As the user types on the physical keyboard, the inputs received through the physical keyboard can be used to populate the virtual keyboard with multi-language translations, word suggestions, special characters, country-specific emojis, and the like. In a particular embodiment, as the user types in a first language on the physical keyboard, a translation of the input text can be generated and displayed on the virtual keyboard, providing not only for automated translation, but word predictions in multiple languages, including the first language.
In one implementation, information related to the physical keyboard can be utilized to display missing or special symbols on the virtual keyboard. For example, if a user is typing in German on an English keyboard, German language keys including the umlaut symbols and the eszett symbol can be shown on the virtual keyboard as the user types, thereby providing for entry in a second language via the virtual keyboard while using a physical keyboard designed for a first language.
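Computing which second-language characters to surface virtually, as in the German-on-English example, reduces to a set difference between the target language's characters and the physical layout's characters; the function name below is an assumption for illustration.

```python
def missing_keys(target_language_chars, physical_layout_chars):
    """Return characters of the typing language that are absent from the
    physical layout, to be offered as virtual keys (e.g., German umlaut
    characters and eszett on a US English keyboard)."""
    return sorted(set(target_language_chars) - set(physical_layout_chars))
```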
Certain embodiments of the invention are directed to an optimized prediction bar on an OSK used in conjunction with a physical keyboard. A touch-enabled LCD on the keyboard may be too expensive in some configurations. Some of this functionality can be supported through an application-enabled “predictions/app switch” bar on the tablet screen.
In some embodiments, input device 910 can include a number of physical keys disposed on keyboard 920 and, in conjunction with processor 930, can generate an output signal corresponding to a user pressing one or more of the physical keys, as would be understood by one of ordinary skill in the art. Input device 910 can generate any suitable output including numbers, letters, symbols, commands, macros, media (e.g., video and/or audio, GIFs, etc.), and the like. Processing unit(s) 930 can include a single processor, multi-core processor, or multiple processors and may execute instructions in hardware, firmware, or software. In some embodiments, input device 910 may not include an on-board processor.
Memory 980 can include various memory units such as a system memory, a read only memory (ROM), and permanent storage device(s) (e.g., magnetic, solid state, or optical media, flash memory, etc.). The ROM can store static data and instructions required by processing unit(s) 990 and other modules of the system 900 (e.g., application 970). The system memory can store some or all of the instructions and data that the processor needs at runtime.
Application 970 can be a software application that may be run by processor 990, according to some embodiments. Application 970 can perform the various embodiments of generating and controlling an on-screen virtual keyboard, as described, e.g., in methods 1000-1200, according to some implementations. Application 970 can operate on any suitable operating system, including but not limited to, Microsoft Windows, Mac OS, Apple iOS, Android, or the like.
System 900 can be used to implement the various embodiments described herein (e.g.,
At step 1010, method 1000 can include generating an output signal by an input device that corresponds to an activation of one or more physical keys on the input device, according to certain embodiments. For instance, this can include pressing a key on a physical keyboard and generating a corresponding alphanumeric output signal. Any output signal is possible including numbers, letters, symbols (e.g., emojis), commands, macros, etc. The input device can be a tablet cover, keyboard, number pad, smart phone, computer mouse, touch pad, hybrid input device, and the like, as would be appreciated by one of ordinary skill in the art.
At step 1020, method 1000 can include sending the output signal to a second input device having a touch-sensitive display, according to certain embodiments. For example, step 1020 may include sending an output signal from a physical keyboard to a tablet computer with a touch-sensitive display. The second input device can include any suitable computing device with a touch-sensitive display including a desktop computer, laptop computer, tablet computer, smart phone, wearable technology (e.g., smart watch), or the like.
At step 1030, method 1000 can include sending a control signal to generate and display a virtual keyboard including the one or more virtual keys on the touch-sensitive display, according to certain embodiments. The virtual keyboard can be of any suitable size, shape, or location on the display. In some embodiments, the virtual keyboard may include multiple sections (e.g., a virtual keyboard on the bottom left of a display and a second virtual keyboard on the bottom right of the display). The virtual keyboard can include a number of additional virtual keys in addition to the one or more virtual keys corresponding to the activated one or more physical keys.
At step 1040, method 1000 can include controlling an operation of the virtual key displayed on the touch-sensitive display, where the physical key and the virtual key are concurrently operable. That is, both the physical key (e.g., from a physical keyboard) and the virtual key (e.g., on the virtual keyboard) can be accessed by a user at the same time (simultaneously or substantially simultaneously—i.e., contemporaneously).
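Steps 1010 through 1040 can be sketched end to end as follows. The class names, message formats, and one-virtual-key-per-press policy are all illustrative assumptions; the point is only the signal flow and the concurrent operability of physical and virtual keys.

```python
class TouchDisplay:
    """Stand-in for the second input device with a touch-sensitive display."""
    def __init__(self):
        self.text = ""
        self.virtual_keys = []

    def receive(self, signal):
        if signal["type"] == "key":          # step 1020: output signal arrives
            self.text += signal["value"]
        elif signal["type"] == "control":    # step 1030: control signal arrives
            self.virtual_keys.append(signal["show_virtual_key"])

    def tap_virtual(self, key):
        # Step 1040: the virtual key is operable alongside the physical keys.
        if key in self.virtual_keys:
            self.text += key

class PhysicalKeyboard:
    def __init__(self, display):
        self.display = display

    def press(self, key):
        # Step 1010: generate an output signal for the activated key.
        self.display.receive({"type": "key", "value": key})
        # Step 1030: send a control signal to display a related virtual key.
        self.display.receive({"type": "control", "show_virtual_key": key})
```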
As mentioned above, any output signal can be possible including numbers, letters, symbols, commands, macros, media (e.g., video and/or audio), etc. In some embodiments, method 1000 can include associating an emoji with the virtual key. The emoji can be a context-specific emoji corresponding to at least a portion of a recently input word or phrase output by the input device. The emoji may include a top trending emoji on a social media platform, a recently used emoji, a predictive word or phrase corresponding to an output of the input device, or other output, as would be appreciated by one of ordinary skill in the art with the benefit of this disclosure.
In certain embodiments, method 1000 can further include determining a context of the emoji based on at least a portion of a recently input word or phrase output by the input device. That is, one or more emojis (or symbols) can be generated based on what a user is typing. For instance, if the user begins typing the word “pizza,” then emojis or symbols associated with pizza can be generated (e.g., a slice of pizza, a restaurant, coupons, advertisements, etc.). Once the context of the emoji or symbol is determined, method 1000 can further include generating a second control signal configured to generate and display a second virtual key on virtual keyboard of the touch-sensitive display, and associating a second emoji with the second virtual key, where the second emoji is related to the emoji based on the context of the emoji.
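Generating the contextually related second emoji set from a selected first emoji could look like the sketch below; the context categories and relation table are hypothetical, chosen to match the pizza example in the text.

```python
# Hypothetical context graph: a first emoji maps to a context category,
# and each category maps to a related second emoji set.
EMOJI_CONTEXT = {"🍕": "food"}
RELATED_BY_CONTEXT = {"food": ["🍔", "🍟", "🥤"]}

def second_emoji_set(first_emoji):
    """Return a second set of emojis contextually related to a selected
    first emoji (e.g., a pizza slice yields other food items)."""
    context = EMOJI_CONTEXT.get(first_emoji)
    return RELATED_BY_CONTEXT.get(context, [])
```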
In some embodiments, method 1000 can further include receiving an input corresponding to an activation of an on/off key on the input device and generating a second output signal corresponding to the activation of the on/off key, according to certain embodiments. The generated second output signal can be sent to the second input device to successively enable and disable the virtual key. In certain implementations, method 1000 can include controlling an aesthetic presentation of the virtual keys (and/or virtual keyboard), including a color scheme, based on aspects of the input device. Other aesthetic qualities of the virtual keys and/or virtual keyboard may be adjusted based on physical attributes of the input device (e.g., shape, graphical designs on the input device, advertisements or manufacturer's indicia, and the like).
It should be appreciated that the specific steps illustrated in
The method also includes generating, at the physical keyboard and in response to activation of a physical key by a user, an output signal associated with the activation of the physical key. The output signal may be one of a series of output signals generated as the user types using the physical keyboard. The method additionally includes transmitting the output signal to the mobile device through the communications link, for example, over Bluetooth.
The method further includes displaying a virtual keyboard on the touch-sensitive screen of the mobile device in response to the transmitted output signal. The virtual keyboard can include any suitable alphanumeric character, symbol, or media (audio or video link). In some cases, the virtual keyboard can include a set of emojis displayed on a portion of the touch-sensitive screen. The portion of the touch-sensitive screen may be divided into sections that provide display space for groups of emojis, including recently used emojis, predictive emojis, and the like. Based on the output signals from the physical keyboard, which may be considered as input signals to the mobile device, the media displayed on the screen can be varied depending on the particular application. For example, as the user types letters, the processor of the mobile device can process the received letters and suggest words that can then be displayed, and selected, using the touch-screen. Thus, embodiments of the present invention provide for an interaction between the physical keyboard and the touch-screen of the mobile device that expands the functionality provided by these elements independently.
It should be appreciated that the specific steps illustrated in
At step 1220, method 1200 can include receiving, by the processor, an input signal corresponding to a selection of at least one of the plurality of keys on the virtual keyboard, according to certain embodiments. For example, a user may physically press a virtual key (or keys) on a touch-sensitive display (e.g., on a tablet computer).
At step 1230, method 1200 can include receiving, by the processor, an output signal from a peripheral input device having a plurality of keys, according to certain embodiments. The output signal can correspond to a key press on one or more of the plurality of keys. The peripheral device can be a tablet cover (e.g., with one or more keys), keyboard, number pad, smart phone, computer mouse, touch pad, hybrid input device, and the like, as would be appreciated by one of ordinary skill in the art.
At step 1240, method 1200 can include generating and displaying, by the processor, an output on the touch-sensitive display corresponding to the input signal and output signal, according to certain embodiments. The output on the touch-sensitive display can be an alphanumeric character (in any language), a symbol, an emoji, a word, a phrase, a string of characters, media (video, audio, etc.), or other type of output on the virtual keyboard. In some embodiments, the processor concurrently displays the virtual keyboard, processes and displays data associated with the input signal in the displayed output, and processes and displays data associated with the output signal in the displayed output.
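The concurrent handling at step 1240, folding virtual-key input signals and physical-key output signals into one displayed output, can be sketched as below; the tuple-based signal format is an assumption for illustration.

```python
def render_output(signals):
    """Fold interleaved signals into a single displayed output, preserving
    arrival order. signals: list of (source, character) pairs where source
    is 'virtual' (touch-display key) or 'physical' (peripheral key)."""
    display = ""
    sources_seen = set()
    for source, char in signals:
        sources_seen.add(source)  # both sources feed the same output
        display += char
    return display, sources_seen
```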
At step 1250, method 1200 can further include associating an emoji with the virtual keyboard, according to certain embodiments. The emoji can be context specific and may correspond to at least a portion of a recently input word or phrase output by the peripheral input device. In some cases, the emoji can be a top trending emoji on a social media platform, or may be a recently used emoji. The virtual keyboard can include a selectable predictive word or phrase corresponding to a recently input word or phrase output by the peripheral input device (or the virtual keyboard), according to certain embodiments.
In some embodiments, method 1200 can include receiving, by the processor, a second output signal corresponding to the activation of an on/off key on the peripheral input device, and enabling or disabling the virtual keyboard in response to receiving the second output signal. In further implementations, method 1200 can include controlling an aesthetic presentation of the virtual keyboard including a color scheme based on metadata received from the peripheral input device, according to certain embodiments. Other aesthetic qualities may be adjusted on the virtual keys and/or virtual keyboard as discussed above with respect to
It should be appreciated that the specific steps illustrated in
It is also understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims.
Processing unit(s) 1310 can include a single processor, multi-core processor, or multiple processors and may execute instructions in hardware, firmware, or software, such as instructions stored in storage subsystem 1320. The storage subsystem 1320 can include various memory units such as a system memory, a read only memory (ROM), and permanent storage device(s) (e.g., magnetic, solid state, or optical media, flash memory, etc.). The ROM can store static data and instructions required by processing unit(s) 1310 and other modules of the system 1300. The system memory can store some or all of the instructions and data that the processor needs at runtime. In some embodiments, processing unit(s) 1310 can include processor 990 of
In some embodiments, storage subsystem 1320 can store one or more of data or software programs to be executed or controlled by processing unit(s) 1310, such as the OSK software (e.g., application 970), as further described above with respect to
It will be appreciated that the computer system 1300 is illustrative and that variations and modifications are possible. Computer system 1300 can have other capabilities not specifically described here in detail. Further, while computer system 1300 is described with reference to particular blocks, it is to be understood that these blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. Further, the blocks need not correspond to physically distinct components. Blocks can be configured to perform various operations, e.g., by programming a processor or providing appropriate control circuitry, and various blocks might or might not be reconfigurable depending on how the initial configuration is obtained. Embodiments of the present invention can be realized in a variety of apparatus including electronic devices implemented using any combination of circuitry and software.
Aspects of system 1300 may be implemented in many different configurations. In some embodiments, system 1300 may be configured as a distributed system where one or more components of system 1300 are distributed over one or more networks in the cloud.
While the invention has been described with respect to specific embodiments, one of ordinary skill in the art will recognize that numerous modifications are possible. Thus, although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.
It should be understood that terms such as “input device,” “peripheral device,” and the like, are used interchangeably throughout this document and are not limiting.
The above disclosure provides examples and aspects relating to various embodiments within the scope of claims, appended hereto or later added in accordance with applicable law. However, these examples are not limiting as to how any disclosed aspect may be implemented.
All the features disclosed in this specification (including any accompanying claims, abstract, and drawings) can be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. §112(f). In particular, the use of “step of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. §112(f).
This application is a non-provisional application and claims the benefit and priority of U.S. Provisional Application No. 62/147,577, filed on Apr. 14, 2015, titled “PHYSICAL AND VIRTUAL INPUT DEVICE INTEGRATION,” which is hereby incorporated by reference in its entirety for all purposes.