When a user would like to enter one or more commands into a computing device, the user can access an input component, such as a keyboard and/or a mouse of the computing device. The user can use the keyboard and/or mouse to enter one or more inputs for the computing device to interpret. The computing device can proceed to identify and execute a command corresponding to the input received from the keyboard and/or the mouse.
Various features and advantages of the disclosed embodiments will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example, features of the disclosed embodiments.
A portable computing device includes a first panel and a second panel. In one implementation, the first panel includes a rear panel of the portable computing device and the second panel includes a front panel of the portable computing device. The portable computing device includes a sensor, such as a touch surface, a touchpad, an image capture component, and/or a proximity sensor, to detect for a hand gesture at the first panel of the portable computing device. The hand gesture includes a user touching or repositioning the user's finger(s) or palm at the rear panel of the portable computing device. In one implementation, locations at the first panel correspond to locations of a virtual keyboard of the portable computing device.
In response to the sensor detecting a hand gesture from the user, the portable computing device predicts at least one input for the portable computing device based on the hand gesture. For the purposes of this application, a predicted input includes an input which is anticipated by the portable computing device based on information detected from the hand gesture. The detected information includes a portion of recognized information utilized by the portable computing device to identify an input for the portable computing device. In one implementation, if the detected information from the hand gesture corresponds to one or more alphanumeric characters of a virtual keyboard, one or more predicted inputs for the portable computing device include words which match, begin with, end with and/or contain the alphanumeric characters. For example, if the detected information from the hand gesture is the alphanumeric characters “rob,” the predicted inputs can include “Rob,” “Robert,” “robbery,” and “probe.”
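A minimal sketch of this kind of character-based prediction is shown below; the word list, the predict_inputs name, and the exact matching rules are illustrative assumptions rather than details taken from the disclosure.

```python
# Sketch of predicting inputs from partially detected alphanumeric
# characters. The vocabulary and matching rules are illustrative only.

def predict_inputs(detected, vocabulary):
    """Return words that match, begin with, end with, or contain the
    detected characters (case-insensitive)."""
    needle = detected.lower()
    predictions = []
    for word in vocabulary:
        candidate = word.lower()
        if (candidate == needle
                or candidate.startswith(needle)
                or candidate.endswith(needle)
                or needle in candidate):
            predictions.append(word)
    return predictions

vocabulary = ["Rob", "Robert", "robbery", "probe", "hamburger", "sham"]
print(predict_inputs("rob", vocabulary))   # ['Rob', 'Robert', 'robbery', 'probe']
```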
In response to the portable computing device identifying at least one predicted input, a display component, such as a touch screen, displays the predicted inputs for a user to select. The display component is included at the second panel of the portable computing device. If the user accesses the touch screen to select one of the predicted inputs, the predicted input is received by the portable computing device as an input for the portable computing device. As a result, the number of accidental inputs for the portable computing device can be reduced by predicting inputs based on a hand gesture detected at the rear panel and displaying the predicted inputs at the front panel for the user to select.
A sensor 130 of the portable computing device 100 is used to detect for a hand gesture by detecting for finger(s) or a palm of a user at the first panel 170. The user can be any person who can enter inputs for the portable computing device 100 by accessing the first panel 170. For the purposes of this application, the sensor 130 is a hardware component of the portable computing device 100, such as a touch surface, a touchpad, an image capture component, a proximity sensor and/or any additional device which can detect for a hand of the user at the first panel of the portable computing device 100.
The sensor 130 detects for finger(s) and/or a palm of the user touching or within proximity of the first panel 170. If the sensor 130 detects a hand gesture 140 at the first panel, the controller 120 and/or the input application receive information of the hand gesture 140. The information of the hand gesture 140 can include coordinates of the first panel 170 accessed by the hand gesture 140. In one implementation, the information also includes whether the hand gesture 140 includes a finger or palm reposition, a number of fingers used in the hand gesture 140, and/or an amount of pressure used by the hand gesture 140.
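One possible way to represent this detected information is a small record such as the following sketch; the field names are assumptions for illustration, not taken from the disclosure.

```python
# Sketch of the detected hand-gesture information described above.
# Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HandGestureInfo:
    coordinates: List[Tuple[int, int]]  # coordinates of the first panel accessed
    repositioning: bool = False         # whether a finger or palm is repositioning
    finger_count: int = 1               # number of fingers used in the gesture
    pressure: float = 0.0               # amount of pressure detected

gesture = HandGestureInfo(coordinates=[(120, 48), (131, 52)],
                          repositioning=True,
                          finger_count=1,
                          pressure=0.4)
```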
The controller 120 and/or the input application use the detected information of the hand gesture 140 to predict one or more inputs 195 for the portable computing device 100. For the purposes of this application, a predicted input 190 includes an input 195 for the portable computing device 100 which is anticipated by the controller 120 and/or the input application based on the detected information from the hand gesture 140. For the purposes of this application, an input is anticipated by the controller 120 and/or the input application if the detected information from the hand gesture matches a portion or all of the recognized information corresponding to an input 195 for the portable computing device 100.
In one example, a predicted input 190 for the portable computing device 100 is an input 195 for alphanumeric character(s) for the portable computing device 100. In another example, the predicted input 190 can be an input 195 to select content of the portable computing device 100, an input 195 to launch content of the portable computing device 100, an input 195 to launch a menu for content, an input 195 to navigate content or the portable computing device 100, and/or an input 195 to switch between modes of operation of the portable computing device 100.
When identifying a predicted input 190, the controller 120 and/or the input application compare the detected information from the hand gesture 140 to recognized information corresponding to an input. If the detected information includes all or a portion of the recognized information corresponding to an input, the corresponding input will be identified by the controller 120 and/or the input application as a predicted input 190 for the portable computing device 100.
In one implementation, the controller 120 and/or the input application access a table, database, and/or list of inputs. The table, database, and/or list of inputs can be local or remote to the portable computing device 100 and include recognized inputs for the portable computing device 100 and their corresponding information. The controller 120 and/or the input application determine if the detected information from the hand gesture 140 matches a portion of corresponding information of any of the recognized inputs. If the detected information matches a portion of corresponding information for any of the recognized inputs, the recognized input will be identified as a predicted input 190.
In one example, the detected information from the hand gesture 140 includes accessed coordinates corresponding to a virtual keyboard with alphanumeric characters “ham.” The controller 120 and/or the input application compare the detected information to information of recognized inputs and determine that “ham” is a portion of the words “sham,” “hamburger,” and “ham.” In response, “sham,” “hamburger,” and “ham” are identified to be predicted inputs 190 based on the hand gesture 140.
In another implementation, the detected information from the hand gesture 140 does not correspond to locations of a virtual keyboard. The detected information specifies that the hand gesture 140 is repositioning from Left-to-Right. The controller 120 and/or the input application compare the detected information to information of recognized inputs and determine that recognized inputs 1) “navigate next” includes information specifying for a hand gesture to reposition from Left-to-Right and 2) “bring up menu” includes information specifying for a hand gesture to reposition Up first and then Left-to-Right. In response, the controller 120 and/or the input application identify “navigate next” and “bring up menu” as predicted inputs 190.
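A sketch of this comparison against a table of recognized inputs follows; the table contents and the containment rule used for partial matching are assumptions for illustration.

```python
# Sketch of comparing detected gesture motion against recognized inputs.
# The table contents and matching rule are illustrative assumptions.

RECOGNIZED_INPUTS = {
    "navigate next": ["Left-to-Right"],
    "bring up menu": ["Up", "Left-to-Right"],
    "navigate back": ["Right-to-Left"],
}

def predict_from_motion(detected_motions):
    """Identify recognized inputs whose motion information contains every
    detected motion as a portion of the recognized sequence."""
    return [name for name, motions in RECOGNIZED_INPUTS.items()
            if all(step in motions for step in detected_motions)]

print(predict_from_motion(["Left-to-Right"]))
# ['navigate next', 'bring up menu']
```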
In response to identifying one or more predicted inputs 190, the controller 120 and/or the input application instruct a display component 160, such as a touch screen, to display the predicted inputs 190. The display component 160 is included at the second panel 175 of the portable computing device 100. The display component 160 can display the predicted inputs 190 at corner locations of the display component 160, within reach of a finger, such as a thumb, of the user. The corner locations can include a left edge, a right edge, a top edge, and/or a bottom edge of the display component 160.
If the display component 160 is a touch screen, the user selects one of the predicted inputs 190 by touching the corresponding predicted input 190 displayed on the touch screen. In other implementations, other sensors coupled to the second panel 175, such as a touch surface, a touchpad, an image capture component, and/or a proximity sensor, can be used instead of a touch screen to detect for the user selecting a predicted input 190. In response to the user selecting one of the displayed predicted inputs 190, the controller 120 and/or the input application receive the selected predicted input 190 as an input 195 for the portable computing device 100. Receiving the input 195 can include the controller 120 and/or the input application executing the input 195 as a command for the portable computing device 100.
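The following sketch shows one way the selected prediction could be received and, where it names a command, executed; the command table and handler functions are hypothetical stand-ins.

```python
# Sketch of receiving a selected predicted input and executing it as a
# command. The command table and handlers are illustrative assumptions.

def navigate_next():
    print("navigating to the next item")

def bring_up_menu():
    print("opening a menu for the content")

COMMANDS = {
    "navigate next": navigate_next,
    "bring up menu": bring_up_menu,
}

def receive_input(selected_prediction):
    """Receive the selected prediction as the input: execute it if it maps
    to a command, otherwise return it as text (e.g. a predicted word)."""
    handler = COMMANDS.get(selected_prediction)
    if handler is not None:
        handler()
        return None
    return selected_prediction

receive_input("navigate next")       # executes the command
text = receive_input("hamburger")    # returned as an alphanumeric input
```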
The sensor 130 can detect for finger(s) and/or a palm of the user 205 touching or coming within proximity of the rear panel 270. When detecting the hand gesture 140, the sensor 130 detects coordinates of the rear panel 270 accessed by the hand gesture 140, a number of fingers used for the hand gesture 140, whether the hand gesture 140 is stationary or repositioning, and/or an amount of pressure used by the hand gesture 140. The sensor 130 passes detected information of the hand gesture 140 to a controller and/or an input application to identify one or more predicted inputs 190 for the portable computing device 100.
In one implementation, the sensor 130 can also detect for a second hand gesture at the rear panel 270. The second hand gesture can be made with a second hand of the user 205. The sensor 130 can detect for the second hand gesture in parallel with detecting for the first hand gesture 140. Similar to when detecting for the first hand gesture 140, the sensor 130 detects for finger(s) and/or a palm of the user 205 touching or coming within proximity of the rear panel 270 and passes detected information of the second hand gesture to the controller and/or the input application. If both a first hand gesture 140 and a second hand gesture are detected, the controller and/or the input application use detected information from both of the first and the second hand gestures when predicting inputs for the portable computing device 100.
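A sketch of combining the two hands' detected information before prediction is shown below; the merge rule (pooling both coordinate sets into one gesture) is an assumption for illustration.

```python
# Sketch of combining detected information from two hand gestures detected
# in parallel at the rear panel. The merge rule is an illustrative assumption.

def combine_gestures(first, second):
    """Merge the detected information of a first and a second hand gesture;
    if no second gesture is detected, use the first gesture alone."""
    if second is None:
        return first
    return {
        "coordinates": first["coordinates"] + second["coordinates"],
        "repositioning": first["repositioning"] or second["repositioning"],
        "finger_count": first["finger_count"] + second["finger_count"],
        "pressure": max(first["pressure"], second["pressure"]),
    }

first_hand = {"coordinates": [(120, 48)], "repositioning": False,
              "finger_count": 1, "pressure": 0.4}
second_hand = {"coordinates": [(640, 52)], "repositioning": False,
               "finger_count": 1, "pressure": 0.3}
print(combine_gestures(first_hand, second_hand))
```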
In one example, a predicted input 190 for the portable computing device 100 is an input 195 for alphanumeric character(s) for the portable computing device 100. In another implementation, the predicted input 190 can be an input 195 to select content of the portable computing device 100, an input 195 to launch content of the portable computing device 100, an input 195 to launch a menu for content, an input 195 to navigate content or the portable computing device 100, and/or an input 195 to switch between modes of operation of the portable computing device 100. The content can include a file, media, object and/or a website accessible to the portable computing device 100.
The predicted inputs 190 can be displayed as bars, buttons, icons, and/or objects on the display component 160. In one implementation, the predicted inputs 190 are displayed at one or more corners of the display component 160 such that they are easily accessible to a finger of the user 205 holding the portable computing device 100. For example, the user 205 can use a thumb or index finger to select one of the predicted inputs 190 rendered at a corner of the display component 160.
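One possible placement calculation is sketched below; the display resolution and margin values are assumptions chosen only to make the example concrete.

```python
# Sketch of anchoring predicted inputs near the corners of the display,
# within reach of a thumb. Resolution and margin values are assumptions.

def corner_anchors(display_w, display_h, margin=48):
    """Return (x, y) anchor points near the four corners of the display."""
    return [(margin, margin),                           # top-left
            (display_w - margin, margin),               # top-right
            (margin, display_h - margin),               # bottom-left
            (display_w - margin, display_h - margin)]   # bottom-right

def place_predictions(predictions, display_w=1080, display_h=1920):
    """Pair each predicted input with a corner anchor (extras are dropped)."""
    return list(zip(predictions, corner_anchors(display_w, display_h)))

print(place_predictions(["sham", "hamburger", "ham"]))
```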
If the display component 160 is a touch screen, the touch screen can detect for the user 205 selecting one of the predicted inputs 190 displayed on the touch screen. In another implementation, if the sensor 130 includes a first portion and a second portion, the first portion of the sensor 130 can detect for the user 205 selecting one of the predicted inputs 190. In other implementations, the portable computing device 100 can further include an input component (not shown) at the front panel 275 to detect for the user 205 navigating the predicted inputs 190 to select one of them. The input component can include one or more buttons and/or a touchpad to navigate between predicted inputs 190 and to select a predicted input 190. In response to the user 205 selecting one of the predicted inputs, the controller and/or the input application can receive the predicted input 190 as an input 195 for the portable computing device 100.
The controller 120 and/or the input application 310 compare the accessed coordinates at the rear panel to locations of the virtual keyboard to determine which alphanumeric characters of the virtual keyboard have been accessed.
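A sketch of this coordinate-to-character mapping is shown below; the key dimensions and row layout are assumptions chosen only to make the example concrete.

```python
# Sketch of mapping coordinates accessed at the rear panel to alphanumeric
# characters of a virtual keyboard. Key size and layout are assumptions.

KEY_W, KEY_H = 60, 80          # assumed size of one virtual key, in sensor units
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y):
    """Return the virtual-keyboard character whose location contains (x, y),
    or None if the coordinate falls outside the keyboard."""
    row = y // KEY_H
    col = x // KEY_W
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

def characters_from_coordinates(coords):
    """Resolve a sequence of accessed coordinates to the characters touched."""
    return "".join(c for c in (key_at(x, y) for x, y in coords) if c)

# e.g. three touches on the rear panel resolving to "ham"
print(characters_from_coordinates([(330, 90), (30, 100), (390, 170)]))
```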
If the touch screen detects the user selecting one of the predicted inputs, the controller and/or the input application proceed to receive an input for the portable computing device at 550. If the user selects the option to reject all of the predicted inputs, the controller and/or the input application can continue to identify one or more predicted inputs for the portable computing device in response to detecting one or more hand gestures at the rear panel of the portable computing device. In another implementation, additional sensor components, such as an image capture component, a proximity sensor, a touch sensor, and/or any additional sensor can be used instead of the touch screen to detect the user selecting one of the predicted inputs or the option to reject all of the predicted inputs. The method is then complete.
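The overall accept-or-reject flow could be sketched as the loop below; the helper callbacks (detect_gesture, predict, display_and_select) are hypothetical stand-ins for the sensor, the controller and/or input application, and the display component.

```python
# Sketch of the overall flow: keep detecting gestures and offering predicted
# inputs until the user selects one. Helper names are illustrative stand-ins.

REJECT = object()   # stands for the option to reject all predicted inputs

def input_loop(detect_gesture, predict, display_and_select):
    """Repeat prediction until a predicted input is selected, then
    receive it as the input for the portable computing device."""
    while True:
        gesture = detect_gesture()                 # hand gesture at the rear panel
        predictions = predict(gesture)             # one or more predicted inputs
        choice = display_and_select(predictions)   # shown at the front panel
        if choice is not REJECT:
            return choice                          # received as the input

# tiny demonstration with stub callbacks
gestures = iter([["r", "o"], ["r", "o", "b"]])
choices = iter([REJECT, "Robert"])
result = input_loop(
    detect_gesture=lambda: next(gestures),
    predict=lambda g: ["Rob", "Robert", "probe"],
    display_and_select=lambda preds: next(choices),
)
print(result)   # 'Robert'
```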
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2013/072026 | 2/28/2013 | WO | 00