In mobile computer systems such as tablets, smartphones, and portable game systems, a touch-screen display may serve as the primary user-input mechanism. With some applications, however, the required user input is more easily furnished via a handheld game controller having one or more joysticks, triggers, and pushbuttons. Accordingly, some mobile computer systems are configured to pair with an external game controller to accept user input therefrom, especially when running video-game applications. A disadvantage of this approach becomes evident, however, when the user leaves the video-game application and attempts to access other user-interface (UI) elements—e.g., elements configured primarily for touch input. The user then must choose from among equally undesirable options: clumsily navigating the UI elements with the game controller, taking a hand off the controller to manipulate the touch-screen display, or similarly interrupting the user experience by using a mouse or other pointing device, which often has to be manually paired with the computer system.
The inventors herein have recognized the disadvantages noted above and now disclose a series of approaches to address them. This disclosure will be better understood from reading the following Detailed Description with reference to the attached drawing figures.
Aspects of this disclosure will now be described by example and with reference to the illustrated embodiments listed above. Components, process steps, and other elements that may be substantially the same in one or more embodiments are identified coordinately and described with minimal repetition. It will be noted, however, that elements identified coordinately may also differ to some degree. It will be further noted that the drawing figures included in this disclosure are schematic and generally not drawn to scale. Rather, the various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make certain features or relationships easier to see.
Operating together, memory subsystem 30A and logic subsystem 28 instantiate various software constructs in computer system 24—an operating system (OS) 34, and applications 36A, 36B, etc. The OS may include a kernel, such as a Linux® kernel, in addition to drivers and a framework. In some embodiments, the memory and logic subsystems may also instantiate one or more services 38, and any data structure useful for the operation of the computer system.
Input device 26 is configured to transduce the user's hand movements into data and to provide such data to computer system 24 as user input. To this end, the input device includes transduction componentry 40 and input-output (I/O) componentry 42. To support the functions of the transduction and I/O componentry, the input device may include a dedicated microcontroller 44 and at least some memory 30B operatively coupled to the microcontroller.
Transduction componentry 40 is configured to transduce one or more hand movements of the user into position data. Naturally, such hand movements may include movements of the user's fingers or thumbs, which may be positioned on the various controls of the input device 26. The nature of the transduction componentry and associated controls may differ in the different embodiments of this disclosure. In the embodiment shown in
I/O componentry 42 is configured to take the position data furnished by transduction componentry 40 and convey the position data to computer system 24, where it is offered to one or more processes running on the computer system. Such processes may include a process of OS 34, of any of the applications 36, or of service 38, for example. The nature of the I/O componentry may differ from one embodiment to the next. As shown in
It will be understood that user input may be provided to computer system 24 from other componentry besides input device 26. In embodiments where display 12 is a touch-screen display, for instance, touch input may be received from the touch-screen display. In some scenarios, the touch-screen display may be further configured to present a virtual keyboard or keypad in some user contexts. In these and other embodiments, game system 10 may include one or more cameras or microphones to provide input.
In the embodiment of
In a typical use scenario, transduction componentry 40 of input device 26 transduces the user's hand movement—e.g., the movement of the user's right thumb on right joystick 20R. Useful data of at least two forms can be derived from the transduction. These include:
(a) absolute position data typical of a joystick control, and
(b) relative position data typical of a pointing device (e.g., mouse, trackball, trackpad, or similar control).
Virtualization module 52 may be configured to select the appropriate form for consumption by any process running on computer system 24. More particularly, the user's hand position may be reported as joystick control data in a first mode of operation, and as virtualized mouse data in a second mode of operation. To this end, transduction componentry 40 may include an analog-to-digital converter configured to convert the dual potentiometric output of a joystick control into a pair of digital signals proportionate to the X and Y coordinates of the joystick. The virtualization module may include differentiating logic which computes the derivative of the X and Y coordinates with respect to time or some other process variable. Subject to further processing, such as noise-reduction processing, the derivatives of the X and Y coordinates may be used in the virtualization module to compute ΔX and ΔY values, which are offered to the operating system as virtual-mouse data. In one embodiment, the differentiating logic acts on X and Y data from the right joystick of the input device. In other embodiments, data from the left joystick or both the left and right joysticks may be used.
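By way of non-limiting illustration, the differentiating logic described above may be sketched as follows. The class name, sampling interface, and smoothing constant are hypothetical, and simple exponential smoothing stands in for the noise-reduction processing:

```python
class JoystickVirtualizer:
    """Illustrative sketch: converts absolute joystick coordinates (X, Y),
    sampled over time, into relative virtual-mouse deltas (dX, dY)."""

    def __init__(self, smoothing=0.5):
        self.smoothing = smoothing  # low-pass factor standing in for noise reduction
        self.prev = None            # last (x, y, t) sample
        self.dx = 0.0
        self.dy = 0.0

    def sample(self, x, y, t):
        """Feed one digitized (x, y) sample taken at time t; return (dX, dY)."""
        if self.prev is None:
            self.prev = (x, y, t)
            return (0.0, 0.0)
        px, py, pt = self.prev
        dt = t - pt
        if dt <= 0:
            return (self.dx, self.dy)
        # Derivative of the X and Y coordinates with respect to time
        vx = (x - px) / dt
        vy = (y - py) / dt
        # Exponential smoothing of the resulting displacement per interval
        self.dx = self.smoothing * self.dx + (1 - self.smoothing) * vx * dt
        self.dy = self.smoothing * self.dy + (1 - self.smoothing) * vy * dt
        self.prev = (x, y, t)
        return (self.dx, self.dy)
```

In practice, comparable differentiating logic may run on microcontroller 44 of the input device or within virtualization module 52 on the computer system.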
The inventors herein have explored various mechanisms in which the user of a game system is tasked with intentionally selecting the mode in which to operate an input device—i.e., to provide virtual mouse or joystick input data per user request. In one example scenario, a user playing a video game may operate the right joystick as a joystick to move a character in a video game or reorient the field of view of the character. At some point, however, the user may receive a text message or email alert, or for any other reason decide to switch out of the game to access the home screen of the game system. The home screen—turning back to FIG. 1—may show various icons or other UI elements 54 that the user may navigate among and select in order to launch other applications—e.g., to read email. For this type of navigation, a virtual mouse with a mouse pointer may be an appropriate tool, and the user may be required to flip a switch on the input device (which may require the user to un-pair and then re-pair the input device to the computer system), speak a command, or take some other deliberate, extraneous action to make the input device offer virtual-mouse input to the computer system, instead of the joystick input previously offered. Then, when the user decides to return to the game, this action would have to be reversed. Although a plausible option, this approach may lead to an unsatisfactory user experience by requiring the user to ‘step out’ of the current navigation context to change the operating mode of the input device.
Another option, which provides a more fluid user experience, is to enable virtualization module 52 to monitor conditions within the computer system 24, and based on such conditions, determine the form in which to offer position data to an executing process. In the more particular approach outlined hereinafter, the conditions assessed by the virtualization module may include knowledge of which application has input focus, whether that application is consuming user input as offered by the input device, whether the offering of such input triggers an error, and whether other user-input conditions are detected that heuristically could indicate that the user desires to transition from one form of input to another.
No aspect of the foregoing drawings or description should be understood in a limiting sense, for numerous other embodiments lie within the spirit and scope of this disclosure. For instance, although
The configurations described above enable various methods to provide user input to a computer system. Accordingly, some such methods are now described, by way of example, with continued reference to the above configurations. It will be understood, however, that the methods here described, and others fully within the scope of this disclosure, may be enabled by other configurations as well. Naturally, each execution of a method may change the entry conditions for a subsequent execution and thereby invoke a complex decision-making logic. Such logic is fully contemplated in this disclosure. Further, some of the process steps described and/or illustrated herein may, in some embodiments, be omitted without departing from the scope of this disclosure. Likewise, the indicated sequence of the process steps may not always be required to achieve the intended results, but is provided for ease of illustration and description. One or more of the illustrated actions, functions, or operations may be performed repeatedly, depending on the particular strategy being used.
An example first form of user input may include joystick input, where absolute position coordinates—e.g., Cartesian coordinates X and Y or polar coordinates R and θ—specify position. An example second form of user input is virtual-mouse input, where relative position coordinates—e.g., ΔX, ΔY—specify a change in position over a predetermined interval of time or other process parameter. In some embodiments, the absolute and relative coordinates may be specified programmatically using different data structures: a game-controller data structure for the absolute position data, and a virtual-mouse, trackball, or trackpad data structure for the relative position data. Advantageously, the determinations of method 56 may be made without intentional user action—e.g., without plugging in another device, un-pairing and re-pairing input devices, or flipping a switch to indicate the form of input to be offered.
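By way of non-limiting illustration, the two data structures may be represented as follows, the field names and conversion helper being hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GameControllerReport:
    """Absolute position data, as from a joystick control."""
    x: float  # absolute X coordinate, e.g., -1.0 to 1.0
    y: float  # absolute Y coordinate

@dataclass
class VirtualMouseReport:
    """Relative position data, as from a mouse, trackball, or trackpad."""
    dx: int  # change in X over the reporting interval
    dy: int  # change in Y over the reporting interval

def to_virtual_mouse(prev: GameControllerReport,
                     cur: GameControllerReport,
                     scale: int = 100) -> VirtualMouseReport:
    """Convert two successive absolute samples into one relative report."""
    return VirtualMouseReport(dx=round((cur.x - prev.x) * scale),
                              dy=round((cur.y - prev.y) * scale))
```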
In multi-tasking environments, numerous processes may run concurrently. While the illustrated method may apply to any such process or processes, it offers particular utility when applied to the so-called ‘foreground process’ (the process having current input focus). At 60 it is determined whether a process running on the computer system conforms to a stored process profile. One or more process profiles may be stored locally in memory subsystem 30A of computer system 24, in memory subsystem 30B of input device 26, or on a remote server. In one embodiment, the process profile may be one in which the first form of user input is indicated (e.g., recommended or required) for every process fitting that profile. In another embodiment, the process profile may be one in which the second form of user input is not indicated (e.g., contraindicated or forbidden). Accordingly, a given process profile may include a listing of processes that are compatible with the first form of user input. In the alternative, a given process profile may include a listing of processes that are incompatible with the second form of user input. If it is determined, at 60, that the active process conforms to any process profile, then the method advances to 62, where the form of input in which to offer the position data is determined based on the profile. In one example, joystick input may be used if the process appears on a ‘white list’ for accepting joystick input, or on a ‘black list’ for accepting virtual-mouse input.
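The profile lookup at 60 and 62 may be sketched, by way of non-limiting example, as follows; the list contents, process names, and function name are hypothetical:

```python
# Hypothetical stored process profiles: a 'white list' of processes
# compatible with joystick input, and a 'black list' of processes
# incompatible with virtual-mouse input.
JOYSTICK_WHITELIST = {"game.shooter", "game.racer"}
MOUSE_BLACKLIST = {"game.shooter"}

def select_input_form(process_name):
    """Return the form of input indicated by a conforming profile,
    or None when no stored profile applies (step 64 follows)."""
    if process_name in JOYSTICK_WHITELIST:
        return "joystick"   # first form indicated for this process
    if process_name in MOUSE_BLACKLIST:
        return "joystick"   # second form contraindicated for this process
    return None             # no conforming profile
```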
Continuing in
If the process does not encounter an error at 66, then method 56 advances to an optional step 70 and pauses for a predetermined timeout period while it is determined whether the position data offered in the first form has been consumed by the process. At 72, consumption feedback from the computer system is assessed in order to determine whether the position data in the first form was consumed by the process. Such consumption feedback may include a consumption confirmation from the process, which may result in removal of the input event from a queue of unconsumed input events. If it is determined that the position data in the first form was not consumed (within the timeout period, if applicable) then the method advances to 68, where offering the position data in the first form ceases, and where position data in the second form is offered instead. However, if it is determined that the position data in the first form has been consumed, then execution advances to 74 and to subsequent actions where the virtualization module assesses whether any user action indicates, in a heuristic sense, that rejection of the first form of user input is desired, and that the second form of user input should be offered instead.
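Steps 70 and 72 may be sketched as follows, by way of non-limiting example; the callback names and timeout value are hypothetical:

```python
import time

def offer_position_data(offer_first, was_consumed, timeout_s=0.05):
    """Offer position data in the first form, then wait up to timeout_s
    for consumption feedback; fall back to the second form otherwise.
    `offer_first` offers the data to the process; `was_consumed` polls the
    queue of unconsumed input events and returns True once the process
    confirms consumption."""
    offer_first()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if was_consumed():
            return "first"   # consumed: continue offering the first form
        time.sleep(0.005)
    return "second"          # not consumed within timeout: switch forms (step 68)
```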
At 74, for instance, additional hand movements of the user, transduced by the transduction componentry of the input device, are assessed to determine whether conditions warrant rejection of the first form of user input. In one particular example, pushing a certain button on the controller, moving the left joystick or direction pad, etc., may signal that the user wants to re-activate the game-controller aspects of the input device and reject virtual-mouse input. Under these or similar conditions, execution of the method advances to 68, where it is determined that offering the position data in the first form will cease and offering the position data in the second form will commence. Likewise, at 76 it is determined whether user touch is detected on a touchscreen of the computer system—e.g., touchscreen display 12. If user touch is detected, this may be taken as a signal that the user wants to dismiss the virtual mouse.
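The heuristic assessment at 74 and 76 may be sketched as follows, the event names being hypothetical:

```python
def should_reject_current_form(recent_events):
    """Scan recently transduced user actions for signals that the user
    wants to reject the currently offered form of input."""
    REJECTION_SIGNALS = {
        "button_press",        # step 74: controller button pushed
        "left_joystick_move",  # step 74: left joystick moved
        "dpad_move",           # step 74: direction pad moved
        "touchscreen_touch",   # step 76: touch detected on display 12
    }
    return any(event in REJECTION_SIGNALS for event in recent_events)
```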
It goes without saying that method 56 may be executed repeatedly in a given user session to respond to changing conditions. For instance, the method may be used to determine, without explicit user action, that a process with input focus on the computer system is able to consume user input in a form that specifies absolute position, or unable to consume the data in a form that specifies relative position. In that event, position data is offered to the process in the form that specifies absolute position. Some time later, it may be determined, again without explicit user action, that the process with input focus on the computer system is able to consume user input in a form that specifies relative position, or unable to consume the data in a form that specifies absolute position. At this point, the position data may be offered to the process in the form that specifies relative position.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.