The present disclosure relates to translation of position input between two devices, and more particularly to a calibratable translation system for position input by an appendage that is limited by a corresponding joint of a user.
For a graphical user interactive system that includes a pointing device (e.g. a mouse, a touchpad, a touch screen, etc.) and a display device (e.g. a projector screen), positional input from the pointing device is translated to an output position on the display device. For example, the translation may be a linear translation. In other words, the movement of the pointing device is proportional to the movement of position on screen. Thus, if a user moves an appendage in a straight line with respect to the pointing device, the cursor moves in a straight line on the display device. However, there are several problems associated with linear translations.
First, due to physical movement limitations of certain appendages of the human body, it may be difficult or impossible to move in certain directions. For example, it may be easier to move a thumb along or perpendicular to the line of a fingertip due to the carpometacarpal joint. Conversely, it may be difficult, painful, or impossible to move the thumb in straight lines (i.e. vertical and horizontal). However, most applications require straight horizontal or vertical movements on the screen, either because of the layout of the graphical user interface or because of the nature of the task (e.g. drawing a straight line in drawing software). When a user is required to move his thumb in a physically straight line, especially a horizontal or vertical line, the user may need precise cooperation of several muscle groups and constant visual feedback to adjust the muscles. Furthermore, even with the additional effort, the resulting movement on the display device may be poor.
Alternatively, due to psychological reasons, the user may expect movement different from the actual physical movement. For example, when the thumb is moving perpendicular to the line of a fingertip (i.e. rotating about the carpometacarpal joint), the user may think he is moving the thumb horizontally, even though the actual physical movement is an arc. Therefore, using a linear translation, the user may move the pointer to an unintended position. This inaccurate control may require the user to frequently monitor the pointer position displayed on the screen and correct his thumb movement. This may be difficult and painful, and may result in more errors.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
A method for translating a position input by a user to a first device to a position output of a second device includes defining an area of the first device in which input by the user is expected, where the area is less than a total area of the first device, and where the area has a boundary with at least one non-linear side, receiving position input in the defined area of the first device, and translating the position input by the user to the first device to the position output of the second device based on a translation method.
A method for translating a position input from an appendage of a user on a touchpad to a position on a display having a rectangular shape includes defining an area on the touchpad in which input movement by the appendage of the user is expected based upon the natural movement of joints associated with the appendage, receiving the position input in the area on the touchpad, and translating the position input in the area on the touchpad to the position on the display using a translation method.
A calibratable system for translating a position input by a user to a first device to a position output of a second device includes a translation module and a calibration module. The translation module receives the position input by the user to the first device and translates the position input to a position output for the second device based on a plurality of parameters and a translation method. The calibration module selectively generates the plurality of parameters based on a calibration method that commands the user to move the position input to locations defined by the calibration method.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
The following description is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A or B or C), using a non-exclusive logical or. It should be understood that steps within a method may be executed in a different order without altering the principles of the present disclosure.
As used herein, the term module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
A system and methods are presented for calibratable translation of a position input to an input device (e.g. a touchpad) to an output position of an output device (e.g. a display). The translation allows the user to move an appendage (e.g. a thumb) in a trajectory that the user mentally intends to move on the output device. Thus, the user may reach the full space of the output device without having to move the appendage into difficult or painful regions. Therefore, the user may easily move to a target point the user mentally intended to reach on the output device without heavy mental involvement.
Additionally, the calibration allows the user to calibrate the translation based on parameters associated with the user, such as range of movement and size of the appendage. Thus, translation of position input for the calibrated user may be more precise (i.e. improved performance). Alternatively, the calibration allows multiple users to calibrate the translation based on parameters associated with each of them. Thus, each user may access his own calibrated translation. Additionally, a group calibration may be generated by averaging the parameters corresponding to all (or a sub-set of) the users. Thus, the group calibrated translation may be implemented for a group of users (e.g. a family living in a same household).
Referring now to
Referring now to
A user 12 provides position input to the input module 14. For example, the position input may be via a finger or a hand and may be controlled by a joint corresponding to a finger, a wrist, an elbow, or a shoulder. The position input may be described as a series of points or positions that collectively make up input movement. Thus, a translation of each input point may be performed and then output (i.e. one position processed per cycle).
The input module 14 communicates with both the translation module 22 and the calibration module 24. In one embodiment, the user 12 may select one of a “translation mode” and a “calibration mode” via the input module 14, and the input module 14 may then enable one of the translation module 22 and the calibration module 24, respectively.
The translation module 22 receives the position input from the input module 14 and translates the input position to an output position for the output module 16 (i.e. translation mode). The translation module 22 may translate the input position to the output position based on one of a plurality of translation methods using predefined (i.e. default) parameters. Alternatively, the translation module 22 may translate the input position to the output position based on one of the plurality of translation methods using calibrated (i.e. modified) parameters. For example, the parameters may include points corresponding to a maximum range of movement or a size of an appendage of the user. In general, a relationship between an input coordinate (x, y) and an output coordinate (x′, y′) may be described as follows:
(x′,y′)=T(x,y),
where T represents one of the plurality of translation methods (i.e. a function, an algorithm, etc.).
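This general relation can be sketched in Python (an illustration only; the type alias and the `make_linear` helper are not from the disclosure), with a linear T shown as the baseline behavior that the non-linear methods described below replace:

```python
from typing import Callable, Tuple

Point = Tuple[float, float]
# T maps an input coordinate (x, y) to an output coordinate (x', y')
TranslationMethod = Callable[[float, float], Point]

def make_linear(sx: float, sy: float) -> TranslationMethod:
    """Baseline linear translation: output motion is directly
    proportional to input motion."""
    def T(x: float, y: float) -> Point:
        return (sx * x, sy * y)
    return T
```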
In a first exemplary translation method, the translation module 22 generates a coordinate mesh based on one of predefined (i.e. default) parameters and calibrated (i.e. modified) parameters. For example, the coordinate mesh may define an area where input movement by the user is expected, and thus the coordinate mesh may be referred to as a sub-area of the input area of the input device. The coordinate mesh further includes a plurality of cells, and thus one of the plurality of cells includes the input position (i.e. the input cell). In one embodiment, the coordinate mesh is defined by one or more non-linear curves (e.g. a spline).
Next, the translation module 22 divides the coordinate mesh into a plurality of cells. In one embodiment, the translation module 22 determines vertices of the cells by offsetting the boundaries (i.e. edges) of the coordinate mesh. For example, the translation module 22 may offset an upper boundary of the coordinate mesh multiple times based on a predefined offset distance to create horizontal grid lines of the coordinate mesh. Additionally, for example, the translation module 22 may offset a left boundary of the coordinate mesh multiple times based on a predefined offset distance to create vertical grid lines of the coordinate mesh. Thus, the horizontal and vertical grid lines may define the plurality of cells.
The translation module 22 may then map the plurality of cells of the coordinate mesh to a corresponding plurality of cells of the output module 16. In one embodiment, the output module 16 may be a rectangular display, and the plurality of cells may be rectangular sub-sets of the rectangular display.
Thus, the translation module 22 may determine which cell of the output module 16 corresponds to the cell of the coordinate mesh that includes the position input. Lastly, the translation module 22 determines distances from edges of the cell of the output module 16, and then determines the position output (within the cell) of the output module 16 based on the distances.
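The mesh construction above can be sketched as follows (a minimal approximation, not the disclosure's implementation: the repeated boundary offsets are replaced by linear interpolation between sampled upper and lower boundary curves, and all names are illustrative):

```python
def build_mesh(top, bottom, rows):
    """Build (rows + 1) rows of mesh vertices between a sampled upper
    boundary `top` and a lower boundary `bottom` (equal-length lists
    of (x, y) points). Adjacent vertices define the cells (quads) of
    the coordinate mesh."""
    mesh = []
    for i in range(rows + 1):
        t = i / rows  # fraction of the way from the upper boundary down
        row = [((1 - t) * tx + t * bx, (1 - t) * ty + t * by)
               for (tx, ty), (bx, by) in zip(top, bottom)]
        mesh.append(row)
    return mesh
```

Cell (i, j) of this mesh can then be paired with the rectangular cell (i, j) of the display.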
Referring now to
Referring now to
The quad mesh vertices may be described in more detail as follows:
Vi,j, Vi+1,j, Vi,j+1, and Vi+1,j+1,
where i, j correspond to indices of cells in the quad mesh.
In other words, for each vertex Vi,j, an input position may be described as (Vi,jx, Vi,jy) and an output position may be described as (Vi,jx′, Vi,jy′). Therefore, when the quad mesh is rectangular, calculation of the output position (x′, y′) is relatively simple. However, when the quad mesh is irregular (i.e. has one or more curved sides), calculation of the output position (x′, y′) becomes more difficult.
In step 34, the translation module 22 maps cells of the output module 16 to cells of the coordinate mesh. In step 36, the translation module 22 determines which cell of the output module 16 corresponds to the position input. More specifically, the translation module 22 searches the coordinate mesh for a cell that includes the position input (x, y). Thus, vertices for this cell may be described as Vi,j, Vi+1,j, Vi,j+1, and Vi+1,j+1.
In step 38, the translation module 22 determines a location within a cell of the output module 16 that corresponds to the position input (x, y). More specifically, the translation module 22 determines distances w1, w2, w3, and w4 from edges of the cell of the output module 16 and then determines the position output (x′, y′) based on the distances. For example, the position output (x′, y′) may be determined based on the following interpolation:
Alternatively, different interpolations may be implemented. For example, a bilinear interpolation or a spline interpolation may be used.
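One plausible form of this within-cell interpolation can be sketched as follows (an assumption for illustration, since the exact equation is not reproduced in this text; the sketch also simplifies the input cell to an axis-aligned rectangle, whereas actual mesh cells may have curved sides):

```python
def translate_in_cell(p, in_cell, out_cell):
    """Map a point `p` = (x, y) inside the input cell to the matching
    output cell. Each cell is given by its (min, max) corner points.
    Normalized distances from the left and top edges of the input cell
    become fractional coordinates (u, v) applied to the output cell."""
    (ix0, iy0), (ix1, iy1) = in_cell
    (ox0, oy0), (ox1, oy1) = out_cell
    u = (p[0] - ix0) / (ix1 - ix0)  # e.g. w_left / (w_left + w_right)
    v = (p[1] - iy0) / (iy1 - iy0)
    return (ox0 + u * (ox1 - ox0), oy0 + v * (oy1 - oy0))
```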
In step 40, the translation module 22 communicates the position output to the output module 16. Control may then end in step 42.
Referring again to
Referring now to
Referring now to
In one embodiment, Point B (i.e. the upper left point) corresponds to point (0, 0). Additionally, point A corresponds to point (W, 0), point C corresponds to point (0, H), and point D corresponds to point (W, H), where W and H are variables corresponding to maximum width and maximum height of input movement.
In step 54, the translation module 22 determines a polar origin point O based on the four corner points A, B, C, and D. For example, the polar origin point O may be determined by determining an intersection point of lines connecting corner points A and D and corner points B and C.
In step 56, the translation module 22 determines five parameters (r1, r2, θ, x0, y0) based on the four corner points (A, B, C, D). Radius r1 may be derived from points A and B because points A and B have the same radial distance from origin point O. Similarly, radius r2 may be derived from points C and D because points C and D have the same radial distance from origin point O. Additionally, angle θ may be derived based on origin point O, one of points A and D, and one of points B and C. In one embodiment, the five parameters are generated by the calibration module 24 during a calibration process.
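The parameter derivation above can be sketched as follows (function names are illustrative; the sketch assumes O is the intersection of lines AD and BC, as described, θ is taken as the angle between rays O→A and O→B, and the corner points are given in standard (x, y) coordinates):

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4 (the lines are assumed not to be parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def polar_parameters(A, B, C, D):
    """Derive (r1, r2, theta, x0, y0) from the four corner points."""
    x0, y0 = line_intersection(A, D, B, C)  # origin point O
    r1 = math.hypot(A[0] - x0, A[1] - y0)   # radius through A and B
    r2 = math.hypot(C[0] - x0, C[1] - y0)   # radius through C and D
    theta = abs(math.atan2(A[1] - y0, A[0] - x0)
                - math.atan2(B[1] - y0, B[0] - x0))
    return (r1, r2, theta, x0, y0)
```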
In step 58, the translation module 22 converts the position input (x, y) to a polar coordinate (r0, θ0). More specifically, the position input (x, y) is translated to a polar coordinate (r0, θ0) relative to origin point O.
In step 60, the translation module 22 interpolates the polar coordinates (r0, θ0) to determine the position output (x′, y′). More specifically, the polar coordinate (r0, θ0) may be interpolated as follows:
In step 62, the translation module 22 communicates the position output to the output module 16. Control may then end.
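Since the interpolation equation itself is not reproduced in this text, the step can be illustrated with one plausible mapping (an assumption: the angle interpolates along the horizontal output axis and the radius along the vertical axis, with the angular span centered on the positive y axis and y increasing upward):

```python
import math

def polar_translate(x, y, params, out_w, out_h):
    """Translate input (x, y) to an output position on an out_w x out_h
    display using the five calibrated parameters
    params = (r1, r2, theta, x0, y0), where r1/r2 are the outer/inner
    boundary radii, theta the angular span, and (x0, y0) the origin O."""
    r1, r2, theta, x0, y0 = params
    r0 = math.hypot(x - x0, y - y0)               # radial coordinate
    t0 = math.atan2(y - y0, x - x0)               # angular coordinate
    u = (t0 - (math.pi / 2 - theta / 2)) / theta  # 0..1 across the span
    v = (r1 - r0) / (r1 - r2)                     # 0 on the outer arc r1
    return (u * out_w, v * out_h)
```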
Referring again to
In a first exemplary calibration method, the user 12 is commanded to move the position input to particular points (e.g. lower left) and/or along particular trajectories (e.g. a curved swipe from the upper right to the upper left). Based on the commanded positions and/or commanded trajectories, the calibration module 24 generates calibrated parameters based on movement limits and movement tendencies of the user 12. In one embodiment, the first calibration method applies to the first translation method.
Referring now to
Referring now to
In step 74, the calibration module 24 determines whether the user 12 has moved the position input to the first corner. If yes, control may proceed to step 76. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction, or control may return to step 72.
In step 76, the calibration module 24 commands the user 12 via the feedback module 18 to move the position input from the first corner to a second corner. For example, the second corner may be an upper left corner, and the movement may be a curved horizontal swipe in between the two corners. During the movement from the first corner to the second corner, the calibration module 24 may collect sample points based on a predefined sampling rate.
In step 78, the calibration module 24 determines whether the user 12 has moved the position input to the second corner. If yes, control may proceed to step 80. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 72.
In step 80, the calibration module 24 commands the user 12 via the feedback module 18 to move the position input from the second corner to a third corner. For example, the third corner may be a lower left corner, and the movement may be a vertical swipe between the two corners. During the movement from the second corner to the third corner, the calibration module 24 may collect sample points based on the predefined sampling rate.
In step 82, the calibration module 24 determines whether the user 12 has moved the position input to the third corner. If yes, control may proceed to step 84. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 72.
In step 84, the calibration module 24 commands the user 12 via the feedback module 18 to move the position input from the third corner to a fourth corner. For example, the fourth corner may be a lower right corner, and the movement may be a curved horizontal swipe in between the two corners. During the movement from the third corner to the fourth corner, the calibration module 24 may collect sample points based on the predefined sampling rate.
In step 86, the calibration module 24 determines whether the user 12 has moved the position input to the fourth corner. If yes, control may proceed to step 88. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 72.
In step 88, the calibration module 24 commands the user 12 via the feedback module 18 to move the position input from the fourth corner back to the first corner. For example, the movement may be a vertical swipe between the two corners. During the movement from the fourth corner to the first corner, the calibration module 24 may collect sample points based on the predefined sampling rate.
In step 90, the calibration module 24 determines whether the user 12 has moved the position input to the first corner. If yes, control may proceed to step 92. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction, or control may return to step 72. In step 92, the calibration module 24 may divide the boundary area into the plurality of cells (i.e. the quad mesh). For example, the calibration module 24 may offset one or more of the boundaries multiple times based on a predefined offset distance. Control may then end in step 94 (i.e. the calibration process is complete).
Additionally, in one embodiment, the calibration module 24 may abandon a current calibration operation when a predetermined period of time expires while waiting for the user 12 to move to a commanded point. Thus, the calibration module 24 may restart the calibration operation by commanding the user 12 to move to the first corner (i.e. step 72). Furthermore, in one embodiment, the predefined sampling rate may be adjustable.
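The corner-tracing flow of steps 72 through 92 can be sketched as a simple loop (the `prompt`, `read_position`, and `at_corner` callbacks are hypothetical stand-ins for the feedback and input modules; the corner order and the arrival test are illustrative assumptions):

```python
import time

def calibrate(prompt, read_position, at_corner, sample_period=0.01):
    """Command the user around the four corners, sampling the boundary
    during each swipe. `at_corner()` reports when the user has reached
    the commanded corner. Returns one list of samples per swipe."""
    corners = ["upper right", "upper left", "lower left", "lower right"]
    prompt("move to the %s corner" % corners[0])
    while not at_corner():                  # step 74: wait for the start
        time.sleep(sample_period)
    boundaries = []
    # traverse the remaining corners, then close the loop (steps 76-90)
    for target in corners[1:] + [corners[0]]:
        prompt("swipe to the %s corner" % target)
        samples = []
        while not at_corner():              # sample during the swipe
            samples.append(read_position())
            time.sleep(sample_period)
        boundaries.append(samples)
    return boundaries
```

A timeout check inside each wait loop could implement the abandon-and-restart behavior described above.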
Referring again to
Referring now to
Referring now to
In step 104, the calibration module 24 determines whether the user 12 has moved the position input to the first corner. If yes, control may proceed to step 106. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction, or control may return to step 102. In step 106, the calibration module 24 samples the position input (position A) corresponding to the first corner and commands the user 12 via the feedback module 18 to move the position input from the first corner to a second corner. For example, the second corner may be an upper left corner.
In step 108, the calibration module 24 determines whether the user 12 has moved the position input to the second corner. If yes, control may proceed to step 110. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 102. In step 110, the calibration module 24 samples the position input (position B) corresponding to the second corner and commands the user 12 via the feedback module 18 to move the position input from the second corner to a third corner. For example, the third corner may be a lower left corner.
In step 112, the calibration module 24 determines whether the user 12 has moved the position input to the third corner. If yes, control may proceed to step 114. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 102. In step 114, the calibration module 24 samples the position input (position C) corresponding to the third corner and commands the user 12 via the feedback module 18 to move the position input from the third corner to a fourth corner. For example, the fourth corner may be a lower right corner.
In step 116, the calibration module 24 determines whether the user has moved the position input to the fourth corner. If yes, control may proceed to step 118. If no, the calibration module 24 may wait for the user 12 to complete the commanded instruction or control may return to step 102.
In step 118, the calibration module 24 determines origin point O based on sampled points A, B, and C. In step 120, the calibration module 24 generates calibrated parameters r1, r2, θ, x0, and y0. Control may then end in step 122.
Additionally, in one embodiment, the calibration module 24 may abandon a current calibration operation when a predetermined period of time expires while waiting for the user 12 to move to a commanded point. Thus, the calibration module 24 may restart the calibration operation by commanding the user 12 to move to the first corner (i.e. step 102).
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the invention, and all such modifications are intended to be included within the scope of the invention.
Note that respective processes in the above embodiments may be executed by a single processing unit or a plurality of processing units. Further, the present invention may be implemented as a device including the single processing unit or the plurality of processing units. For example, the translation module 22 above may be implemented as a translation device.
Further, it may be that the translation device translates a position input by a user to a first device to a position output of a second device and includes: an area defining unit which defines an area of the first device in which input by the user is expected, where the area is less than a total area of the first device, and where the area has a boundary with at least one non-linear side; a receiving unit which receives position input in the defined area of the first device; and a translation unit which translates the position input by the user to the first device to the position output of the second device based on a translation method.
Further, it may be that the translation device translates a position input from an appendage of a user on a touchpad to a position on a display having a rectangular shape and includes: an area defining unit which defines an area on the touchpad in which input movement by the appendage of the user is expected based upon the natural movement of joints associated with the appendage; a receiving unit which receives the position input in the area on the touchpad; and a translation unit which translates the position input in the area on the touchpad to the position on the display using a translation method.
Further, of the elements described in the embodiments, the elements other than the input and output devices, such as a touchpad and a display device, may be implemented by hardware such as an electronic circuit, a memory, and a recording medium, by a program executed by a computer, or by a mixture thereof.
In the case where the present invention is implemented by hardware, a large-scale integration (LSI) circuit is generally used as the hardware. In addition, the present invention may be implemented as a single-chip semiconductor integrated circuit, or as a plurality of semiconductor chips mounted on a single circuit board. Moreover, the present invention may be implemented as a single device including all the elements in one case, or as an association of a plurality of devices interconnected through a transmission path.
In the case where the present invention is implemented by a program, the program is executed using hardware resources of a computer, such as a central processing unit (CPU), a memory, and an input and output circuit. More specifically, functions of the respective processing units are implemented by the CPU, for example, reading data to be processed from the memory, obtaining data to be processed from the input and output circuit, temporarily storing an operation result in the memory, or outputting the operation result to the input and output circuit.
Further, the present invention can also be implemented as a computer readable recording medium, such as a compact disc read only memory (CD-ROM) storing the program.
This application claims the benefit of U.S. patent application Ser. No. 12/394,304 filed on Feb. 27, 2009. The disclosure of the above application is incorporated herein by reference.
Filing Document: PCT/US10/25540. Filing Date: 2/26/2010. Country: WO. Kind: 00. 371(c) Date: 7/26/2011.
Parent application: Ser. No. 12/394,304, filed Feb. 2009, US. Child application: Ser. No. 13/146,318, US.