Not applicable.
Not applicable.
1. Field of the Invention
The present invention generally relates to hand held devices with a display, and more particularly to the process of selecting a desired area, a marker position, or multiple objects from the contents view associated with the display of such devices.
2. Description of the Related Art
In this specification, I use the term Area Selection operation to refer to the common user activity performed on information processing devices with visual displays for the purpose of defining and selecting a portion of the contents of a displayed file, or for the purpose of selecting multiple objects represented by icons on the display. The contents of the displayed file may be graphical, textual, media, or any other type of data that may be displayed on the device's display.
Area selection within the contents of a displayed file is typically associated with many user interface functions, including Cut and Paste, Drag and Drop, Copy, Highlight, Zoom In, and Delete. Both the Cut and Paste and Copy operations are used to select a portion of the display and copy it to another place in the same display or, via the common clipboard, into other active or inactive applications of the device. The Cut and Paste operation causes the originally selected area to be deleted, while the Copy operation preserves the originally selected area. The selected area within a graphical file is typically a bounding rectangle whose two opposite corners are specified by the user. For text documents, the area selection is a block selection operation, where the selected block is defined between two user-selected endpoints placed at two character positions within the text.
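By way of illustration, the following minimal Python sketch shows how a selection rectangle may be derived from two user-specified corners, regardless of the order in which the corners are placed; the function name and coordinate convention are illustrative assumptions, not taken from any cited system.

def rect_from_corners(x1, y1, x2, y2):
    # Return (left, top, width, height) of the bounding rectangle
    # defined by two opposite corner points, given in any order.
    left, top = min(x1, x2), min(y1, y2)
    width, height = abs(x2 - x1), abs(y2 - y1)
    return left, top, width, height

# Example: the second corner placed above and to the left of the first.
print(rect_from_corners(120, 80, 40, 20))  # -> (40, 20, 80, 60)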
For some applications, the area selection operation highlights a portion of the display which is then used as an input for further processing (e.g. speech synthesis, graphical processing, statistical analysis, video processing, etc.). Area selection can also be used to select multiple objects that are not part of a single file, where the individual graphic objects are represented by icons spread across the display.
Desktop systems typically use a pointer device like a mouse or a joystick to select the cut and paste area. Other common techniques include touch screen and voice control selections. When selecting a block of text, one can often use pre-assigned keyboard commands.
Hand held devices with a small physical display often must show a stored or computed virtual contents view that is larger than the screen view of the physical display. Since only a portion of the contents view (also called the “virtual display”) can be shown at any given time within the screen view, area selection on hand held devices poses more of a challenge than desktop area selection. This is particularly the case when the desired selected area stretches beyond the small screen view.
Today's most popular user interface in hand held devices is the touch screen display. The touch screen display enables the user to perform single-touch and multi-touch gestures (also called “touch commands”) to navigate (or “scroll”) the display as well as to activate numerous functions and links. Two main limitations affect touch screen area selection: the setting of the area corners, and placement accuracy due to the relatively wide finger tip.
When setting the corners of a selected area by touch gestures, one encounters the problem that the touch gesture may inadvertently navigate the screen (or follow a link) instead of placing the corner. Alternatively, touch gestures intended for view navigation may be mistaken for corner selection during the process. This problem is currently solved by training the user to perform precise and relatively complex touch gestures that attempt to distinguish between navigation commands and corner placement commands. This poses a major disadvantage for most users, who must spend time gaining expertise in the precise handling of their device's touch interface.
U.S. Pat. No. 7,479,948 by Kim et al. describes a method for area selection using multi-touch commands where the user touches simultaneously with several fingers to define a selected area. These unique multi-touch commands limit confusion with view navigation commands, but they are cumbersome and require extensive user training. This approach also appears to be limited to a selected area that is small enough to be fully enclosed within the screen view of the display. The complexity of using touch commands for area selection is further illustrated in US patent application 2009/0189862 by Viberg, where the operation of moving a word becomes a complex four-touch operation.
Another approach that utilizes complex touch gestures is illustrated in the article “Bezel Swipe: Conflict-Free Scrolling and Multiple Selection on Mobile Touch Screen Devices” by V. Roth and T. Turner, in CHI 2009, Apr. 4-9, 2009, Boston, Mass., USA. Bezel Swipe requires an initial gesture that starts at the bezel, a touch-insensitive frame around the boundary of the display. From that point, the user touches the screen and moves the finger to select the desired area, ending the selection process by lifting the finger. Solutions like Bezel Swipe and the patents mentioned above are particularly cumbersome when the desired selected area or objects span beyond the boundaries of the display. Selection errors are often made inadvertently, and the user must redo the selection process.
Touch based area selection of the prior art also faces the problem of inaccurate corner point positioning due to the wide contact area between the user's finger and the screen. Stylus devices with sharp tips have long been known to provide accurate positioning of selection points. US patent application 2010/0262906 by Li attempts to solve the problem of distinguishing between area selection commands and view navigation commands. It proposes a special stylus with a built-in key that transmits a special instruction to the device to perform a selection and copy command at the area touched by the stylus. US patent application 2008/0309621 by Aggarwal et al. teaches the use of a proximity based stylus which can interact with the device screen without requiring the stylus to make physical contact with the display. The area selection process is started by making a physical contact between the stylus and the display at one corner of the desired selected area. The user then hovers the stylus slightly over the display to navigate to the other corner of the selected area. The two preceding patent applications are disadvantaged by the need for a special active stylus, and they do not perform well when the selected area is much larger than the size of the screen.
U.S. Pat. No. 7,834,847 by Boillot et al. offers touch-less control of the screen of a mobile device using a sensing system for detecting special movement of the user's fingers in the space above the display. The patent teaches the use of special finger gestures to initiate area selection and cut and paste operations. This solution requires a complex and expensive system for detecting the touch-less finger gestures, and it burdens the user with the need for extensive gesture training, which is still prone to errors.
Area selection in hand held devices can also be made by a joystick or special keyboard, as illustrated in US patent application 2006/0270394 by Chin, which uses a multi-stage hardware button to activate special functions like cut and paste. The need to activate different positions of the button creates a cumbersome user interface, as the button must continuously be switched from selection mode to view navigation mode.
The view navigation system of a mobile device may utilize a set of rotation and movement sensors (like a tri-axis accelerometer, gyroscope, tilt sensor, camera tilt detector, or magnetic sensor). An early tilt and movement based view navigation system is disclosed in my U.S. Pat. Nos. 6,466,198 and 6,933,923, which have been commercialized under the trade name RotoView. This system is well adapted to navigate the device's screen view across an arbitrarily large contents view, and it provides coarse and fine modes of navigation. In fine navigation mode, relatively large orientation changes cause only small view navigation changes. Conversely, in coarse navigation mode, relatively small orientation changes cause large view navigation changes. Later examples include U.S. Pat. No. 7,667,686 by Suh, which shows how a selected area from a virtual display may be dragged and dropped. However, the '686 patent completely ignores the problem of area selection, which is central to the present invention.
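The distinction between coarse and fine navigation modes may be understood through the following simplified Python sketch; the gain values are illustrative assumptions and are not taken from the cited patents.

COARSE_GAIN = 25.0  # pixels of scroll per degree of tilt change
FINE_GAIN = 2.0     # large tilt changes yield only small scrolls

def scroll_delta(tilt_delta_deg, mode):
    # Map a tilt change (in degrees) to a scroll distance (in pixels).
    gain = COARSE_GAIN if mode == "coarse" else FINE_GAIN
    return gain * tilt_delta_deg

# A 4-degree tilt scrolls 100 px in coarse mode but only 8 px in fine
# mode, which is what permits precise positioning.
print(scroll_delta(4.0, "coarse"), scroll_delta(4.0, "fine"))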
Therefore, it would be desirable to provide methods and systems that can perform area selection on hand held devices with a display without the need for sophisticated stylus devices, proximity detectors, or special buttons. Furthermore, such methods should not require extensive user training, and they should be accurate and error free when selecting areas that are either smaller or larger than the display size.
With these problems in mind, the present invention seeks to provide intuitive, convenient, and precise area selection techniques for hand held devices with a small display.
In one embodiment of the present invention, a hand held device with a touch screen display uses a combination of both touch screen gestures and tilt and movement based view navigation modes. For normal operation, view navigation can be made by various touch gestures or by tilt and movement based view navigation. During the area selection operation, the device reserves the touch commands exclusively for the selection of the corner points of the selected area. Once the first corner is selected, the device uses tilt and movement view navigation exclusively to reach the general area of the second corner. Once the area of the second corner is reached, the user completes the area selection by touching the desired second corner. This guarantees that corner selection touch gestures cannot be wrongly interpreted as view navigation commands.
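The mode logic of this embodiment may be sketched as follows in Python; the class and method names are hypothetical and greatly simplified.

class SelectionController:
    def __init__(self):
        self.selecting = False
        self.first_corner = None

    def on_touch(self, x, y):
        # Outside selection mode, a touch may navigate or start selection;
        # inside selection mode, a touch is ONLY a corner placement.
        if not self.selecting:
            self.selecting = True
            self.first_corner = (x, y)   # touch places the first corner
            return None
        second_corner = (x, y)           # touch completes the selection
        self.selecting = False
        return (self.first_corner, second_corner)

    def on_tilt(self, dx, dy, view):
        # Tilt and movement remain the only navigation means during
        # selection, so touches cannot be misread as scroll commands.
        view.scroll(dx, dy)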
If the contents view displays text only, the selected area is essentially enclosed between two endpoints along the text. The present invention simplifies the tilt and movement based view navigation by correlating the three-dimensional tilt and movement gestures into a linear up/down move along the text, and it sets the endpoints for the selected text at word boundaries.
In yet another embodiment of the present invention, a special touch gesture provides both initiation of the area selection operation as well as the actual selection of the first corner of the selected area.
The present invention also offers marker repositioning techniques to allow precise adjustment of corner locations placed by touch commands, which are relatively inaccurate due to the wide finger tip. These techniques can be used to reposition any marker set by a touch command.
Another embodiment of the present invention offers a method for boundary adjustment of a user selected area to reduce the effect of unwanted truncation of contents. Such a contents aware method offers the user an automatic boundary adjustment choice at the end of the area selection process, eliminating the need to repeat the entire process.
These and other objects, advantages, and features shall hereinafter appear, and for the purpose of illustration, but not for limitation, exemplary embodiments of the present invention are described in the following detailed description and illustrated in the accompanying drawings.
The drawings are not necessarily drawn to scale, as the emphasis is to illustrate the principles and operation of the invention. In the drawings, like reference numerals designate corresponding elements, and closely related figures have the same number but different alphabetic suffixes.
Hand held devices typically have small screens and often need to show information contents that are larger than the size of their displays. They employ a virtual display (also called a “contents view”) which is stored in the device memory, while a part of the virtual display is shown on the physical display (the “screen view”). In many systems, the virtual display may be dynamically downloaded to the device (e.g. from the internet or externally connected devices) so that at various times only a part of the virtual display is actually stored in the device.
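A toy Python sketch of a screen view scrolling over a larger virtual display follows; the sizes and names are illustrative only.

class Viewport:
    def __init__(self, contents_w, contents_h, screen_w, screen_h):
        self.cw, self.ch = contents_w, contents_h
        self.sw, self.sh = screen_w, screen_h
        self.x = self.y = 0  # top-left of the screen view in the contents

    def scroll(self, dx, dy):
        # Clamp so the screen view never leaves the virtual display.
        self.x = max(0, min(self.cw - self.sw, self.x + dx))
        self.y = max(0, min(self.ch - self.sh, self.y + dy))

v = Viewport(2000, 3000, 320, 480)
v.scroll(500, 250)
print(v.x, v.y)  # -> 500 250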
The present invention also incorporates tilt and movement based view navigation, like the system disclosed in my U.S. Pat. Nos. 6,466,198 and 6,933,923, which have been commercialized under the trade name RotoView. Tilt and movement based view navigation essentially translates the user's three-dimensional tilts and movements of the hand held device 40 into scrolling commands along two generally perpendicular axes placed on the surface of the display. Tilt and movement gestures can also be used to move a cursor on the screen. An optional button 44, voice commands, a joystick, a keyboard, a camera based visual gesture recognition system, and other user interface means may be incorporated in the hand held device 40.
A tilt and movement sensor 108 interfaces with the processor to provide ballistic data relating to the movements and rotations (tilt changes) made by the user of the device. The ballistic data can be used by the micro-controller to navigate the screen view 42 over the virtual display 20. The ballistic data can also be used for cursor movement control. Typically, the tilt and movement sensor 108 comprises a set of accelerometers and/or gyroscopes with signal conversion for providing tilt and movement information to the processor 100. A 6-degree-of-freedom sensor, which comprises a combination of a 3-axis accelerometer and a 3-axis gyroscope, can be used to distinguish between rotational and movement data and provide more precise view navigation. It should be pointed out that tilt and movement based navigation can be implemented with only accelerometers or with only gyroscopes. Other tilt and movement sensors may be mechanical or magnetic, or may be based on a device mounted camera associated with vision analysis to determine movements and rotations.
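For illustration, one conventional way to derive stable tilt angles from such a 6-degree-of-freedom sensor is a complementary filter, sketched below in Python; the filter weight and axis conventions are assumptions made for this example.

import math

ALPHA = 0.98  # trust the gyroscope short-term, the accelerometer long-term

def accel_roll_pitch(ax, ay, az):
    # Estimate roll and pitch (degrees) from the gravity vector alone.
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def update_tilt(tilt_deg, gyro_rate_dps, accel_angle_deg, dt_s):
    # One complementary-filter step for a single rotation axis.
    return ALPHA * (tilt_deg + gyro_rate_dps * dt_s) + (1.0 - ALPHA) * accel_angle_deg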
The processor 100 can optionally access additional user interface resources such as a voice command interface 110 and a keyboard/joystick interface 114. Another interface resource may be a visual gesture interface 116, which detects a remote predefined visual gesture (comprising predefined movements of the hand, the fingers, or the entire body) using a camera or other capture devices. It should be apparent to a person skilled in the art that many variants of the block elements comprising this block diagram may be used to implement the present invention.
If step 220 detects a selection touch gesture, the area selection mode is activated at step 224, which may optionally activate a selection indicator or marker on the display, alerting the user that the device is in area selection mode. At step 230 the system uses the gesture-defined touch location (e.g., the center point of an ‘x’ shape touch gesture) as the first corner 32 of the selected area, placed at the exact touch location on the portion of the virtual display currently shown on the touch screen display. Once the first area corner 32 is selected, step 232 suspends the set of TOUCH NAV commands, allowing the tilt and movement based view navigation to work during the following selection of the second corner of the selected area. The suspension of the TOUCH NAV commands is crucial to ensure that any kind of touch detection in the following steps will be interpreted solely in the correct context of the area selection process. Step 234 offers an optional corner repositioning that can achieve more precise positioning of the area corner. The optional corner repositioning is described in greater detail below. Optional joystick or keyboard based view navigation may also be allowed to work along with the tilt and movement based view navigation during the area selection process.
The sub-process 238 is used to select the second corner 34 of the selected area. The system processes the tilt and movement based view navigation at step 240. At step 244, a temporary selected area boundary 52 is drawn from the first corner 32 to a temporary corner 54 near the center of the screen view 42, as the screen view scrolls over the virtual display 20 in response to the tilt and movement based view navigation. At step 250 the system checks for any touch command. If a touch command is not detected, the process continues along steps 240 and 244. If a touch command is detected, the touch location is used as the second corner 34 of the selected area at step 254. Step 256 offers the optional corner repositioning sub-process that achieves more precise positioning of the final selected area's corner. The final selected area 30 is drawn on the virtual display 20. At step 258 the selection mode is deactivated and the set of TOUCH NAV commands is reactivated. Finally, the system provides the selected area information to the calling application as the process ends at step 260.
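Sub-process 238 may be summarized by the following Python sketch, keyed to the step numbers above; the device and view objects are hypothetical stand-ins for the actual hardware and display interfaces.

def select_second_corner(device, view, first_corner):
    # TOUCH NAV was already suspended at step 232, so any touch here
    # is unambiguously a corner placement.
    while True:
        dx, dy = device.read_tilt_navigation()       # step 240
        view.scroll(dx, dy)
        temp_corner = view.center_in_contents()      # step 244: temporary
        view.draw_temp_boundary(first_corner, temp_corner)
        touch = device.poll_touch()                  # step 250
        if touch is not None:
            second_corner = view.to_contents(touch)  # step 254
            break
    view.draw_final_selection(first_corner, second_corner)
    device.resume_touch_nav()                        # step 258
    return first_corner, second_corner               # step 260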
If step 280 detects a selection command, the selection mode is activated at step 282 and the set of TOUCH NAV commands is suspended as explained earlier. The system now executes steps 286 and 290 to determine the location of the first corner 32 of the selected area. At step 286, the system scrolls the display by tilt and movement based view navigation to reach the desired virtual display area in which to place the first corner point. Step 286 may optionally activate a blinking marker or an enlarged crosshair marker at the display's center, alerting the user that the device has entered the selection mode and a selection of the first corner 32 is needed. At step 290 the system checks if a touch was detected. If a touch is not detected, the user continues to navigate toward the location of the first corner 32 at step 286.
If step 290 detects a touch, the system uses the touch location to place the first corner 32 at step 292. Step 294 offers the optional corner repositioning sub-process that achieves more precise positioning of the selected corner 32. The sub-process 238 described above is then used to select the second corner 34 and complete the area selection.
The area selection techniques described above are based on a rectangular boundary that is defined by two opposite corners with a base parallel to the bottom of the display. It should be clear that the teaching of the present invention can easily be extended to area selection that uses other geometrical shapes. In the case of polygon-like shapes that use more than two corners, the extension of the present invention requires orderly repetition of steps 238 and 296 described above, once for each additional corner.
It appears that for a small area selection which is fully visible within the screen view, one may perform the processes described above with little or no view navigation, since both corners of the selected area can be reached by touch within the same screen view.
Common applications like word processors require area selection from a virtual display that may contain only text. Some of these applications may have a virtual display 20 with text line widths larger than the width of the screen view 42. In such cases the selection of a text block can be made in a manner similar to the embodiments of the present invention described above.
The user initiates the text block selection process by a touch gesture at point 70 when the desired section of the text area is shown in the screen view 42. The touch gesture may be shaped as a virtual letter ‘x’, and the first endpoint 70 may be selected as the inter-word space nearest to the gesture's ‘x’ center location. The system enters text selection mode, where the set of TOUCH NAV commands is suspended and the user can use the tilt and movement based view navigation to scroll the display. As the user scrolls the display downwards, a temporary endpoint 72 is placed at or near the center of the screen view 42, and the text block 74 from the starting endpoint 70 to the temporary endpoint 72 is highlighted. Once the desired second endpoint of the selection block 78 appears anywhere on the screen view, the user touches this endpoint's location and completes the text block selection process.
Since the virtual display 20 is adjusted to fit the width of the screen view 42, there is no need for horizontal navigation of the temporary endpoint 72. Therefore, it is possible to map the two-axis view navigation obtained from the tilt and movement sensor into a single axis along the character list of the text. For a left to right language like English, both roll rotation 64 to the right and pitch rotation down 66 (or movements to the right 65 and down 67) are translated into downwards text scrolling. Roll rotation to the left and pitch rotation up are similarly translated into upwards text scrolling. For a right to left language like Hebrew, both roll rotation 64 to the left and pitch rotation 66 down are translated into downwards text scrolling. Roll rotation to the right and pitch rotation up are similarly translated into upwards text scrolling. The tilt and movement based view navigation of the present invention is particularly useful when the length of the text block is longer than the height of the screen view 42.
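The axis mapping just described may be expressed by the following Python sketch; the sign conventions (roll right and pitch down as positive) are assumptions for illustration.

def text_scroll_direction(roll, pitch, rtl=False):
    # Collapse two-axis tilt input into one text scrolling axis:
    # +1 scrolls the text down, -1 scrolls it up, 0 holds still.
    horizontal = -roll if rtl else roll  # mirror the roll sense for
                                         # right-to-left languages
    combined = horizontal + pitch        # pitch down assumed positive
    if combined > 0:
        return +1   # downwards text scrolling
    if combined < 0:
        return -1   # upwards text scrolling
    return 0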
If step 320 detects a selection gesture, the text selection mode is activated at step 324, which may optionally activate a selection indicator or marker on the display, alerting the user that the device is in text selection mode. The set of TOUCH NAV commands is suspended at step 324 as explained earlier. At step 328 the system uses the finger touch location (e.g., the center point of an ‘x’ shape touch gesture) as the first endpoint 70 of the text block selection. The system may set the block endpoint at the inter-word space nearest to the gesture location.
The system now executes steps 340, 344, 354, 358, 362 and 366 to allow the user to select the second endpoint of the selected block. Steps 340 and 344 detect the user's tilt and movement based view navigation commands, and steps 354 and 358 respond to these commands by scrolling the text up or down. Assuming the text language is English, if at step 340 the system detects a tilt and movement up or to the left, it scrolls the text list of characters up at step 354. If at step 344 the system detects a tilt and movement down or to the right, it scrolls the text list of characters down at step 358. After each scrolling action, step 362 sets the temporary endpoint 72 generally at the screen view center, and the block of text 74 between endpoints 70 and 72 is highlighted.
At step 366 the system checks for a touch command. If a touch command is not detected, the scrolling process described in the previous paragraph is repeated. Once a touch is detected, the finger touch location is used as the second endpoint 78 of the selected block at step 370. The system may set the endpoint 78 at the inter-word space nearest to the finger touch location. The final text block selection is highlighted on the virtual display. At step 374 the text selection mode is deactivated, and the set of TOUCH NAV commands is reactivated. The system provides the selected text block information to the calling process as the process ends at step 380.
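Setting an endpoint at the nearest inter-word space may be done as in this small illustrative Python helper, which treats the text as a flat list of characters.

def snap_to_word_boundary(text, index):
    # Return the index of the whitespace character closest to `index`,
    # or the original index if the text contains no whitespace.
    spaces = [i for i, ch in enumerate(text) if ch.isspace()]
    if not spaces:
        return index
    return min(spaces, key=lambda i: abs(i - index))

print(snap_to_word_boundary("select this block of text", 8))  # -> 6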
In one corner repositioning embodiment, the user drags her finger 46 across the screen view 42 to fine-tune the corner point position. When the user reaches the exact corner point position, she lifts her finger 46 from the screen 42 to fix the corner.
In another embodiment of the present invention, the user can perform corner repositioning using tilt and movement based cursor control set to a fine navigation mode.
A corner repositioning elapsed timer may optionally be started at step 406. Step 408 activates the tilt and movement based cursor control to move the crosshair marker, setting it to a fine response mode in which relatively large tilts and movements of the hand translate into small movements of the crosshair cursor, hence the increased placement accuracy. The system performs the corner repositioning via the loop of steps 410, 412 and 414. At step 410, the system continuously uses the tilt and movement based cursor control to move the crosshair. Corner repositioning mode can be terminated by a touch command, detected at step 412, or at the expiration of the optional timer at step 414.
Once the corner point is placed at the exact desired location, the user touches the screen in the vicinity of the corner point to end the corner repositioning mode at step 412. It should be noted that the exact location of the touch that ends the corner repositioning mode does not change the crosshair marker position. The position of the crosshair marker is fixed and replaced by the final corner at step 416, and the corner repositioning mode is reset at step 418. This completes the repositioning process at step 420.
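The repositioning loop of steps 406-420 may be sketched in Python as follows; the device and crosshair objects, and the timeout value, are hypothetical.

import time

def reposition_corner(device, crosshair, timeout_s=5.0):
    deadline = time.monotonic() + timeout_s       # step 406 (optional)
    device.set_cursor_mode("fine")                # step 408
    while True:
        dx, dy = device.read_tilt_cursor()        # step 410: fine moves
        crosshair.move(dx, dy)
        if device.poll_touch() is not None:       # step 412: touch ends
            break                                 # (touch spot irrelevant)
        if time.monotonic() >= deadline:          # step 414: timer expired
            break
    final_corner = crosshair.position()           # step 416
    device.set_cursor_mode("normal")              # step 418
    return final_corner                           # step 420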
Another embodiment of the present invention provides automatic boundary adjustment of the area selection to reduce the effect of unwanted truncation of the contents within the selected area. This contents aware area boundary adjustment helps avoid the need to repeat the area selection process. This embodiment of the present invention is applicable to any computerized system with any type of display where an area selection operation is performed.
In step 444, the contents of the input area boundary and its immediate surrounding area are decomposed into recognizable shapes, which are placed into the shapes list. These recognizable shapes include primitive geometric shapes as well as more complex shapes. Complex implementations may utilize advanced expert system techniques known in the art, which provide learning capabilities and dynamically expand the database of recognizable shapes. Such dynamic update methods may add unrecognized shapes remaining after the decomposition process, possibly following a connectivity analysis to determine that the unrecognized shape(s) create a unique aggregation of a new shape.
If the decomposition process at step 444 fails, the system is adapted to abort the automatic correction program at step 445. A failure of the decomposition process occurs if no recognizable shapes are detected within the input boundary or if the number of recognizable shapes exceeds a certain overflow limit. A copy of the complete shapes list is retained at step 446 for subsequent connectivity analysis. Every shape in the recognizable shapes list is analyzed in step 450 to determine if it is truncated by the input area boundary. Each shape that is not truncated is removed from the shapes list. Step 454 checks if the shapes list is empty. If the list is empty, there is no need to adjust the boundary since there are no recognized truncated shapes, and the program ends at step 480.
If step 454 finds that the shapes list is not empty, the program runs a connectivity analysis of each truncated shape in the recognizable shapes list at step 458. Here the program uses the copy of the full recognizable shapes list made at step 446 to determine if the truncated shape is connected to any other shapes within the input area boundary. Truncated shapes that are not connected (like shapes 96 and 98 in the accompanying drawings) are removed from the shapes list.
If the recognizable shapes list is not empty, the program proceeds to adjust the modified boundary along steps 464 and 465. At step 464, the program removes a connected truncated shape from the shapes list and attempts to increase the modified boundary until it encloses the truncated shape. If the currently increased boundary does not reach the end of the virtual display and does not exceed a preset limit, the currently increased boundary replaces the last modified boundary. Otherwise, the last modified boundary is restored and the process continues, recognizing that the just-removed shape will remain truncated. This may result in a partial correction which still achieves the objective of reducing the number of truncated shapes. Step 465 causes step 464 to repeat until the recognizable shapes list becomes empty, so that step 464 may continuously increase the modified boundary to enclose as many truncated and connected shapes as possible.
When the recognizable shapes list is finally empty, step 466 compares the modified boundary with the input area boundary. If the modified boundary remains the same as the input area boundary, the process aborts. If the modified boundary has changed, step 470 displays the larger modified boundary 31 together with the originally selected area 30, allowing the user to choose whether to accept the adjusted boundary.
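A greatly simplified Python sketch of this boundary adjustment follows, modeling every recognized shape by its bounding box; the decomposition and connectivity analysis are abstracted away, and the expansion limit is an assumed parameter.

def is_truncated(shape, sel):
    # Rectangles are (left, top, right, bottom). A shape is truncated
    # if it overlaps the selection without being fully inside it.
    l, t, r, b = shape
    L, T, R, B = sel
    overlaps = l < R and r > L and t < B and b > T
    inside = l >= L and r <= R and t >= T and b <= B
    return overlaps and not inside

def adjust_boundary(sel, shapes, limit):
    # Grow `sel` to enclose truncated shapes, never beyond `limit`
    # (steps 464-465); return None if no change was possible (step 466).
    modified = list(sel)
    for s in [s for s in shapes if is_truncated(s, sel)]:
        candidate = [min(modified[0], s[0]), min(modified[1], s[1]),
                     max(modified[2], s[2]), max(modified[3], s[3])]
        within = (candidate[0] >= limit[0] and candidate[1] >= limit[1]
                  and candidate[2] <= limit[2] and candidate[3] <= limit[3])
        if within:
            modified = candidate  # accept the larger boundary
        # otherwise this shape simply remains truncated
    return tuple(modified) if tuple(modified) != tuple(sel) else None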
The corner repositioning method described above can be extended for use with any marker placed inaccurately on a hand held device with a touch screen display due to the inherent thickness of the finger tip.
If the repositioning command timer expires, the program quits without performing the repositioning. Alternatively, the period of time during which the system waits for the repositioning command may be terminated by a user touch command, detected at step 514. If this alternative approach is taken, the touch command that terminates the period must be different from any touch gesture that may be used for the repositioning command. If the marker repositioning command is not a touch gesture, then any touch command detected at step 514 will quit the program without performing the marker repositioning. A combination of both timer expiration and a touch termination command can work well with the present invention.
The description above contains many specifics and, for purposes of illustration, has been described with reference to specific embodiments. However, the foregoing embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Therefore, these illustrative discussions should not be construed as limiting the scope of the invention, but as merely providing embodiments that better explain the principle of the invention and its practical applications, so that a person skilled in the art can best utilize the invention with various modifications as required for a particular use. It is therefore intended that the appended claims be interpreted as including all such modifications, alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application claims the benefit of provisional patent application Ser. No. 61/470,444, filed 2011 Mar. 31 by the present inventor, which is incorporated by reference.