Systems and Methods for Interacting with a Computer Through Handwriting to a Screen

Information

  • Patent Application
  • Publication Number
    20130271409
  • Date Filed
    June 11, 2013
  • Date Published
    October 17, 2013
Abstract
Systems and methods are described that enable a user to: select a control with a handwritten stroke at least part of which resides outside of a selectable area of the control; use a moving-input control without having to make a selection other than handwriting on, over, or near the control; and/or delete text displayed on an electronic form by handwriting over that text.
Description
BACKGROUND

Many computing devices, such as hand-held computers, PDAs, and Palm Pilots™, enable users to interact with the device by handwriting over the device's screen. This handwriting may be converted into text or a command that the device can understand.


Interacting with a computer through handwriting, however, can be counter-intuitive and problematic. Take, for instance, how users often select a control, such as a check box or radio button. Users may select a check box by “tapping” a stylus point within the box. Tapping within the box can be counter-intuitive because tapping may have to be learned; it is not like writing on a paper form, with which most users are already comfortable. Also, tapping to select a check box can be difficult on a small screen as the box into which a user taps may be quite small.


Take also, for instance, how users often interact with moving-input controls, like drag-and-move or drawing controls. When a user is handwriting in a mode that allows the handwriting to be interpreted as text, a user may none-the-less want to draw or use a control having a moving input. To do so, often a user must “tap-and-hold” the control. Suppose, for example, that a user is attempting to handwrite text into an existing word-processing document. Suppose also that the user wishes to scroll down to a particular place in the document. To do so, the user can use a slider-bar control. To use this control and scroll through the document, often the user must tap on the slider-bar and hold that tap down until the computer recognizes that the user is attempting to use the slider-bar rather than enter text. Having to tap and hold a control before using it can be counter-intuitive and difficult, especially for small controls on small screens.


These and similar problems can make interacting with computing devices through handwriting difficult and/or counter-intuitive.


SUMMARY

Systems and methods (“tools”) are described that, in at least some embodiments, make interacting with a computing device through handwriting more intuitive and/or effective.


In some embodiments, for instance, these tools enable a user to select a control with a handwritten stroke at least part of which resides outside of a selectable area of the control.


In other embodiments, for instance, these tools enable a user to use a moving-input control without having to make a selection other than handwriting on, over, or near the control. In doing so, the tools may determine that the user intends the handwriting to be treated as input to a moving-input control rather than recognized as text.


In still other embodiments, for instance, the tools enable a user to delete text displayed on an electronic form by handwriting over the text.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary architecture having a computing device and exemplary applications and a screen shot illustrating an exemplary data-entry form.



FIG. 2 sets forth a flow diagram of an exemplary process for enabling a user to select a control.



FIG. 3 illustrates the exemplary data-entry form of FIG. 1 and a screen shot, the screen shot showing the data-entry form after handwriting has been received and displayed.



FIG. 4 illustrates the exemplary screen shot of FIG. 3 and a bounded writing area for the handwriting.



FIG. 5 illustrates the exemplary data-entry form of FIG. 1 and a screen shot showing the data-entry form after handwriting has been received and displayed.



FIG. 6 sets forth a flow diagram of an exemplary process for enabling a user to use a moving-input control.



FIG. 7 illustrates the exemplary data-entry form of FIG. 1 and a screen shot showing a path of handwriting made on the data-entry form.



FIG. 8 illustrates the exemplary data-entry form of FIG. 1 and a screen shot showing a scroll down from the screen shot shown in FIG. 7.



FIG. 9 sets forth a flow diagram of an exemplary process for enabling a user to delete text displayed on a screen by handwriting over that text.



FIG. 10 illustrates the exemplary data-entry form of FIG. 1 and a screen shot showing text displayed in a data-entry field of the form with handwriting displayed over some of the text.





The same numbers are used throughout the disclosure and figures to reference like components and features.


DETAILED DESCRIPTION
Overview

Systems and methods (“tools”) described below can, in at least some embodiments, make interacting with a computing device through handwriting more intuitive and/or effective.


In one embodiment, for instance, a user is able to select a control with a handwritten stroke at least part of which resides outside of a selectable area of the control. By so doing, users may select a control without needing to tap inside a box or button of the control.


In another embodiment, for instance, a user is able to use a moving-input control without having to make a selection other than handwriting on, over, or near the control. The tools may determine, based in part on a geography of a user's handwriting, that the user intends the handwriting to be treated as input to a moving-input control rather than recognized as text.


Also, the tools may enable a user, in still another embodiment, to delete text displayed on an electronic form. The user may be able to delete text in a data-entry field, for instance, by handwriting over the text in the field.


Exemplary Architecture

Referring to FIG. 1, an exemplary system/architecture 100 is shown having an exemplary computing device 102 with a processor 103, a tablet screen 104, a stylus 106, a data-entry form 108, and computer-readable media comprising: auto selector application 110; moving-input selector application 112; and scratch-out selector application 114. This architecture 100 and its components are shown to aid in discussing the tools but are not intended to limit their scope or applicability.


The computing device comprises hardware and software capable of communicating with or executing the auto selector, the moving-input selector, and/or the scratch-out selector. The computing device is also capable of communicating with a user through the tablet screen. The tablet screen is capable of presenting this and/or other data-entry forms to a user and receiving input from the user. The tablet screen can receive input from a user handwriting over the tablet screen with the stylus, for instance. Other types of screens and input manners may also be used. In another embodiment, a display screen is used that displays handwriting not necessarily written directly over the display screen itself. In this embodiment, the architecture is capable of receiving handwriting from a user through another device (not shown), the handwriting being made to, but not over, the display screen, such as handwriting made with a mouse.


Data-entry form 108 comprises multiple data-entry fields and text explaining them. It is, however, just one example of many types of user-input manners that may be used herein. Other types of user-input manners may comprise dialogs, such as those for saving a file, selecting an option, entering information, and the like; word-processing documents; tables; and other manners capable of enabling receipt of input from a user.


The auto selector, moving-input selector, and scratch-out selector applications may operate separately or in combination and comprise computer-readable media executable by a computing device, such as computing device 102. These applications are capable of performing various acts described below.


Enabling a User to Select a Control


FIG. 2 shows an exemplary process 200 for enabling a user to select a control, such as with handwriting at least part of which resides outside of a selectable area of the control. This process is illustrated as a series of blocks representing individual operations or acts performed by elements of architecture 100, such as auto-selector 110. This and other processes described herein may be implemented in any suitable hardware, software, firmware, or combination thereof. In the case of software and firmware, these processes represent sets of operations implemented as computer-executable instructions.


At block 202, the architecture displays a data-entry form having selectable controls. These selectable controls can comprise radio buttons, check boxes, and the like. Each control may have a selectable area, such as the box of a check box or a button of a radio button, through which a user may select the control by tapping a stylus point within the selectable area. Often, if the selectable area indicates that the control has already been selected, the user's selection acts to deselect the control; both selection and de-selection may represent a user's selection of the control. For radio buttons, for instance, selecting or deselecting one of the buttons may be treated as a selection or de-selection of another of the radio buttons.


A display of exemplary selectable controls is shown in FIG. 1. There, various examples of controls having selectable areas are shown, including a check box control 116 and four radio button controls 118, 120, 122, and 124. The check box control has a selectable box area 126. The radio button controls have selectable button areas 128, 130, 132, and 134.


At block 204, the architecture receives handwriting. This handwriting can comprise one or more handwriting strokes made to a screen by a user. The auto selector can receive the handwriting stroke from various devices or software, such as directly from tablet screen 104.


As an example, consider FIG. 3. There, data-entry form 108 is shown along with a screen shot 300 showing the data-entry form after handwriting 302 has been received and displayed over the form. Here, the handwriting is a single stylus stroke received from a user through tablet screen 104. In another embodiment, the handwriting is received from a user through another device, such as a mouse that enables handwriting to be made to a screen without necessarily requiring a user to handwrite over that screen.


At block 206, the auto selector may determine if the handwriting is near a selectable area of a control. By so doing, the auto selector can determine that a user may intend to select a control even if the handwriting received does not initiate within a selectable area, cease within the selectable area, and/or intersect the selectable area. Thus, unlike the “tap” stroke described in the Background section above, handwriting may be used to select a control without a user's stylus having to intersect or start, stay, and/or stop in the selectable area.


In some cases the architecture may determine that the handwriting intersects the selectable area of the control and also is of a certain type. If this type comprises one or more handwritten strokes intended to delete information (such as those set forth at block 908 of process 900 below), the auto selector may de-select the control at block 212 (described below).


Continuing the illustrated embodiment, handwriting 302 is determined to intersect selectable box area 126 shown in FIG. 3.


At block 208, alternatively or additionally to block 206, the auto selector geometrically bounds at least a portion of the handwriting received, thereby generating a bounded writing area. The auto selector may, for instance, bound a beginning, middle, and/or end of the handwriting received. In one embodiment, a bounding-type algorithm is used.


Consider, for example, FIG. 4. In this illustrated embodiment, the auto selector computes a bounded writing area 402 for handwriting 302 shown over data-entry form 108. This is shown for explanation, and so may not be shown to a user. In this example, the bounding-type algorithm generates a rectangle—although any suitable shape may be used.
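
For illustration only, the following sketch shows one way such a bounded writing area might be computed from the sampled points of a stroke; the point representation and function name are assumptions, not part of this disclosure.

```python
from typing import List, Tuple

Point = Tuple[int, int]  # (x, y) screen coordinates of a sampled stroke point
Rect = Tuple[int, int, int, int]  # (left, top, right, bottom)

def bound_stroke(points: List[Point]) -> Rect:
    """Return an axis-aligned bounding rectangle enclosing every sampled
    point of a handwriting stroke."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)
```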


At block 210 (FIG. 2), the auto selector compares the bounded writing area with control geometries for selectable controls. These control geometries can comprise selectable areas of the controls, such as selectable box area 126 of FIGS. 1, 3, and 4. These control geometries can also comprise areas associated with the control, such as an area occupied by text describing the control.


Continuing the illustrated and described embodiment of FIG. 4, the auto selector compares the bounded writing area occupied by the bounding rectangle against selectable areas of data-entry form 108, such as selectable box area 126 and selectable button areas 128 through 134. In the illustrated example, all of the selectable box area overlaps the bounding rectangle.


At block 212, the auto selector selects the control. In two of the above-described embodiments, the auto selector selects the check box control 116. In one of these embodiments, it does so because the handwriting intersects the selectable box area (see block 206).


In another, it does so by comparing control geometries for selectable controls with a bounded writing area for the handwriting. In this embodiment (see FIG. 4), the auto-selector compares the overlapping areas, picking the field with the largest overlap (here only the selectable box area overlaps with the bounded writing area). If there is no overlap, the auto selector can, for example, select a selectable control (or data-entry field) closest to the bounding rectangle, ignore the handwriting, or inform the user as to how to select controls through a dialog box.
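
A minimal sketch of this comparison follows, assuming rectangles for both the bounded writing area and the control geometries; the helper names and the nearest-control fallback are illustrative only, not the claimed implementation.

```python
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom)

def overlap_area(a: Rect, b: Rect) -> int:
    """Area of intersection between two rectangles; 0 if they do not overlap."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def rect_distance(a: Rect, b: Rect) -> float:
    """Shortest gap between two rectangles; 0 if they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def pick_control(writing: Rect, controls: Dict[str, Rect]) -> str:
    """Pick the control whose selectable area overlaps the bounded writing
    area the most; if nothing overlaps, fall back to the nearest control."""
    best = max(controls, key=lambda c: overlap_area(writing, controls[c]))
    if overlap_area(writing, controls[best]) > 0:
        return best
    return min(controls, key=lambda c: rect_distance(writing, controls[c]))
```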


The auto-selector may, however, balance and/or rely on both of these manners of selecting a control.


As another example, consider FIG. 5. There, data-entry form 108 is shown along with a screen shot 500 showing the data-entry form after handwriting 502 has been received and displayed over the form. Here, the handwriting is a single stylus stroke roughly comprising a circle. Following the above process 200, the auto selector enables a user to select a control and receives the handwriting 502 to that end. The auto selector may then follow block 206 and/or blocks 208 and 210 before proceeding to block 212. In this example, the handwriting intersects selectable button area 128. On this basis alone, the auto selector may select the corresponding button control 118. The auto selector may also determine that a bounded writing area of the handwriting overlaps much more of the selectable button area 130 of the button control 120 than of the area 128. The auto selector may balance these conflicting manners of selecting a control, in this embodiment by selecting button control 120. As this example shows, the architecture enables a user to select a control without tapping on the control and without the handwriting of the user intersecting that control.


Returning to process 200, the architecture may indicate its selection graphically (not shown), such as by placing an X or check mark in a check box or coloring in a radio button.


Enabling Automatic Use of a Moving-Input Control


FIG. 6 shows an exemplary process 600 for enabling a user to use a moving-input control, such as a slider-bar or drawing control, without having to make a selection other than handwriting on, over, or near the control. This process is illustrated as a series of blocks representing individual operations or acts performed by elements of architecture 100, such as moving-input selector 112.


At block 602, architecture 100 displays a data-entry form having moving-input control(s). Each control has a moving-input area through which a user may interact with the control. A drawing control, for instance, may comprise a drawing space for receiving a user's input to make a drawing. A slider-bar control may comprise a scrolling area for receiving a user's input.


For example, consider FIG. 7. FIG. 7 shows a screen shot 700 of data-entry form 108 having a slider-bar control 702. The slider-bar control comprises a scrolling area 704, in which slider bar 705 may slide, for receiving a user's input to scroll through the form.


At block 604, the architecture (e.g., moving-input selector 112) determines regions of a screen into which a moving input to a control may be made. These regions may map exactly or substantially to moving-input areas that are displayed, such as the scrolling area shown in FIG. 7. The architecture may identify these regions geographically, such as by which pixels occupy the regions.


Following block 604, two exemplary embodiments of the process 600 are described. The first embodiment is described as part of blocks 606 and 608. The second embodiment is shown with dashed lines in FIG. 6 and described as part of blocks 610, 612, 614, and 608.


At block 606, the architecture interprets handwriting received to a region determined to permit moving input as moving input to a control associated with that region. The architecture may do so based on where handwriting input begins, for instance. Thus, if handwriting begins within the moving-input region, it may be interpreted as input to the moving-input control. Conversely, if handwriting begins outside of the region but then intersects the region, it may not be interpreted as input to the moving-input control. In this case, the tools enable a user to have his or her handwriting interpreted as text or moving input without the user having to make another selection other than where the user begins handwriting. As part of or preceding block 606, the handwriting may be received while in a mode permitting handwriting to be interpreted as text.


The architecture enables this region to be used to input text or moving input with handwriting without additional user interaction, such as the user selecting to switch away from a mode generally for interpreting handwriting as text or tapping and holding on a control.


The region determined to permit moving input may map exactly or approximately to an area or graphic associated with the moving-input control. In the illustrated example, the region maps to an area occupied by scrolling area 704 of FIG. 7. In this case, a user may handwrite over the scrolling area and have his or her handwriting be interpreted as text or as moving input, based on whether or not the handwriting began in the scrolling area.
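
As a sketch of this first embodiment, the routing decision can be made from the stroke's start point alone; the region representation and the names below are assumptions for illustration, not the disclosed implementation.

```python
from typing import List, Tuple

Point = Tuple[int, int]
Rect = Tuple[int, int, int, int]  # (left, top, right, bottom)

def point_in_rect(p: Point, r: Rect) -> bool:
    return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]

def route_stroke(stroke: List[Point], moving_input_regions: List[Rect]) -> str:
    """Return 'moving-input' if the stroke begins inside a region that accepts
    moving input; otherwise 'text', so the stroke goes to handwriting recognition."""
    start = stroke[0]
    if any(point_in_rect(start, region) for region in moving_input_regions):
        return "moving-input"
    return "text"
```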


At block 608, the architecture inputs the interpreted handwriting to a moving-input control. The effect of this input is shown in FIG. 8, where screen shot 800 shows the form scrolled down from its previous position (the handwriting input by the user to the slider-bar control is not displayed). The architecture can input the interpreted handwriting continuously or discontinuously; continuous input enables, in this case, the form to be scrolled down contemporaneously with the user's handwriting.
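
One plausible way to apply the stroke continuously is to forward each new stylus sample to the control as it arrives; the scale factor and names here are assumptions for illustration only.

```python
def scroll_from_stroke(prev_y: float, new_y: float, scroll_pos: float,
                       lines_per_pixel: float = 0.25) -> float:
    """Advance the document's scroll position from the latest stylus sample so
    the form scrolls contemporaneously with the stroke along the slider bar."""
    return scroll_pos + (new_y - prev_y) * lines_per_pixel
```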


Additional handwriting to the screen may be interpreted as moving input, text, or otherwise. If the user writes another handwriting stroke on the screen, it may be interpreted in a same or different way. Thus, a user may handwrite for interpretation as text, then handwrite for interpretation as a moving input (such as described above), and then go back to handwriting for interpretation as text, all without having to make additional input other than the handwriting itself.


The second embodiment of process 600 follows blocks 610, 612, 614, and 608. At block 610, the architecture receives handwriting while in a mode permitting the handwriting to be interpreted as text. This handwriting may be communicated between elements of the architecture, such as between tablet screen 104 and the moving-input selector, and may comprise indicia for handwriting strokes recognizable as text or otherwise.


Also, the handwriting and/or its indicia may comprise a first portion of a handwriting stroke that is being received, such as a first pixel of the handwriting stroke. This handwriting can be received while in a text-permitting mode; it does not have to be received in a mode in which handwriting is generally not interpreted as text, such as when a user selects out of a text-permitting mode by tapping and holding on a control. At this block 610, the architecture may receive only a small portion of the handwriting eventually received before proceeding to select, interpret, and/or input the handwriting to a control, as set forth in blocks 612, 614, and 608 described herein.


To illustrate a handwriting stroke all of which has been received, consider handwriting 706 of FIG. 7. This handwriting is shown with a dotted stroke to illustrate handwriting input from a user, though moving-input selector 112 may instead not show the handwriting other than through its effect on a control, such as by having slider bar 705 move and the electronic form scroll. If this handwriting were to be interpreted as text, it could be displayed as a solid-line stroke and might be interpreted as an “I”, i.e., text, rather than as moving-input to slider-bar control 702. Moving-input selector 112 may instead interpret the handwriting as input to the slider-bar control. In this case, the handwriting may not be displayed and the handwriting may immediately be used as input to the moving-input control.


At block 612, the moving-input selector selects, responsive to the handwriting received, a moving-input control. This handwriting received may comprise the first portion of the handwriting stroke being received. The moving-input selector can make this selection based on a geographic relation between the handwriting and a region of a screen into which a moving input to a control may be made. This geographic relation can be based on the handwriting intersecting or residing near one of these regions. Alternately or additionally, a small or first-received portion of the handwriting, such as the first pixel, can be analyzed to make the selection. By so doing, the moving-input selector can select the control quickly and enable future-received handwriting, such as a remaining portion of a handwriting stroke, to quickly be used as input to the selected control.


The moving-input selector selects the slider-bar control based on a determination that the start point of the handwriting intersects the scrolling area of the slider-bar control, the effect of which is shown with the illustrated example (the illustrated example also shows effects of other embodiments, such as the first embodiment of process 600).


The moving-input selector determines a geographic relation between handwriting 706 and scrolling area 704 of the slider-bar control. In this case, a start point 708 of handwriting 706 (shown in FIG. 7) is compared with the scrolling area and found to intersect it. In other embodiments, however, handwriting may begin outside the moving-input area and then intersect the moving-input area. How quickly the handwriting intersects, or a distance between the start point and a first intersection point, may be used to determine whether or not the user intends his or her handwriting to be interpreted as input to a control rather than text.


In one embodiment, for instance, handwriting begun within three pixels or one millimeter (whichever is more) that intersects a moving-input area within another six pixels or two millimeters (whichever is more) is interpreted as a moving input to the control having this moving-input area.
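
A sketch of this threshold test follows, assuming distances are measured against the moving-input area's rectangle and stroke travel is accumulated point to point; the pixels-per-millimeter conversion and helper names are assumptions.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]  # (left, top, right, bottom)

def dist_to_rect(p: Point, r: Rect) -> float:
    """Distance from a point to a rectangle; 0 if the point is inside it."""
    dx = max(r[0] - p[0], 0.0, p[0] - r[2])
    dy = max(r[1] - p[1], 0.0, p[1] - r[3])
    return (dx * dx + dy * dy) ** 0.5

def is_moving_input(stroke: List[Point], area: Rect, px_per_mm: float = 4.0) -> bool:
    """Start threshold: 3 px or 1 mm, whichever is more.
    Intersection threshold: another 6 px or 2 mm of stroke travel."""
    start_limit = max(3.0, 1.0 * px_per_mm)
    travel_limit = max(6.0, 2.0 * px_per_mm)
    if dist_to_rect(stroke[0], area) == 0:
        return True                      # stroke began inside the moving-input area
    if dist_to_rect(stroke[0], area) > start_limit:
        return False                     # stroke began too far from the area
    travelled = 0.0
    for prev, cur in zip(stroke, stroke[1:]):
        travelled += ((cur[0] - prev[0]) ** 2 + (cur[1] - prev[1]) ** 2) ** 0.5
        if travelled > travel_limit:
            return False                 # did not reach the area soon enough
        if dist_to_rect(cur, area) == 0:
            return True                  # intersected the area within the limit
    return False
```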


In still other embodiments, the moving-input selector may use a bounding-type algorithm to compute a bounded writing area (e.g., a bounding rectangle) of part or all of a handwriting. The moving-input selector can then compare this bounded writing area with regions of the screen into which the handwriting is made to make a selection.


At block 614, the architecture, responsive to the selection of the control, interprets handwriting as input to that control. The architecture can interpret the handwriting received and used to make the selection as input to the control (e.g., the first portion of the handwriting stroke), additional handwriting received after making the selection (e.g., a second portion or remainder of the handwriting stroke), or both. The architecture may do so without reliance on input from a user other than the handwriting itself; in other words, the architecture may interpret handwriting as input to a moving-input control without a user having to first select the control or select that his or her handwriting not be interpreted as text, such as with a tap-and-hold input.


The moving-input selector selected the slider-bar control based on a determination that the start point of the handwriting intersects the scrolling area of the slider-bar control. Responsive to this selection, the architecture interprets handwriting received after the start point that is part of the same handwriting stroke as a command to the control and thus scrolls down through the electronic document.


At block 608, the architecture inputs the interpreted handwriting to the moving-input control, in this case after selecting the moving-input control. The effect of this input is shown in FIG. 8.


The receiving done at block 610, the selecting done at block 612, the interpreting at block 614, and the inputting at this block may be performed quickly and automatically. By so doing, the architecture may receive a first and/or small portion of a user's handwriting and, as handwriting is continuing to be received, select a moving-input control into which to input the handwriting as it is received.


In one embodiment, the actions described in blocks 610, 612, 614, and 608 or 606 and 608 are performed automatically and/or seamlessly; the user simply strokes his or her stylus along a slider bar and sees the slider bar move and the electronic form scroll. Thus, without requiring a user to tap and hold over a moving-input control, the tools may automatically select a control and treat the user's handwriting as moving input to that control.


Enabling a User to Delete Text


FIG. 9 shows an exemplary process 900 for enabling a user to delete text displayed on a screen. This process is illustrated as a series of blocks representing individual operations or acts performed by elements of architecture 100, such as scratch-out selector 114.


At block 902, architecture 100 displays text, such as letters or numbers, on a screen. For purposes of the process 900, the text may be displayed as part of a structured or unstructured electronic document, such as tables, data-entry forms having data-entry fields, word-processing documents, and/or dialog box fields. This text may have been converted from prior handwriting or otherwise. This text is not, however, handwriting that has not yet been recognized and converted into text.


At block 904, the architecture receives handwriting at least part of which is made over the displayed text. The handwriting may comprise a single handwriting stroke or multiple strokes. Also, in one embodiment, the handwriting is received without the user having to first select a data-entry field in which the text is displayed or otherwise indicate a cursor location in the field. In this embodiment, the user may simply handwrite over text.


Consider, for example, FIG. 10. In this figure, a screen shot 1000 of electronic data-entry form 108 is presented having handwriting 1002 over text 1004 in a data-entry field 1006.


At block 906, the scratch-out selector, responsive to handwriting being made over the displayed text, selects at least part of the text. This selected text can comprise multiple characters, a single or multiple words, a single or multiple sentences, and the like. In the illustrated embodiment, the selected text is a single word 1008 (“James”). The scratch-out selector may determine what part of the text is selected without interaction with the user other than the handwriting received. Thus, the user need only handwrite over the text that he or she wishes to delete; the user does not need to select the text before handwriting over it.
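
One plausible way to make this selection, assuming the form can report a display rectangle for each word, is to compare the stroke's bounding rectangle against those word rectangles and take the largest overlap; the names here are illustrative only.

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[int, int]
Rect = Tuple[int, int, int, int]  # (left, top, right, bottom)

def overlap_area(a: Rect, b: Rect) -> int:
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def select_scribbled_word(stroke: List[Point],
                          word_rects: Dict[str, Rect]) -> Optional[str]:
    """Return the displayed word whose rectangle overlaps the stroke's bounding
    rectangle the most, or None if the stroke touches no word."""
    xs = [x for x, _ in stroke]
    ys = [y for _, y in stroke]
    stroke_rect = (min(xs), min(ys), max(xs), max(ys))
    best = max(word_rects, key=lambda w: overlap_area(stroke_rect, word_rects[w]))
    return best if overlap_area(stroke_rect, word_rects[best]) > 0 else None
```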


At block 908, the scratch-out selector determines whether or not the handwriting received is for deleting information, such as the selected text. The scratch-out selector may determine if the handwriting is for deleting text without interaction with the user other than the handwriting received; the user does not need to perform another action besides the handwriting, such as selecting to delete the text before or after handwriting over it.


The scratch-out selector can analyze the handwriting to determine if it is of a type that a user might make in deleting or obscuring something on a paper page. A person writing on paper might, for instance, make a back-and-forth motion with an eraser to delete a word or mark from the page. Similarly, a person might attempt to obscure a word or mark on a page by scribbling over it or scratching it out.


In the illustrated embodiment, the scratch-out selector treats handwriting that represents a continuous back-and-forth motion as handwriting for deleting text. In the case of handwriting generated over a tablet screen with a stylus, this continuity represents a single back-and-forth stroke made without the stylus being lifted or resting for a significant period.
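
A sketch of detecting such a continuous back-and-forth stroke by counting horizontal direction reversals within a single stroke; the reversal threshold is an assumption, not a value stated in this disclosure.

```python
from typing import List, Tuple

Point = Tuple[int, int]

def is_scratch_out(stroke: List[Point], min_reversals: int = 3) -> bool:
    """Treat a single stroke as a delete gesture if its horizontal direction
    reverses at least min_reversals times (a back-and-forth scrubbing motion)."""
    reversals = 0
    direction = 0  # -1 moving left, +1 moving right, 0 not yet established
    for (x0, _), (x1, _) in zip(stroke, stroke[1:]):
        step = x1 - x0
        if step == 0:
            continue
        new_dir = 1 if step > 0 else -1
        if direction and new_dir != direction:
            reversals += 1
        direction = new_dir
    return reversals >= min_reversals
```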


In another embodiment, the scratch-out selector bases its determination on whether the computer-displayed representation of the handwriting obscures a significant portion of the selected text, such as about twenty percent or more. This handwriting may comprise multiple handwriting strokes, such as when a user lifts a stylus and then continues handwriting to further obscure the text.
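
A sketch of this coverage test, assuming the displayed handwriting can be rasterized into the set of pixels it covers and the selected text occupies a known rectangle; the twenty-percent figure follows the embodiment above, and the names are illustrative.

```python
from typing import Iterable, Set, Tuple

Pixel = Tuple[int, int]
Rect = Tuple[int, int, int, int]  # (left, top, right, bottom)

def obscures_text(ink_pixels: Iterable[Pixel], text_rect: Rect,
                  threshold: float = 0.20) -> bool:
    """True if the displayed handwriting covers at least `threshold` of the
    selected text's area; ink_pixels is every pixel the strokes were drawn on."""
    left, top, right, bottom = text_rect
    text_area = max((right - left) * (bottom - top), 1)
    covered: Set[Pixel] = {
        (x, y) for x, y in ink_pixels
        if left <= x < right and top <= y < bottom
    }
    return len(covered) / text_area >= threshold
```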


In still another embodiment, the scratch-out selector determines that the handwriting is for deleting text if it comprises two or more roughly parallel lines residing substantially over the selected text. These roughly parallel lines may be made with two handwriting strokes, for instance, such as by the user writing one line and then another over text.


In an embodiment mentioned previously as part of the process 200, the scratch-out selector determines that handwriting received is intended to delete or de-select information other than text. A check box or radio button, for instance, that has information indicating that it is selected (such as an X in a check box or a filled-in button on a radio button) may be de-selected based on this determination.


At block 910, the architecture, responsive to determining that the handwriting is for deleting text, deletes the selected text. Continuing the illustrated embodiment, the word 1008 is then deleted from the data-entry field (not shown).


Conclusion

The above-described tools enable a user's interaction with a computing device through handwriting to be more intuitive and/or effective. Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed invention.

Claims
  • 1. A computer-implemented method comprising: receiving a handwriting stroke made to a screen that is displaying a control having a selectable area; and selecting the control based on a position of the handwriting stroke relative to the selectable area of the control.
  • 2. The computer-implemented method of claim 1, wherein at least a portion of the handwriting stroke is received outside of the selectable area of the control.
  • 3. The computer-implemented method of claim 1, wherein the handwriting stroke begins outside of the selectable area of the control.
  • 4. The computer-implemented method of claim 1, wherein the handwriting stroke ends outside of the selectable area of the control.
  • 5. The computer-implemented method of claim 1, wherein the selecting the control further comprises selecting the control responsive to determining that the handwriting stroke intersects the selectable area of the control.
  • 6. The computer-implemented method of claim 1, wherein selecting the control further comprises: generating a bounded writing area around at least a portion of the handwriting stroke made to the screen; comparing the bounded writing area to control geometries associated with the control; and selecting the control if the bounded writing area overlaps the control geometries associated with the control.
  • 7. The computer-implemented method of claim 6, wherein the control geometries associated with the control comprise the selectable area of the control.
  • 8. The computer-implemented method of claim 6, further comprising, if the bounded writing area does not overlap the control geometries associated with the control: selecting the control if a distance of the bounded writing area to the control geometries associated with the control is less than an additional distance of the bounded writing area to additional control geometries associated with an additional control; or selecting the additional control if the additional distance of the bounded writing area to the additional control geometries associated with the additional control is less than the distance of the bounded writing area to the control geometries associated with the control.
  • 9. The computer-implemented method of claim 1, wherein selecting the control further comprises: generating a bounded writing area around at least a portion of the handwriting stroke made to the screen; comparing the bounded writing area to control geometries associated with the control and to additional control geometries associated with an additional control; and selecting the control if an overlap of the bounded writing area to the control geometries associated with the control is larger than an additional overlap of the bounded writing area to the additional control geometries associated with the additional control.
  • 10. The computer-implemented method of claim 9, further comprising selecting the additional control if the additional overlap of the bounded writing area to the additional control geometries associated with the additional control is larger than the overlap of the bounded writing area to the control geometries associated with the control.
  • 11. The computer-implemented method of claim 10, wherein selecting the control or selecting the additional control is based at least in part on a beginning position of the handwriting stroke made to the screen.
  • 12. The computer-implemented method of claim 1, wherein the selecting the control further comprises selecting the control without user interaction independent of the handwriting stroke received and while in a mode permitting the handwriting stroke to be interpreted as text.
  • 13. The computer-implemented method of claim 1, wherein the control comprises a moving-input control, the method further comprising interpreting the handwriting stroke as input to the moving-input control.
  • 14. The computer-implemented method of claim 13, further comprising, responsive to interpreting the handwriting stroke as input to the moving-input control, causing movement of the screen based on the input.
  • 15. The computer-implemented method of claim 1, wherein the control comprises a check box or a radio button.
  • 16. The computer-implemented method of claim 1, wherein the handwriting stroke is received via a stylus.
  • 17. A computing device comprising: a screen; a processor; and computer-readable storage media comprising instructions stored thereon that, responsive to execution by the processor, perform a method comprising: causing display of one or more controls on the screen; receiving a handwriting stroke made to the screen; selecting, without user interaction independent of the handwriting stroke received and while in a mode permitting the handwriting stroke to be interpreted as text, the control.
  • 18. The computing device as recited in claim 17, wherein the instructions, responsive to execution by the processor, perform a method further comprising interpreting the handwriting stroke as input to the selected control.
  • 19. A computer-implemented method comprising: causing display of one or more controls on a screen; receiving a handwriting stroke made to the screen; selecting, without user interaction independent of the handwriting stroke received and while in a mode permitting the handwriting stroke to be interpreted as text, the control; and interpreting the handwriting stroke as input to the selected control.
  • 20. The computer-implemented method as recited in claim 19, wherein the selecting the control further comprises: generating a bounded rectangle around at least a portion of the handwriting stroke made to the screen; comparing an area within the bounded rectangle to an area associated with the control to determine if the area within the bounded rectangle overlaps the area associated with the control; and selecting the control when the area within the bounded rectangle overlaps the area associated with the control.
RELATED APPLICATIONS

This application is a continuation of and claims priority to U.S. patent application Ser. No. 10/976,451, entitled “Systems and Methods for Interacting with a Computer through Handwriting to a Screen”, filed on Oct. 29, 2004, the disclosure of which is incorporated herein by reference in its entirety.

Continuations (1)
Number Date Country
Parent 10976451 Oct 2004 US
Child 13915364 US