Foldable computing devices include devices with two screens joined with a hinge or devices with bendable screens. These types of devices can provide benefits over traditional computing devices such as laptop computers. Commonly, however, these devices implement user interface (“UI”) paradigms originally designed for computing devices with traditional form factors. As a result, UIs provided by foldable computing devices can be cumbersome and error-prone, which can lead to incorrect or inadvertent user input and unnecessary consumption of computing resources.
It is with respect to these and other technical challenges that the disclosure made herein is presented.
Technologies are disclosed herein that enable a foldable computing device having multiple screen regions to perform an inter-region UI operation in response to an intra-region UI gesture. For example, a UI gesture that begins and ends within a first region may be used to move a window from the first region to the second region. The disclosed technologies address the technical problems described above by providing succinct, accurate UI gestures that cause foldable computing devices to perform inter-region UI operations.
The disclosed technologies further address the technical problems described above by combining different types of UI gestures to increase gesture accuracy and expressiveness of inter-region UI operations. Through implementations of the disclosed technologies, UIs can be provided by foldable devices that are easier to utilize and that result in fewer user input errors. Additionally, the utilization of computing resources by foldable computing devices can be reduced by avoiding the processing associated with inefficient navigation of a UI and inadvertent or incorrect user input. Other technical benefits not specifically mentioned herein can also be realized through implementations of the disclosed subject matter.
In one embodiment, a foldable computing device is configured to receive a UI gesture that begins and ends in a first display region. In response, the foldable computing device can perform a UI operation that manifests at least partially in a second display region. Details regarding such a foldable computing device are provided below with regard to
A foldable computing device is also disclosed herein that is configured to receive a combination of different types of UI gestures, and in response, perform an operation that utilizes multiple display regions. For example, a drag and drop gesture may be combined with a flick gesture such that some aspect of the drag and drop gesture modifies the operation caused by the flick gesture. Details regarding such a foldable computing device are provided below with regard to
It should also be appreciated that the above-described subject matter can be implemented as a computer-controlled apparatus, a computer-implemented method, a computing device, or as an article of manufacture such as a computer readable medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
This Summary is provided to introduce a brief description of some aspects of the disclosed technologies in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The following detailed description is directed to technologies for predictive gesture optimizations for moving objects across display boundaries and compound symbolic and manipulation gesture language for multi-screen windowing. As discussed briefly above, implementations of the disclosed technologies can enable UIs to be provided that are easier to utilize and that result in fewer user input errors. Consequently, the utilization of computing resources can be reduced by avoiding the processing associated with inefficient navigation of a UI and inadvertent or incorrect user input, as compared to previous solutions. Other technical benefits not specifically mentioned herein can also be realized through implementations of the disclosed subject matter.
Those skilled in the art will recognize that the subject matter disclosed herein can be implemented with various types of computing systems and modules, at least some of which are described in detail below. Those skilled in the art will also appreciate that the subject matter described herein can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, computing or processing systems embedded in devices (such as wearable computing devices, automobiles, home automation etc.), and the like.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific configurations or examples. Referring now to the drawings, in which like numerals represent like elements throughout the several FIGS., aspects of various technologies for providing inter-region UI operations in response to intra-region UI gestures and for combining different types of UI gestures to perform inter-region operations will be described.
Prior to discussing particular aspects of the disclosed technologies, a brief introduction to foldable computing devices (which might be referred to herein as “foldable devices”) will be provided. As discussed briefly above, foldable devices include multiple screen form factor devices (which might be referred to herein as “hinged devices”) that have two physical display screens joined together with a hinge or other equivalent mechanism. By manipulating the orientation of the display screens with respect to one another by way of the hinge, such devices can be configured in a multitude of postures, some of which are described in greater detail below with regard to
Foldable devices also include computing devices having a bendable display screen (which might be referred to herein as “bendable devices”), such as computing devices utilizing flexible screen technology. When such a device is not bent, it presents a single display surface. When bent, these devices present a single display surface with a crease in the middle. Bendable devices can also be configured in a multitude of postures by varying the amount of bend, some of which are also described in greater detail below with reference to
The display screens of foldable computing devices can be touch sensitive, thereby enabling such devices to recognize touch or stylus input, presses, swipes, and other types of gestures, some of which are described below. These devices can also, of course, be used while being held in various orientations, some of which are described below with regard to
Referring now to
As shown in
In
As also shown in
Referring now to
In the example posture shown in
When the bendable device 202 is bent, a crease or “fold” 204 is formed in the display 104C. The term “fold” as used herein might refer to the area where a foldable device is folded (i.e. the area of a hinge 108 on a hinged device 102 or the area where the display of a bendable device 202 bends).
As in the case of a hinged device 102, the bendable device 202 can also provide one or more display regions. However, in the case of a bendable device 202, the number of available display regions can vary based upon the posture of the device. For instance, a single display region 106C is provided when the bendable device 202 is in a flat state as shown in
Predictive Gesture Optimizations for Moving Objects Across Display Boundaries
Referring now to
Prior to discussing
For example, a drag and drop gesture 330 that spans two regions may have an ambiguous meaning after the input device (e.g. finger 116) leaves region 106B but before it enters region 106A. This is because hinge 108 may not be touch-sensitive, and so foldable device 301 may not receive data indicating whether gesture 330 will continue into region 106A, e.g. to move a window from region 106B into region 106A, or whether gesture 330 has ended at the edge of region 106B, e.g. to dock a window to the edge of region 106B. To deal with this ambiguity, foldable device 301 may pause to determine whether a second gesture is received in region 106A and, if so, whether it is a continuation of the first gesture. This pause reduces the responsiveness of the user interface when there is no second gesture, causing a poor user experience.
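The pause-and-wait behavior described above can be sketched as a simple timeout check. This is a minimal illustration, not the disclosed implementation; the timeout value and return labels are assumptions for the example.

```python
CONTINUATION_TIMEOUT = 0.25  # seconds (assumed value, for illustration only)

def interpret_edge_gesture(end_time, next_gesture_start):
    """Decide whether a gesture ending at a region edge continued across.

    end_time: time at which the first gesture ended at the region edge.
    next_gesture_start: start time of a gesture observed in the other
    region, or None if no second gesture arrived within the timeout.
    """
    if (next_gesture_start is not None
            and next_gesture_start - end_time <= CONTINUATION_TIMEOUT):
        # Treat the second gesture as a continuation of the first,
        # e.g. moving a window across the hinge
        return "continue_into_other_region"
    # No timely second gesture: treat the first gesture as complete,
    # e.g. docking the window to the region edge
    return "dock_at_edge"
```

The cost of this approach is visible in the sketch itself: the device cannot return a result until the timeout elapses, which is the responsiveness penalty the disclosure aims to avoid.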
Another challenge to inter-region gestures as depicted in
In some configurations, a symbolic gesture may begin on a UI item, e.g. based on a touch, press, hover-over, etc. of the UI item. UI items may include windows, dialog boxes, icons, menus, application content, or the like. In other embodiments, a context-free symbolic gesture may begin on a desktop background, independent of a UI item. In some configurations, in response to identifying a symbolic gesture, foldable device 301 may perform an operation with the UI item and/or a target associated with the gesture. For example, an operation associated with a symbolic gesture may move a window (the UI item) to an open folder (the target associated with the gesture). In some embodiments, foldable device 301 triggers a command associated with a symbolic gesture as soon as foldable device 301 recognizes the gesture beyond a defined confidence threshold. As such, foldable device 301 may not display command-specific real-time feedback during the gesture, as the meaning of the gesture is not known until the gesture is recognized, at which point the command is performed.
In contrast with a symbolic gesture, during a manipulation gesture, foldable device 301 provides real-time feedback as the gesture progresses. For example, during a manipulation gesture, foldable device 301 may display an underlying UI item moving across region 106B in a one-to-one manner with pointer 302. For instance, a drag-and-drop operation may be performed with a manipulation gesture, during which the UI item being dragged moves across the display region in sync with the manipulation gesture. Another example of a manipulation gesture is a scrolling gesture—e.g. moving document content up or down in sync with the manipulation gesture.
Symbolic gestures may have defined shapes, including flick gestures, tap gestures, timed gestures, circle gestures, angle gestures, etc. As discussed above, a flick gesture is a gesture that moves generally in one direction, allowing for some curvature as a margin of error. Tap gestures may include a press, touch, hover-over, or other activation of a UI item that lasts less than a defined period of time. Timed gestures may include a press, touch, hover-over, or other activation gesture that is held for at least a defined period of time before being released. In some configurations, multiple timed gestures may be defined for a UI item based on how long the activation gesture is held. Circle gestures may define a radius, a direction, and a degree of completion (e.g. 270 degrees). Angle gestures may define a distance before the angle, the size of the angle, and a direction of the angle.
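The multiple-timed-gestures idea above can be illustrated with a table of hold durations checked longest-first. The durations and command names here are hypothetical, chosen only to show the dispatch pattern.

```python
# Hold thresholds sorted longest-first so the strictest match wins.
# Durations and command names are illustrative assumptions.
TIMED_GESTURES = [
    (2.0, "show_context_menu"),  # activation held >= 2.0 s
    (0.5, "select_item"),        # activation held >= 0.5 s
]

def resolve_timed_gesture(hold_duration):
    """Return the command for the longest hold requirement satisfied."""
    for min_hold, command in TIMED_GESTURES:
        if hold_duration >= min_hold:
            return command
    return "tap"  # released before any timed-gesture minimum
```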
When a gesture is first detected, foldable device 301 may distinguish a symbolic gesture from a manipulation gesture based on initial gesture speed and/or acceleration. For example, foldable device 301 may identify a gesture as symbolic if, at the start of the gesture, pointer 302 moves faster than a defined speed or accelerates faster than a defined rate for a defined amount of time.
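The speed/acceleration test described above might look like the following sketch, which estimates velocity and acceleration from the first few pointer samples. The threshold constants are assumptions, not values from the disclosure.

```python
import math

SPEED_THRESHOLD = 800.0   # pixels/second (assumed)
ACCEL_THRESHOLD = 2000.0  # pixels/second^2 (assumed)

def classify_gesture_start(samples):
    """Classify the opening of a gesture from (time, x, y) pointer samples.

    Returns "symbolic" if the pointer is moving or accelerating faster
    than the defined thresholds at the start, else "manipulation".
    """
    if len(samples) < 3:
        return "manipulation"  # too little data; assume direct manipulation
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = samples[:3]
    v1 = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    v2 = math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
    accel = (v2 - v1) / (t2 - t1)
    if v2 > SPEED_THRESHOLD or accel > ACCEL_THRESHOLD:
        return "symbolic"
    return "manipulation"
```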
Once foldable device 301 determines that the gesture is a symbolic gesture, foldable device 301 may create a list of potential symbolic gestures the user could be performing. For example, if the gesture started at a file icon with a particular extension, the foldable device 301 may create a list of potential symbolic gestures supported by file icons with that particular extension, e.g. move the file, open the file with a first application, open the file with a second application, open the file on a different region, etc.
The list of potential symbolic gestures may be further limited by the direction of the gesture, which region 106 the gesture began in, the orientation and/or posture of foldable device 301, which application or application type contained the UI item, the location of the initial touch relative to the edges of the region, the location of the initial touch relative to hinge 108, the existence (or lack thereof) of other UI items on or near the path of the gesture or in the direction of the gesture, or a combination thereof.
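As a rough sketch of the candidate-narrowing step, the device might start from the gestures registered for the UI item's type and filter them by context. The extensions, gesture names, and the toward-hinge rule below are hypothetical.

```python
# Hypothetical registry of symbolic gestures supported per file extension.
GESTURES_BY_EXTENSION = {
    ".txt": [
        {"name": "move_file", "direction": "up"},
        {"name": "open_with_editor", "direction": "right"},
        {"name": "open_on_other_region", "direction": "up"},
    ],
}

def candidate_gestures(extension, direction, toward_hinge):
    """Return the symbolic gestures consistent with the gesture context."""
    candidates = GESTURES_BY_EXTENSION.get(extension, [])
    # Keep only gestures defined for the observed direction
    candidates = [g for g in candidates if g["direction"] == direction]
    # Assumed rule: cross-region commands only apply when the gesture
    # is heading toward the hinge
    if not toward_hinge:
        candidates = [g for g in candidates
                      if g["name"] != "open_on_other_region"]
    return [g["name"] for g in candidates]
```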
Foldable device 301 may then determine which of the possible symbolic gestures, if any, is performed. Foldable device 301 may determine which symbolic gesture is performed using thresholds—lines that, when crossed by pointer 302, determine that a corresponding symbolic gesture has been performed. Foldable device 301 may create a threshold for each gesture in the set of possible gestures.
Thresholds may have different shapes, e.g. straight, curved, or square, etc., and may be placed in different directions and at different distances from the gesture starting location. The specific shapes, directions, and distances may be determined in part based on a user configuration or a default value associated with each gesture. Shapes, directions, and distances of thresholds may also be dynamically configured based on the presence of other thresholds—e.g. to disambiguate or prioritize one threshold over another.
Shapes, directions, and distances of thresholds may also be dynamically configured based on UI items along the gesture or in the direction defined by the gesture. For example, if hinge 108 is 150 pixels away from a file icon, foldable device 301 may place a threshold triggering a flick gesture 80 pixels from the file icon in the direction of hinge 108.
Once pointer 302 is lifted, or the gesture is otherwise determined to be completed, e.g. due to a lack of motion, foldable device 301 may determine which thresholds, if any, the completed gesture crossed. If the completed gesture did not cross any thresholds, then foldable device 301 may determine that no symbolic gesture defined by crossing a threshold was performed. If one threshold was crossed, the corresponding symbolic gesture is identified. If more than one threshold was crossed, foldable device 301 may select a symbolic gesture corresponding to the last threshold to be crossed. In other embodiments, each symbolic gesture may have a defined priority, and foldable device 301 may select the symbolic gesture with the highest priority. In other embodiments, foldable device 301 may select the symbolic gesture associated with the threshold that is furthest from the starting location of the gesture.
Threshold 410 may be dynamically updated based on the current location, speed, and acceleration of pointer 302. If the speed or acceleration of pointer 302 is particularly fast, e.g. compared to a baseline for this user, then threshold 410 may be moved closer to location 408 as the intent to perform a symbolic gesture is clearer. However, if the speed or acceleration of pointer 302 is borderline and/or slowing down, threshold 410 may be moved away from location 408 so as to avoid a false positive symbolic gesture when the user actually intended to perform a manipulation gesture.
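A minimal sketch of this dynamic adjustment compares the pointer's speed to the user's baseline and nudges the threshold accordingly. The scaling factors and ratio cutoffs are illustrative assumptions.

```python
def adjust_threshold(base_distance, speed, baseline_speed):
    """Move a symbolic-gesture threshold based on pointer speed.

    base_distance: the threshold's default distance from the start location.
    speed: current pointer speed; baseline_speed: this user's typical speed.
    """
    ratio = speed / baseline_speed
    if ratio > 1.5:
        # Clearly faster than baseline: intent to perform a symbolic
        # gesture is clearer, so move the threshold closer
        return base_distance * 0.75
    if ratio < 0.75:
        # Borderline and/or slowing down: move the threshold away to
        # avoid a false-positive symbolic gesture
        return base_distance * 1.25
    return base_distance
```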
In contrast to a manipulation gesture,
In some configurations, whether or not foldable device 301 provides feedback during the symbolic gesture, foldable device 301 may provide feedback as soon as the threshold 410 is crossed and the command associated with the gesture is determined, but before the gesture is completed. For example, pointer 302 may cross threshold 410 on the way to completing symbolic gesture 414 at location 412. In response to crossing threshold 410, foldable device 301 may determine that a command 416 to move window 406 is intended. Foldable device 301 may give an indication that threshold 410 was crossed, e.g. with a haptic, audio, or visual indication that threshold 410 was crossed. For example, if threshold 410 is visible, foldable device 301 may flash, highlight, or otherwise emphasize threshold 410 as pointer 302 crosses it. Whether or not threshold 410 was visible, foldable device 301 may provide visual, audio, or haptic feedback when threshold 410 is crossed, e.g. by bouncing, shaking, changing size, or otherwise altering the appearance of window 406. Additionally, or alternatively, foldable device 301 may indicate with visual, audio, or haptic feedback which command will be performed on window 406. In the example illustrated in
As discussed above, flick gesture 414 is considered to have been completed if threshold 410 was crossed within a defined amount of time, and/or if threshold 410 was crossed without traveling more than a defined distance. Similarly, in order to be considered complete, foldable device 301 may require that flick gesture 414 is completed within a defined amount of time after crossing threshold 410 and without deviating more than a defined number of degrees from the direction of flick gesture 414 after crossing threshold 410. By requiring that flick gesture 414 be released quickly and without changing direction significantly, foldable device 301 has greater confidence that flick gesture 414 is the intended gesture, i.e. that flick gesture 414 is not actually the beginning of a manipulation gesture or a different symbolic gesture with a more distant threshold.
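The post-crossing validation described above reduces to two checks: a release-time limit and a direction-deviation limit. The limit values below are assumed for illustration.

```python
MAX_TIME_AFTER_CROSS = 0.15  # seconds after crossing the threshold (assumed)
MAX_DEVIATION_DEG = 30.0     # allowed change of direction in degrees (assumed)

def flick_completed(cross_time, release_time, flick_angle, release_angle):
    """True if the gesture was released quickly and stayed on course.

    Angles are in degrees; deviation is measured as the smallest angular
    difference between the flick direction and the direction at release.
    """
    if release_time - cross_time > MAX_TIME_AFTER_CROSS:
        return False  # held too long after crossing: likely not a flick
    deviation = abs(release_angle - flick_angle) % 360.0
    deviation = min(deviation, 360.0 - deviation)
    return deviation <= MAX_DEVIATION_DEG
```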
While pointer 302 is dragging file icon 504 along path 506, foldable device 301 may iteratively scan regions 106B and 106A for destinations and/or potential operations consistent with path 506 and the current location of file icon 504 (region 106A itself may be such a destination). Similar to identifying a symbolic gesture as discussed above in conjunction with
As illustrated, path 506 has turned up towards hinge 108, and as such region 106A has been identified as a potential destination. Accordingly, foldable device 301 has created zone 510A as a location that file icon 504 may be dragged to in order to move file icon 504 to region 106A. The location, shape, and orientation of zone 510 may be determined based on a number of factors, including the speed and acceleration of the gesture, the direction the gesture is proceeding in—particularly when the direction is towards a potential target such as a hinge 108 or file folder, and a measure of linearity of the gesture—i.e. is there a clear dominant direction, as opposed to moving in a wavy or zig-zag fashion. Zone 510 may take on any shape, such as an oval, rectangle, or an irregular shape. Zone 510 may take on an irregular shape when other potential destinations would make an operation target ambiguous. For example, an otherwise rectangular zone 510 may have a circle cut out of one corner to prevent overlap between zone 510 and a file folder.
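The "measure of linearity" factor mentioned above can be computed as the ratio of straight-line displacement to total path length: a direct gesture scores near 1.0, while a wavy or zig-zag gesture scores lower. This is one common formulation, offered here as a sketch rather than the disclosed method.

```python
import math

def linearity(path):
    """Return a 0..1 score of how directly a gesture has moved.

    path: list of (x, y) pointer samples in order.
    """
    if len(path) < 2:
        return 0.0
    # Total arc length of the gesture
    total = sum(math.hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(path, path[1:]))
    if total == 0:
        return 0.0
    # Straight-line distance from start to current location
    (x0, y0), (xn, yn) = path[0], path[-1]
    return math.hypot(xn - x0, yn - y0) / total
```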
Foldable device 301 may also be configured to perform a more complicated operation. For example,
However, the presence of file folder 612 as a possible destination of file icon 604 creates an ambiguity as to the intended destination of file icon 604. As such, foldable device 301 will create zone 614 beyond file folder 612 to remove any ambiguity. In another embodiment, zone 614 may be created closer to location 602 with a carve-out to avoid file folder 612. If manipulation gesture 606 continues to zone 614 where file icon 604 is dropped, foldable device 301 may perform one or more of the operations depicted in
Anticipated gesture path 704 may be determined based on a direction of file icon 604, i.e. a tangential direction vector at the time anticipated gesture path 704 is generated. Anticipated gesture path 704 may also be determined based on an overall direction of manipulation gesture 606.
Visual effects are not necessarily shaped like the underlying UI items. For example, a visual effect indicating a predicted destination for file icon 604 may have the shape of a rectangle, not an outline of file icon 604.
Furthermore, visual effects may change shape, location, and size to reflect greater or lesser confidence in a prediction. For example, the size of visual effect 802 may increase as gesture 606 approaches open folder 702, indicating an increasing confidence in the prediction.
Visual effects may also be used to convey that foldable device 301 has interpreted a user intent, but that the particular target is invalid. For example, if open folder 702 were read-only, visual effect 802 may appear in open folder 702, but ‘bounce’, ‘shake’, or otherwise indicate through animation or other appearance that moving file icon 604 to open folder 702 is invalid.
In some configurations, foldable device 301 produces a visual pulse if, while performing a gesture, pointer 302 hovers over a particular manipulation zone.
In some configurations, prediction 902 is a visual effect indicating what would happen to file icon 604 if manipulation gesture 606 ended in zone 714. In some configurations, prediction 902 outlines what an application used to open the file associated with file icon 604 would look like. For example, prediction 902 may include a location and a size of the application. In other embodiments, foldable device 301 may open the file associated with file icon 604 and display actual file contents in prediction 902.
In some configurations, foldable device 301 may indicate how the amount of confidence in the prediction 902 changes throughout the manipulation gesture. For example, prediction 902 may be progressively faded in, e.g. an outline made darker or a predicted rendering made more opaque, as foldable device 301 becomes more confident in the intention of the user. For example, once file icon 604 has been dragged from location 602 to location 1002, prediction 902 may be displayed as a light outline indicating a slight confidence that the user intends to drag icon 604 to zone 714. As file icon 604 is dragged further towards location 1004, the outline may be made darker, as confidence in zone 714 as a destination increases. If the file icon 604 were to be dragged away from zone 714, confidence in zone 714 as a destination decreases, and prediction 902 may be faded out accordingly.
Location 1002 is an example of a location where foldable device 301 may determine with slight confidence that file icon 604 will be dragged to zone 714, and as a result, display prediction 902 with a light outline. This determination may be based on the velocity and acceleration of pointer 302, the distance of location 1002 from other potential destinations like file folder 702, the distance that pointer 302 has traveled from location 602 to location 1002, the change in direction of the gesture over time, whether the gesture is moving towards a different display region, and the like.
Under different circumstances, foldable device 301 may have different levels of confidence at location 1002 that the gesture will end in zone 714. For example, if the average velocity of the gesture from location 602 to location 1002 was lower, then confidence in zone 714 as the destination may be lower, as the user may be indecisive, or the gesture may still turn left towards file folder 702. In this scenario, prediction 902 may not be displayed at all. Conversely, if the average velocity of the gesture was higher, foldable device 301 may have more confidence that the gesture will bypass file folder 702, and prediction 902 may be displayed with a darker outline or more opaque predicted rendering.
The distance the gesture has traveled so far is another factor. If the gesture had not traveled as great a distance—i.e. if location 602 was closer to location 1002—then foldable device 301 may have less confidence in the overall direction of the gesture. If there is less confidence in the direction of the gesture, then foldable device 301 may have less confidence that zone 714 is the destination.
Foldable device 301 may have greater confidence that the gesture will end in zone 714 when it has less confidence that the gesture will end in file folder 702. Foldable device 301 may calculate that the gesture will not end in file folder 702 based on the factors listed above for predicting that the gesture will end in zone 714, e.g. the distance from location 602 to location 1002, the consistency of the direction of the gesture, whether and to what extent the gesture is moving towards file folder 702, and whether the gesture has slowed more than a defined amount while approaching file folder 702.
In some configurations, foldable device 301 may display multiple predictions of where the gesture will end. For example, foldable device 301 may detect two possible destinations for file icon 604 with enough confidence to display a prediction—file folder 702 and zone 714. In this case, foldable device 301 may display prediction 802 within file folder 702 and prediction 902 within region 106A. As the gesture continues, confidence in one or more of the predictions may increase, decrease, or stay the same, causing the predictions to be emphasized, de-emphasized, or remain unchanged, accordingly.
In some configurations, confidence in the prediction 902 is determined based on how directly manipulation gesture 606 arrived at zone 714. Directly arriving at zone 714 may engender more confidence in the prediction than if manipulation gesture 606 wended around in multiple directions before encountering zone 714. Another basis for a confidence score as to prediction 902 is how far into zone 714 pointer 302 has traveled—just grazing the surface of zone 714 may provide less confidence than entering the center of zone 714. Another basis for a confidence score is whether the speed and acceleration of pointer 302 suggests that pointer 302 may slow down and remain in zone 714 (increase confidence) or whether the speed and acceleration of pointer 302 may speed up and/or pass through zone 714 (decrease confidence).
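One way to combine the confidence factors just described—directness of arrival, depth into the zone, and whether the pointer is slowing down—is a simple weighted score. The weights below are illustrative assumptions, not values from the disclosure.

```python
def zone_confidence(directness, depth_fraction, slowing_down):
    """Combine prediction-confidence factors into a 0..1 score.

    directness: 0..1 linearity of the path that reached the zone.
    depth_fraction: 0..1, how far into the zone the pointer has traveled
    (0 = just grazing the surface, 1 = at the center).
    slowing_down: True if speed/acceleration suggest the pointer will
    stop and remain in the zone.
    """
    # Assumed weights: directness matters most, then depth, then slowing
    score = 0.5 * directness + 0.3 * depth_fraction
    score += 0.2 if slowing_down else 0.0
    return min(1.0, score)
```

A score like this could drive the fade-in behavior described earlier, e.g. by mapping it to the opacity of prediction 902.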
The routine 1200 then proceeds to operation 1206, where the foldable device 301 creates and dynamically updates zone 510 in display region 106A as gesture 518 progresses, in the manner described above. The routine 1200 then proceeds to operation 1208, where the foldable device 301 determines if gesture 518 is complete, in the manner described above. If foldable device 301 determines that gesture 518 is not complete, the routine 1200 proceeds to operation 1206. If foldable device 301 determines that gesture 518 is complete, the routine 1200 proceeds to operation 1210 where foldable device 301 determines a location 512 where gesture 518 was completed, in the manner described above.
From operation 1210, the routine 1200 proceeds to operation 1212 where foldable device 301 determines if gesture 518 was completed in zone 510, in the manner described above. If foldable device 301 determines that gesture 518 was not completed in zone 510, then the routine 1200 proceeds to operation 1216, where foldable device 301 performs an operation on UI item 504 in the first display region 106A, in the manner described above. However, if foldable device 301 determines that gesture 518 was completed in zone 510 of region 106A, the routine 1200 proceeds to operation 1214, where foldable device 301 performs an operation associated with gesture 518 on the UI item 504 in display region 106B, in the manner described above.
The routine 1200 then proceeds to operation 1218, where it ends.
Compound Symbolic and Manipulation Gesture Language for Multi-Screen Windowing
In another embodiment, a drag and drop manipulation gesture may be punctuated with a flick up, maximizing the window in the region where the flick occurred, not necessarily where the window was located before the manipulation gesture. In this way, a context created by the manipulation gesture is applied as a modifier when interpreting the symbolic gesture. Other examples of symbolic gestures include flicking up to open a new instance of an application. Opening a new instance of an application may also be applied to an icon on a desktop or to an icon in a taskbar that represents an already running instance of an application.
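The context-as-modifier idea above can be sketched as a small dispatch that takes both the symbolic gesture and the region in which the transition occurred. The gesture and region labels are illustrative.

```python
def apply_compound_gesture(symbolic, transition_region, home_region):
    """Interpret a symbolic gesture modified by its manipulation context.

    symbolic: the symbolic component that punctuated the drag (assumed label).
    transition_region: region where the manipulation component transitioned
    to the symbolic component.
    home_region: region where the window was located before the gesture.
    Returns an (operation, target_region) pair.
    """
    if symbolic == "flick_up":
        # Maximize in the region where the flick occurred, not where
        # the window started
        return ("maximize", transition_region)
    # Unrecognized symbolic component: leave the window where it is
    return ("no_op", home_region)
```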
From operation 1504, the routine 1500 proceeds to operation 1506, where the foldable device 301 identifies a transition from the manipulation gesture component 1406 to a symbolic gesture component 1410, in the manner described above. The routine 1500 then proceeds to operation 1508, where the foldable device 301 determines a location on region 106A or 106B at which the manipulation gesture 1406 transitioned to the symbolic gesture component 1410, in the manner described above.
From operation 1508, the routine 1500 proceeds to operation 1510, where the foldable device 301 performs an operation associated with symbolic gesture component 1410 on the UI item 1404. The operation selected may be modified by the manipulation gesture component 1406, e.g. the location of the transition from the manipulation gesture component 1406 to the symbolic gesture component 1410 may affect the command applied to UI item 1404 by the symbolic gesture component 1410. For instance, if manipulation gesture component 1406 began in region 106A and transitioned to symbolic gesture component 1410 in region 106B, the effect of the symbolic gesture component 1410 may be different than if the transition to symbolic gesture component 1410 occurred in region 106A. For instance, the region 106 in which the transition occurred may determine which region a window is maximized within. The routine 1500 then proceeds to operation 1512, where it ends.
The computer 1600 illustrated in
The mass storage device 1612 is connected to the CPU 1602 through a mass storage controller (not shown) connected to the bus 1610. The mass storage device 1612 and its associated computer readable media provide non-volatile storage for the computer 1600. Although the description of computer readable media contained herein refers to a mass storage device, such as a hard disk, CD-ROM drive, DVD-ROM drive, or USB storage key, it should be appreciated by those skilled in the art that computer readable media can be any available computer storage media or communication media that can be accessed by the computer 1600.
Communication media includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner so as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
By way of example, and not limitation, computer storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. For example, computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and which can be accessed by the computer 1600. For purposes of the claims, the phrase “computer storage medium,” and variations thereof, does not include waves or signals per se or communication media.
According to various configurations, the computer 1600 can operate in a networked environment using logical connections to remote computers through a network such as the network 1620. The computer 1600 can connect to the network 1620 through a network interface unit 1616 connected to the bus 1610. It should be appreciated that the network interface unit 1616 can also be utilized to connect to other types of networks and remote computer systems. The computer 1600 can also include an input/output controller 1618 for receiving and processing input from a number of other devices, including a keyboard, a mouse, touch input, a digital pen, or physical sensors such as cameras and biometric sensors.
The computer 1600 can also be configured with a suitable video output device that can provide output to one or more display screens, such as those described above. One or more of the displays can be a touch-sensitive display that is configured to detect the presence and location of a touch. Such a display can be a resistive touchscreen, a capacitive touchscreen, a surface acoustic wave touchscreen, an infrared touchscreen, an optical imaging touchscreen, a dispersive signal touchscreen, an acoustic pulse recognition touchscreen, or can utilize any other touchscreen technology. In some configurations, the touchscreen is incorporated on top of a display as a transparent layer to enable a user to use one or more touches to interact with objects or other information presented on the display.
A touch-sensitive display can be configured to detect discrete touches, single touch gestures, and/or multi-touch gestures. These are collectively referred to herein as “gestures” for convenience. Several gestures will now be described. It should be understood that these gestures are illustrative and are not intended to limit the scope of the appended claims.
In some configurations, the computer 1600 supports a tap gesture in which a user taps a display once. A double tap gesture in which a user taps a display twice can also be supported. The double tap gesture can be used for various reasons including, but not limited to, zooming in or zooming out in stages. In some configurations, the computer 1600 supports a tap and hold gesture in which a user taps and maintains contact for at least a pre-defined time. The tap and hold gesture can be used for various reasons including, but not limited to, opening a context-specific menu.
In some configurations, the computer 1600 supports a pan gesture in which a user places a finger on a display and maintains contact with the display while moving their finger. The pan gesture can be used for various reasons including, but not limited to, moving through screens, images, or menus at a controlled rate. Multiple-finger pan gestures are also contemplated.
In some configurations, the computer 1600 supports a flick gesture in which a user swipes a finger in the direction the user wants the screen to move. The flick gesture can be used for various reasons including, but not limited to, scrolling horizontally or vertically through menus or pages. In some configurations, the computer 1600 supports a pinch and stretch gesture in which a user makes a pinching motion with two fingers (e.g., thumb and forefinger) or moves the two fingers apart. The pinch and stretch gesture can be used for various reasons including, but not limited to, zooming gradually in or out of a website, map, or picture.
Although the gestures described above have been presented with reference to the use of one or more fingers for performing the gestures, other input instruments such as digital pens can be used to interact with the computer 1600. As such, the above gestures should be understood as being illustrative and should not be construed as being limiting in any way.
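The gestures described above can be distinguished by simple properties of the touch sequence, such as its duration and the distance traveled. The following is an illustrative sketch only, not part of the disclosure; all threshold values and names are hypothetical assumptions.

```python
# Hypothetical thresholds for distinguishing the gestures described
# above; real values would be tuned per device and are not specified
# in the disclosure.
TAP_MAX_MS = 200        # max duration for a tap
HOLD_MIN_MS = 500       # min duration for a tap-and-hold
MOVE_MIN_PX = 10        # travel below this counts as stationary
FLICK_MIN_SPEED = 1.0   # px/ms; faster pans are treated as flicks

def classify_gesture(duration_ms, distance_px):
    """Classify a single-contact touch by its duration and travel."""
    if distance_px < MOVE_MIN_PX:
        # Stationary contact: tap or tap-and-hold by duration.
        if duration_ms <= TAP_MAX_MS:
            return "tap"
        if duration_ms >= HOLD_MIN_MS:
            return "tap-and-hold"
        return "unknown"
    # Moving contact: pan or flick by average speed.
    speed = distance_px / duration_ms
    return "flick" if speed >= FLICK_MIN_SPEED else "pan"
```

For instance, a 100 ms touch that travels 2 px classifies as a tap, while a 100 ms swipe covering 300 px classifies as a flick.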
It should be appreciated that the software components described herein, when loaded into the CPU 1602 and executed, can transform the CPU 1602 and the overall computer 1600 from a general-purpose computing device into a special-purpose computing device customized to facilitate the functionality presented herein. The CPU 1602 can be constructed from any number of transistors or other discrete circuit elements, which can individually or collectively assume any number of states. More specifically, the CPU 1602 can operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions can transform the CPU 1602 by specifying how the CPU 1602 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 1602.
Encoding the software modules presented herein can also transform the physical structure of the computer readable media presented herein. The specific transformation of physical structure depends on various factors, in different implementations of this description. Examples of such factors include, but are not limited to, the technology used to implement the computer readable media, whether the computer readable media is characterized as primary or secondary storage, and the like. For example, if the computer readable media is implemented as semiconductor-based memory, the software disclosed herein can be encoded on the computer readable media by transforming the physical state of the semiconductor memory. For instance, the software can transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software can also transform the physical state of such components in order to store data thereupon.
As another example, the computer readable media disclosed herein can be implemented using magnetic or optical technology. In such implementations, the software presented herein can transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations can include altering the magnetic characteristics of particular locations within given magnetic media. These transformations can also include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
In light of the above, it should be appreciated that many types of physical transformations take place in the computer 1600 in order to store and execute the software components presented herein. It also should be appreciated that the architecture shown in
It should be appreciated that the computing architecture shown in
The disclosure presented herein also encompasses the subject matter set forth in the following clauses:
Example 1: A computer-implemented method performed by a foldable computing device, comprising: identifying a beginning of a user interface gesture in a first display region of the foldable computing device, wherein the user interface gesture is associated with a user interface item, and wherein the foldable computing device comprises a second display region; detecting an end of the user interface gesture in the first display region of the foldable computing device; and in response to the user interface gesture beginning and ending in the first display region, performing an operation associated with the user interface item and the second display region.
Example 2: The computer-implemented method of Example 1, further comprising: defining a gesture target zone within the first display region; if the user interface gesture ends within the gesture target zone, performing the operation; and if the user interface gesture ends in the first display region outside of the gesture target zone, performing a different operation associated with the first display region or no operation at all.
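The dispatch logic of Example 2 can be sketched as a hit test against a rectangular gesture target zone. This is a minimal illustration under assumed geometry; the class and operation names are hypothetical and the disclosure does not limit the zone to a rectangle.

```python
# Hypothetical rectangular gesture target zone within the first
# display region; coordinates are in display-region pixels.
class TargetZone:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def contains(self, px, py):
        """True when the point (px, py) falls inside the zone."""
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

def on_gesture_end(zone, end_x, end_y):
    """Dispatch per Example 2: ending inside the zone triggers the
    inter-region operation; ending outside it triggers a different,
    first-region operation (or no operation at all)."""
    if zone.contains(end_x, end_y):
        return "inter-region operation"   # e.g., move window to region 2
    return "intra-region operation"
```

A zone placed along the edge nearest the second display region, for example, would route gestures ending near the hinge to the second region while leaving other gestures unaffected.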
Example 3: The computer-implemented method of Example 2, further comprising: during the gesture, when the gesture is within the gesture target zone, displaying a visual effect in the second display region that visualizes the operation that would be performed if the gesture ended.
Example 4: The computer-implemented method of Example 2, further comprising: dynamically adjusting a location, size, orientation, or shape of the gesture target zone during the user interface gesture based on a location, direction, or speed of the user interface gesture.
Example 5: The computer-implemented method of Example 4, wherein the gesture target zone is moved closer to the beginning of the user interface gesture when the speed of the user interface gesture is determined to exceed a defined threshold.
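The speed-based adjustment of Examples 4 and 5 can be sketched as follows. The halving policy and the threshold value are illustrative assumptions; the disclosure only requires that a fast gesture move the zone closer to the gesture's beginning.

```python
# Assumed speed threshold, in px/ms, above which the target zone is
# moved toward the gesture's starting point per Example 5.
SPEED_THRESHOLD = 2.0

def zone_near_edge(gesture_start_x, default_edge_x, gesture_speed):
    """Return the near edge of the gesture target zone. A fast gesture
    moves the edge halfway back toward the gesture start, so the user
    need not travel as far to complete the inter-region operation."""
    if gesture_speed > SPEED_THRESHOLD:
        return (gesture_start_x + default_edge_x) / 2
    return default_edge_x
```

With a gesture starting at x = 100 and a default edge at x = 500, a fast gesture would see the zone begin at x = 300 instead.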
Example 6: The computer-implemented method of Example 2, wherein a location, size, or shape of the gesture target zone is determined based in part on a location of the beginning of the user interface gesture and a location of the second display region.
Example 7: The computer-implemented method of Example 2, wherein the operation is performed in response to the user interface gesture entering the gesture target zone and before the user interface gesture has ended.
Example 8: A foldable computing device, comprising: one or more processors; and at least one non-transitory computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by the one or more processors, cause the foldable computing device to: identify a beginning of a user interface gesture in a first display region of the foldable computing device, wherein the user interface gesture is associated with a user interface item, and wherein the foldable computing device comprises a second display region; after the gesture has started, determine a threshold on the first display region based on a beginning location of the user interface gesture and a location of the second display region; determine that the user interface gesture crossed the threshold; detect an end of the user interface gesture within the first display region; and in response to determining that the user interface gesture crossed the threshold and ended in the first display region, perform an operation associated with the user interface item and the second display region.
Example 9: The foldable computing device of Example 8, wherein the operation associated with the second display region is performed in response to the user interface gesture being performed in less than a defined amount of time.
Example 10: The foldable computing device of Example 8, wherein the foldable computing device has a posture based on an orientation of the first display region to the second display region, and wherein the operation associated with the second display region is selected in part based on the posture.
Example 11: The foldable computing device of Example 8, wherein a size, shape, or position of the threshold is dynamically updated during the user interface gesture based on at least one of a location, direction, or speed of the user interface gesture or a historical location, direction, or speed of the user interface gesture.
Example 12: The foldable computing device of Example 8, wherein the threshold is made visible on the first display region when the user interface gesture has traversed towards the threshold at least a defined percentage of a distance between the beginning of the user interface gesture and the threshold.
Example 13: The foldable computing device of Example 8, wherein the computer-executable instructions further cause the foldable computing device to: determine whether the second display region comprises a user interface gesture target; and set a location of the threshold closer to the beginning of the user interface gesture when the second display region comprises the user interface gesture target.
Example 14: The foldable computing device of Example 8, wherein the computer-executable instructions further cause the foldable computing device to: identify a plurality of gesture targets in the second display region; define a threshold in the first display region for each of the plurality of gesture targets; and select the operation based on which of the plurality of thresholds the user interface gesture crosses.
Example 15: The foldable computing device of Example 14, wherein the user interface gesture is pointed towards two or more of the plurality of gesture targets, wherein one of the two or more of the plurality of gesture targets is closer to the user interface gesture and one of the two or more of the plurality of gesture targets is further from the user interface gesture, wherein the closer gesture target is selected when a velocity of the gesture falls below a defined threshold, and wherein the further gesture target is selected when the velocity of the gesture exceeds the defined threshold.
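The velocity-based disambiguation of Example 15 reduces to a simple comparison. The threshold value below is an assumption; the disclosure states only that a slow gesture selects the nearer target and a fast gesture the farther one.

```python
# Assumed velocity threshold, in px/ms, separating "slow" gestures
# (which select the nearer target) from "fast" gestures (which select
# the farther target), per Example 15.
VELOCITY_THRESHOLD = 1.5

def select_target(near_target, far_target, gesture_velocity):
    """Pick between two gesture targets the gesture points toward."""
    if gesture_velocity > VELOCITY_THRESHOLD:
        return far_target
    return near_target
```

This mirrors common flick semantics: a more energetic gesture "throws" the item farther into the second display region.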
Example 16: A non-transitory computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by a foldable computing device, cause the foldable computing device to: identify a beginning of a user interface gesture in a first display region of the foldable computing device, wherein the user interface gesture is associated with a user interface item, and wherein the foldable computing device comprises a second display region; determine a gesture target zone on the first display region based on a beginning location of the user interface gesture and a location of the second display region; determine that the user interface gesture ended within the gesture target zone; and in response to determining that the user interface gesture ended within the gesture target zone, perform an operation associated with the user interface item and the second display region.
Example 17: The non-transitory computer-readable storage medium of Example 16, wherein the operation is performed only if the user interface gesture ends within a defined period of time after entering the gesture target zone and only if the user interface gesture ends after entering the gesture target zone without changing direction beyond a threshold angle.
Example 18: The non-transitory computer-readable storage medium of Example 16, wherein the computer-executable instructions further cause the foldable computing device to: identify potential gesture targets on the first display region; and in response to the gesture coming within a defined distance of one or more of the potential gesture targets on the first display region, move the gesture target zone closer to the second display region.
Example 19: The non-transitory computer-readable storage medium of Example 16, wherein the user interface gesture comprises a compound gesture that begins as a manipulation gesture that moves the user interface item and ends as a symbolic gesture that does not move the user interface item, and wherein the symbolic gesture determines or modifies which of a plurality of operations associated with the second display region are performed.
Example 20: The non-transitory computer-readable storage medium of Example 19, wherein the manipulation gesture moves the user interface item to the second display region before the symbolic gesture is performed.
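The compound gesture of Examples 19 and 20 can be sketched as a two-phase interpreter: a manipulation phase that drags the item to the second region, followed by a symbolic suffix that selects among the available operations. The suffix names and operation mapping below are purely illustrative assumptions.

```python
def interpret_compound(drag_end_region, symbolic_suffix):
    """Interpret a compound gesture per Examples 19-20: the manipulation
    portion must first move the item to the second region; the symbolic
    portion then determines which operation is applied to the item.
    Suffix names ("hook-up", "hook-down") are hypothetical examples."""
    if drag_end_region != "second":
        return None  # manipulation phase did not reach the second region
    operations = {
        "hook-up": "maximize",    # assumed: upward hook maximizes
        "hook-down": "minimize",  # assumed: downward hook minimizes
        "none": "move",           # no symbolic suffix: plain move
    }
    return operations.get(symbolic_suffix, "move")
```

The key property is that the symbolic portion modifies the outcome without further moving the item, as Example 19 requires.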
Based on the foregoing, it should be appreciated that technologies for predictive gesture optimizations for moving objects across display boundaries have been disclosed herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological and transformative acts, specific computing machinery, and computer readable media, it is to be understood that the subject matter set forth in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts and mediums are disclosed as example forms of implementing the claimed subject matter.
The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes can be made to the subject matter described herein without following the example configurations and applications illustrated and described, and without departing from the scope of the present disclosure, which is set forth in the following claims.
This application claims priority to U.S. Provisional Patent Application No. 62/909,209, entitled “PREDICTIVE GESTURE OPTIMIZATIONS FOR MOVING OBJECTS ACROSS DISPLAY BOUNDARIES,” which was filed Oct. 1, 2019, and which is expressly incorporated herein by reference in its entirety.