Users of smart phones, tablets, and other touch devices are familiar with touching the screen of the device to cause the device to perform an action. The touch action generally simulates a mouse click or button press. Conventionally, touch-sensitive screens have also supported gestures in which one or two fingers were placed on the touch-sensitive screen and then moved in an identifiable pattern. For example, users may interact with an input/output interface on the touch-sensitive screen using gestures like a swipe, a pinch, a spread, a tap or double tap, or other gestures. Conventionally, the touch-sensitive screen had a single touch point, or a pair of touch points for gestures like a pinch.
Devices like smart phones and tablets may also be configured with screens that are hover-sensitive. Hover-sensitive screens may rely on proximity detectors to detect objects that are within a certain distance of the screen. Conventional hover-sensitive screens detected single objects in a hover-space associated with the hover-sensitive device and responded to events like a hover-space entry event or a hover-space exit event. Conventional hover-sensitive devices typically attempted to implement actions that were familiar to users of touch-sensitive devices. When presented with two or more objects in a hover-space, a hover-sensitive device may have identified the first entry as being the hover point and may have ignored other items in the hover-space.
Some devices may have screens that are both touch-sensitive and hover-sensitive. Conventionally, devices with screens that are both touch-sensitive and hover-sensitive may have responded to touch events or to hover events. While a rich set of interactions may be possible using a screen in a touch mode or a hover mode, this binary approach may have limited the richness of the experience possible for an interface that is both touch-sensitive and hover-sensitive. Some conventional devices may have responded to gestures that started with a touch event and then proceeded to a hover event. Limiting interactions to require an initiating touch may have needlessly limited the user experience. Some devices with screens that are both touch-sensitive and hover-sensitive may have interacted with a single touch point or a single hover point. Limiting interactions to a single touch or hover point may have limited the richness of the experience possible for users of devices. Some conventional devices may have responded to hover gestures that were tied to an object displayed on the screen. For example, hovering over a displayed control may have accessed the control. The control may then have been manipulated using a gesture (e.g., swipe up, swipe down). Limiting hover interactions to only operate on objects or controls that are displayed on a screen may have needlessly limited the user experience.
This Summary is provided to introduce, in a simplified form, a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Example methods and apparatus are directed towards interacting with a hover-sensitive device using gestures that include multiple hover points. A multiple hover point gesture may rely on a sequence or combination of gestures to produce a different user interaction with a screen that has hover-sensitivity. The multiple hover point gestures may include a hover gather, a hover spread, a crank or knob gesture, a poof or explode gesture, a slingshot gesture, or other gesture. By identifying, characterizing, and tracking multiple hover points using the hover capability provided by an interface that is hover-sensitive, example methods and apparatus provide new gestures that may be intuitive for users and that may increase productivity or facilitate new interactions with applications (e.g., games, email, video editing) running on a device with the interface.
Some embodiments may include logics that detect, characterize, and track multiple hover points. Some embodiments may include logics that identify elements of the multiple hover point gestures from the detection, characterization, and tracking data. Some embodiments may maintain a state machine and user interface in response to detecting the elements of the multiple hover point gestures. Detecting elements of the multiple hover point gestures may involve receiving events from the user interface. For example, events like a hover enter event, a hover exit event, a hover approach event, a hover retreat event, a hover point move event, or other events may be detected as a user positions and moves their fingers or other objects in a hover-space associated with a device. Some embodiments may also produce gesture events that can be handled or otherwise processed by other devices or processes.
The accompanying drawings illustrate various example apparatus, methods, and other embodiments described herein. It will be appreciated that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. In some examples, one element may be designed as multiple elements or multiple elements may be designed as one element. In some examples, an element shown as an internal component of another element may be implemented as an external component and vice versa. Furthermore, elements may not be drawn to scale.
Example apparatus and methods concern multiple hover point gesture interactions with a device. The device may have an interface that is hover-sensitive.
The device 100 may include a proximity detector that detects when an object (e.g., digit, pencil, stylus with capacitive tip) is close to but not touching the i/o interface 110. Hover user interactions may be performed in the hover-space 150 without touching the device 100. The proximity detector may identify the location (x, y, z) of an object (e.g., finger) 160 in the three-dimensional hover-space 150, where x and y are parallel to the proximity detector and z is perpendicular to the proximity detector. The proximity detector may also identify other attributes of the object 160 including, for example, how close the object 160 is to the i/o interface (e.g., z distance), the speed with which the object 160 is moving in the hover-space 150, the orientation (e.g., pitch, roll, yaw) of the object 160 with respect to the device 100 or hover-space 150, the direction in which the object 160 is moving with respect to the hover-space 150 or device 100 (e.g., approaching, retreating), a gesture (e.g., gather, spread) made by the object 160, or other attributes of the object 160. While conventional interfaces may have handled a single object, the proximity detector may detect more than one object in the hover-space 150. For example, object 160 and object 170 may be simultaneously detected, characterized, tracked, and considered together as performing a multiple hover point gesture.
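By way of illustration only, the attributes reported for a detected object could be collected in a record such as the following sketch; the names (HoverPoint, position, velocity, and so on) are hypothetical and are not part of the described apparatus.

```python
from dataclasses import dataclass, field
from typing import Tuple
import time

@dataclass
class HoverPoint:
    """Illustrative record of one object detected in the hover-space (names are hypothetical)."""
    point_id: int                             # identity, so several objects can be considered together
    position: Tuple[float, float, float]      # (x, y, z); z is distance from the i/o interface
    velocity: Tuple[float, float, float]      # rate of movement along each axis
    orientation: Tuple[float, float, float]   # (pitch, roll, yaw) with respect to the device
    approaching: bool                         # True when z is decreasing (moving toward the interface)
    timestamp: float = field(default_factory=time.time)

# Two objects (e.g., object 160 and object 170) detected simultaneously in the hover-space:
finger = HoverPoint(1, (0.02, 0.05, 0.008), (0.0, 0.0, -0.001), (0.0, 0.0, 0.0), True)
thumb = HoverPoint(2, (0.06, 0.05, 0.010), (0.0, 0.0, -0.001), (0.0, 0.0, 0.0), True)
```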
In different examples, the proximity detector may use active or passive systems. For example, the proximity detector may use sensing technologies including, but not limited to, capacitive, electric field, inductive, Hall effect, Reed effect, Eddy current, magneto resistive, optical shadow, optical visual light, optical infrared (IR), optical color recognition, ultrasonic, acoustic emission, radar, heat, sonar, conductive, and resistive technologies. Active systems may include, among other systems, infrared or ultrasonic systems. Passive systems may include, among other systems, capacitive or optical shadow systems. In one embodiment, when the proximity detector uses capacitive technology, the detector may include a set of capacitive sensing nodes to detect a capacitance change in the hover-space 150. The capacitance change may be caused, for example, by a digit(s) (e.g., finger, thumb) or other object(s) (e.g., pen, capacitive stylus) that comes within the detection range of the capacitive sensing nodes. In another embodiment, when the proximity detector uses infrared light, the proximity detector may transmit infrared light and detect reflections of that light from an object within the detection range (e.g., in the hover-space 150) of the infrared sensors. Similarly, when the proximity detector uses ultrasonic sound, the proximity detector may transmit a sound into the hover-space 150 and then measure the echoes of the sounds. In another embodiment, when the proximity detector uses a photo-detector, the proximity detector may track changes in light intensity. Increases in intensity may reveal the removal of an object from the hover-space 150 while decreases in intensity may reveal the entry of an object into the hover-space 150.
In general, a proximity detector includes a set of proximity sensors that generate a set of sensing fields in the hover-space 150 associated with the i/o interface 110. The proximity detector generates a signal when an object is detected in the hover-space 150. In one embodiment, a single sensing field may be employed. In other embodiments, two or more sensing fields may be employed. In one embodiment, a single technology may be used to detect or characterize the object 160 in the hover-space 150. In another embodiment, a combination of two or more technologies may be used to detect or characterize the object 160 in the hover-space 150.
In one embodiment, characterizing the object includes receiving a signal from a detection system (e.g., proximity detector) provided by the device. The detection system may be an active detection system (e.g., infrared, ultrasonic), a passive detection system (e.g., capacitive), or a combination of systems. The detection system may be incorporated into the device or provided by the device.
Characterizing the object may also include other actions. For example, characterizing the object may include determining that an object (e.g., digit, stylus) has entered the hover-space or has left the hover-space. Characterizing the object may also include identifying the presence of an object at a pre-determined location in the hover-space. The pre-determined location may be relative to the i/o interface.
When at least one of the multiple hover points that were characterized moves, example apparatus and methods may track the movement of the hover point. The tracking may involve relating characterizations that are performed at different times. When at least one of the multiple hover points that were characterized has been tracked, then the track state 230 may be achieved. Once multiple hover points have been detected, characterized, and tracked, it may be possible to select a multiple hover point gesture based, at least in part, on the size, shape, movement, and relative movement of the hover points. For example, multiple hover points that move inwards towards each other may describe a gather gesture while multiple hover points that move outwards from each other may describe a spread gesture. Multiple hover points that rotate about a central point may describe a crank or knob gesture. When the identification, characterization, and tracking data match a gesture pattern, then the select state 240 may be achieved.
Once the select state 240 has been achieved, actions that preceded the selection or actions that follow the selection may be evaluated to determine what control to exercise during the control state 250. During the control state 250, the multiple hover point gesture may cause the apparatus to be controlled (e.g., turn on, turn off, increase volume, decrease volume, increase intensity, decrease intensity), may cause an application being run on the device to be controlled (e.g., start application, stop application, pause application), may cause an object displayed on the device to be controlled (e.g., moved, rotated, size increased, size decreased), or may cause other actions.
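By way of illustration only, the progression through the detect and characterize states and the track (230), select (240), and control (250) states could be pictured as a small state machine; the sketch below, with its assumed state names and transitions, is illustrative rather than a definitive implementation.

```python
from enum import Enum, auto

class GestureState(Enum):
    DETECT = auto()        # one or more hover points detected
    CHARACTERIZE = auto()  # size, shape, and position recorded per hover point
    TRACK = auto()         # characterizations related across time (track state 230)
    SELECT = auto()        # data matched a gesture pattern (select state 240)
    CONTROL = auto()       # control exercised over device, application, or object (control state 250)

# Allowed transitions; failing to match a gesture pattern falls back to DETECT.
TRANSITIONS = {
    GestureState.DETECT: {GestureState.CHARACTERIZE},
    GestureState.CHARACTERIZE: {GestureState.TRACK, GestureState.DETECT},
    GestureState.TRACK: {GestureState.SELECT, GestureState.DETECT},
    GestureState.SELECT: {GestureState.CONTROL, GestureState.DETECT},
    GestureState.CONTROL: {GestureState.DETECT},
}

def advance(current: GestureState, target: GestureState) -> GestureState:
    """Move to the target state if the transition is allowed, otherwise stay in the current state."""
    return target if target in TRANSITIONS[current] else current
```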
Unlike a conventional touch screen pinch gesture where only two points are brought together, example gather gestures may be extended to include a three, four, five, or more point gather gesture. Thus, rather than simply bringing two points together along a single connecting line, example multiple hover point gather gestures may gather together items in a virtual area or volume. Similarly, rather than simply pinching a single item represented in a flat space on a display, a multiple hover point gather may grab multiple objects represented in a three dimensional display. Additionally, rather than manipulating an object in just one dimension (e.g., linearly decrease size of object pinched), example apparatus and methods may manipulate an object in three dimensions. For example, a sphere or other three dimensional volume (e.g., apple) that is manipulated by a multiple hover point gather gesture may shrink spherically, rather than just linearly. In one embodiment, the multiple hover point gather gesture may simply bring two points together in an x/y plane along a single connecting line. Example apparatus and methods may perform the gather gesture without requiring interaction with a touch screen, without requiring interaction with a camera-based system, and without reference to any particular object displayed on device 300. Note that device 300 is not displaying any objects. The gather gesture may be used with respect to objects, but may also be used to control things other than individual objects displayed on device 300. Thus, example apparatus and methods may operate more independently than conventional systems that require touches, cameras, or interactions with specific objects.
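By way of a hedged sketch of the "shrink spherically" behavior, a uniform three dimensional scale factor could be derived from how much the hover points converge toward their common centroid; the helper names below (centroid, mean_spread, gather_scale) are assumptions made for illustration, not the described method.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def centroid(points: List[Point]) -> Point:
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def mean_spread(points: List[Point]) -> float:
    """Average distance of the hover points from their common centroid."""
    c = centroid(points)
    return sum(sum((p[i] - c[i]) ** 2 for i in range(3)) ** 0.5 for p in points) / len(points)

def gather_scale(before: List[Point], after: List[Point]) -> float:
    """Uniform scale factor applied in x, y, and z (< 1 for a gather, > 1 for a spread)."""
    return mean_spread(after) / mean_spread(before)

# Three fingers moving inward shrink a volume spherically rather than along one line:
before = [(0.00, 0.00, 0.02), (0.06, 0.00, 0.02), (0.03, 0.05, 0.02)]
after = [(0.01, 0.01, 0.02), (0.05, 0.01, 0.02), (0.03, 0.04, 0.02)]
scale = gather_scale(before, after)   # roughly 0.6 for these sample points
```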
A conventional one dimensional spread may only enlarge a selected object in a single dimension, while an example multiple hover point spread operating in three dimensions may enlarge objects in multiple dimensions. The spread gesture may also be used in other applications like gaming control (e.g., spreading magic dust), arts and crafts (e.g., throwing paint in modern art), industrial control (e.g., spraying a virtual mist onto a control surface), engineering (e.g., computer aided drafting), and other applications. Unlike conventional touch spread gestures that operate to change a single dimension of a single selected item, example apparatus and methods may operate on a set of objects in an area or volume without first identifying or referencing those objects. Instead, a multiple hover point spread gesture may be used to generate a spread control event for which an object, user interface, application, portion of a device, or device may subsequently be selected for control. While users may be familiar with the touch spread gesture to enlarge objects, a hover spread may be performed to control other actions. Note that device 300 is not displaying any objects. This illustrates that the spread may be used to exercise other, non-object centric control. For example, the multiple hover point spread gesture may be used to control broadcast power, social circle size for a notification or post, volume, intensity, or other non-object attributes.
Hover technology is used to detect an object in a hover-space. “Hover technology” and “hover-sensitive” refer to sensing an object spaced away from (e.g., not touching) yet in close proximity to a display in an electronic device. “Close proximity” may mean, for example, beyond 1 mm but within 1 cm, beyond 0.1 mm but within 10 cm, or other combinations of ranges. Being in close proximity includes being within a range where a proximity detector can detect and characterize an object in the hover-space. The device may be, for example, a phone, a tablet computer, a computer, or other device. Hover technology may depend on a proximity detector(s) associated with the device that is hover-sensitive. Example apparatus may include the proximity detector(s).
Example apparatus and methods may identify objects located in the hover-space bounded by i/o interface 500 and line 520. Example apparatus and methods may also identify gestures performed in the hover-space. For example, at a first time T1, an object 510 may be detectable in the hover-space and an object 512 may not be detectable in the hover-space. At a second time T2, object 512 may have entered the hover-space and may actually come closer to the i/o interface 500 than object 510. At a third time T3, object 510 may retreat from i/o interface 500. When an object enters or exits the hover-space, an event may be generated. Example apparatus and methods may interact with events at this granular level (e.g., hover enter, hover exit, hover move) or may interact with events at a higher granularity (e.g., hover gather, hover spread). Generating an event may include, for example, making a function call, producing an interrupt, updating a value in a computer memory, updating a value in a register, sending a message to a service, sending a signal, or other action that identifies that an action has occurred. Generating an event may also include providing descriptive data about the event. For example, a location where the event occurred, a title of the event, and an object involved in the event may be identified.
In computing, an event is an action or occurrence detected by a program that may be handled by the program. Typically, events are handled synchronously with the program flow. When handled synchronously, the program may have a dedicated place where events are handled. Events may be handled in, for example, an event loop. Typical sources of events include users pressing keys, touching an interface, performing a gesture, or taking another user interface action. Another source of events is a hardware device such as a timer. A program may trigger its own custom set of events. A computer program that changes its behavior in response to events is said to be event-driven.
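By way of illustration only, the event-driven behavior described above might be organized around a simple synchronous dispatch loop; the EventLoop class, its method names, and the event payloads below are hypothetical.

```python
from collections import deque
from typing import Callable, Dict, List

Handler = Callable[[dict], None]

class EventLoop:
    """Minimal synchronous event loop: events are queued as they occur, then handled in order."""

    def __init__(self) -> None:
        self.queue: deque = deque()
        self.handlers: Dict[str, List[Handler]] = {}

    def on(self, event_type: str, handler: Handler) -> None:
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type: str, **data) -> None:
        # Generating an event: record its type plus descriptive data (location, object, ...).
        self.queue.append({"type": event_type, **data})

    def run_once(self) -> None:
        while self.queue:
            event = self.queue.popleft()
            for handler in self.handlers.get(event["type"], []):
                handler(event)

loop = EventLoop()
loop.on("hover_enter", lambda e: print("object", e["object_id"], "entered at", e["location"]))
loop.post("hover_enter", object_id=512, location=(0.04, 0.03, 0.01))
loop.run_once()
```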
Region 490 also illustrates an object 440. Object 440 may be a graphic, icon, or other representation of an item displayed by i/o interface 400. Since object 440 has been bracketed by the hover points produced by object 410 and object 412, object 440 may be a target for a multiple hover point gesture. The appearance of object 440 may be manipulated to indicate that object 440 is the target of a gesture. If the distance between the hover point associated with circle 430 and the object 440 and the distance between the hover point associated with circle 432 and the object 440 are within gesture thresholds, then the user interface or gesture state may be changed to indicate that a certain gesture (e.g., hover gather) is in progress. While a conventional pinch may operate only on a single object 440 and may require an object to be disposed between touch points, example apparatus and methods are not so limited and may produce a control gather event regardless of whether an object is disposed between the hover points 430 and 432. This type of non-object gather may be used to control an attribute of an apparatus (e.g., reduce transmit power, enter airplane mode) rather than shrinking an object displayed on interface 400.
Not only are the hover points associated with the objects 410, 412, 414, and 416 converging towards a focal point of an ellipse described by the points, but the points are also retreating from the interface 400. Unlike a conventional system that could only collapse two points together along a line, the multiple hover point gather gesture may collect items in an area. Unlike the conventional system that could only operate on one plane, the multiple hover point gather gesture may “lift” objects in the z direction at the same time the objects in the ellipse are gathered together. Consider an application that displays photos. A user may wish to collect a set of photos together and place them in a folder. Conventionally, a user may have to select all the photos and then perform a separate action to move the photos. Using the multiple hover point gather gesture with a retreating action, the user may collect the photos and place them in another location in a single gesture. This may reduce memory requirements for a user interface, reduce processing requirements for moving a collection of items, and reduce the time required to perform this action.
In one embodiment, the z distance of hover points associated with a crank gesture may also be considered. For example, a cranking gesture that is approaching the i/o interface 400 may produce a first control while a cranking gesture that is retreating from the i/o interface 400 may produce a second, different control. For example, in a game where a user is spinning a dreidel, teetotum, or other spinning top, the object being spun may drill down into the surface or may helicopter away from the surface based, at least in part, on whether the crank gesture was approaching or retreating from the i/o interface 400. In one embodiment, the crank gesture may be part of a ratchet gesture. For example, after cranking to the right at a first speed that exceeds a speed threshold, a user may return their fingers to the left at a second slower speed that does not exceed the speed threshold. The user may then repeat cranking to the right at the first faster speed and returning to the left at the second slower speed. In this gesture, not only the movement of the fingers but also the speed at which the fingers move determines the gesture. Like an actual ratchet device (e.g., socket wrench), the ratchet gesture may be used to perform multiple turns on an object with only turns in one direction being applied to the object, the turns in the opposite direction being ignored. In one embodiment, the ratchet gesture may be achieved by varying the speed at which the fingers perform the crank gesture. In another embodiment, the ratchet gesture may be achieved by varying the width of the fingers during the crank. For example, when the fingers are at a first narrower distance (e.g., 1 cm) the crank may be applied to an object while when the fingers are returning at a second wider distance (e.g., 5 cm) the crank may not be applied.
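By way of illustration only, the speed-based variant of the ratchet gesture could be reduced to a single threshold test per sample of angular velocity; the threshold value and function name below are assumptions.

```python
SPEED_THRESHOLD_DPS = 90.0  # degrees per second; an assumed tuning value

def ratchet_rotation(angular_velocity_dps: float) -> float:
    """Rotation to apply to the controlled object for one sample of a crank.

    Positive angular velocity is the working (e.g., rightward) stroke. Strokes below
    the speed threshold, including the slower return stroke, are ignored, like the
    return stroke of a socket wrench.
    """
    if angular_velocity_dps > SPEED_THRESHOLD_DPS:
        return angular_velocity_dps
    return 0.0

# Fast crank right, slow return left, fast crank right again:
samples = [120.0, -40.0, 130.0]
applied = sum(ratchet_rotation(s) for s in samples)   # only the two fast strokes are applied
```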
While multiple hover point gestures including a gather, spread, and crank have been described, and while both approaching and retreating variations of these gestures have been described, other multiple hover point gestures are possible. For example, a multiple hover point sling shot gesture may be performed by pinching two fingers together and then moving the pinched fingers away from the initial pinch point to a release point. The displacement in the x, y, or z directions may control the velocity, angle, and direction at which an object that was pulled back in the sling shot may be propelled in a virtual world over which the gesture was performed.
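By way of illustration only, one plausible (assumed, not prescribed) mapping for the sling shot gesture launches the object back along the pull vector, with launch speed proportional to the displacement from the initial pinch point to the release point.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def slingshot_velocity(pinch_point: Vec3, release_point: Vec3, gain: float = 50.0) -> Vec3:
    """Velocity imparted to the propelled object.

    The pull vector runs from the initial pinch point to the release point; the object
    is launched back along that vector, scaled by an assumed gain factor, so the x, y,
    and z displacement all contribute to the speed, angle, and direction of the launch.
    """
    pull = tuple(r - p for r, p in zip(release_point, pinch_point))
    return tuple(-gain * component for component in pull)

velocity = slingshot_velocity(pinch_point=(0.05, 0.05, 0.01), release_point=(0.02, 0.09, 0.02))
```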
More generally, example apparatus and methods may detect multiple hover points, characterize those multiple hover points, track the hover points, and identify a gesture from the characterization and tracking data. Control may then be exercised based on the gesture that is identified and the movements of the multiple hover points. The control may be based on factors including, but not limited to, the direction(s) in which the hover points move, the rate(s) at which the hover points move, the co-ordination between the multiple hover points, the duration of the gesture, and other factors. In one embodiment, the multiple hover point gestures do not involve a touch, a camera, or any particular item being displayed on an interface with which the gesture is performed.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a memory. These algorithmic descriptions and representations are used by those skilled in the art to convey the substance of their work to others. An algorithm is considered to be a sequence of operations that produce a result. The operations may include creating and manipulating physical quantities that may take the form of electronic values. Creating or manipulating a physical quantity in the form of an electronic value produces a concrete, tangible, useful, real-world result.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, and other terms. It should be borne in mind, however, that these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, it is appreciated that throughout the description, terms including processing, computing, and determining, refer to actions and processes of a computer system, logic, processor, or similar electronic device that manipulates and transforms data represented as physical quantities (e.g., electronic values).
Example methods may be better appreciated with reference to flow diagrams. For simplicity, the illustrated methodologies are shown and described as a series of blocks. However, the methodologies may not be limited by the order of the blocks because, in some embodiments, the blocks may occur in different orders than shown and described. Moreover, fewer than all the illustrated blocks may be required to implement an example methodology. Blocks may be combined or separated into multiple components. Furthermore, additional or alternative methodologies can employ additional, not illustrated blocks.
Different objects may have different positions, sizes, and movements. Therefore, method 1500 may also include, at 1520, producing independent characterization data for members of the plurality of hover points. In one embodiment, the characterization data for a member of the plurality of hover points describes an (x, y, z) position in the hover-space. Position is one attribute of an object in the hover space. Size is another attribute of an object. Therefore, in one embodiment, the characterization data may also include an x length measurement of the object and a y length measurement of the object. Gestures involve motion. However, a gesture may not involve constant motion. For example, in a sling shot gesture, the pinch and pull portion may be separated from a release portion by a pause while a user lines up their shot. Thus, in one embodiment, the characterization data may also include an amount of time the member has been at the x position, an amount of time the member has been at the y position, and an amount of time the member has been at the z position. If the time exceeds a threshold, then a gesture may not be detected. Some gestures are defined as involving just fingers, a single finger and a single thumb, or other combinations of digits, stylus, or other object. Therefore, in one embodiment, the characterization data may also include data describing the likelihood that the member is a finger, data describing the likelihood that the member is a thumb, or data describing the likelihood that the member is a portion of a hand other than a finger or thumb.
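By way of illustration only, the characterization data enumerated above could be gathered into a record like the following; the field names and the dwell-time threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CharacterizationData:
    """Independent characterization of one member of the plurality of hover points."""
    x: float                 # (x, y, z) position in the hover-space
    y: float
    z: float
    x_length: float          # extent of the object along x
    y_length: float          # extent of the object along y
    time_at_x: float         # seconds the member has been at the x position
    time_at_y: float         # seconds the member has been at the y position
    time_at_z: float         # seconds the member has been at the z position
    p_finger: float          # likelihood the member is a finger
    p_thumb: float           # likelihood the member is a thumb
    p_other: float           # likelihood the member is a portion of a hand other than a finger or thumb

    def stationary_too_long(self, threshold_s: float = 2.0) -> bool:
        """If the time at a position exceeds a threshold, a gesture may not be detected."""
        return min(self.time_at_x, self.time_at_y, self.time_at_z) > threshold_s
```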
In one embodiment, the characterization data is produced without using a camera or a touch sensor. Additionally, the characterization data may be produced without reference to an object displayed on the apparatus. Thus, unlike conventional systems where a user touches an object on a screen and then performs a hover gesture on the selected item, method 1500 may proceed without a touch on the screen and without relying on any particular item being displayed on the screen. This facilitates, for example, controlling volume or brightness without having to consume display space with a volume control or brightness control.
A gesture involves motion. Therefore, method 1500 may also include, at 1530, producing independent tracking data for members of the plurality of hover points. The tracking data facilitates determining whether the objects, and thus the hover points associated with the objects, have moved in identifiable correlated patterns associated with a specific multiple hover point gesture.
In one embodiment, the tracking data for a member of the plurality of hover points describes an (x, y, z) position in the hover-space for the member. The tracking data is not only concerned with where an object is located, but also with where the hover point has been, how quickly the hover point is moving, and how long the hover point has been moving. Thus, in one embodiment, the tracking data may include a measurement of how much the hover point has moved in the x, y, or z direction, and a rate at which the hover point is moving in the x, y, or z direction. The tracking data may also include a measurement of how long the hover point has been moving in the x direction, the y direction, or the z direction. The rate at which a hover point is moving may be used to allow the gesture to operate in four dimensions (e.g., x, y, z, time). For example, a crank gesture may be used to turn an object, or, more generally, to exert rotational control. The amount of time for which the rotational control will be exercised may be a function of the rate at which the hover points move during the gesture.
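By way of illustration only, the tracking data described above could be represented as follows, with the crank duration expressed as a function of the rate of movement; the field names, rotation_duration, and the scaling rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrackingData:
    """Independent tracking data for one member of the plurality of hover points."""
    x: float                 # current (x, y, z) position
    y: float
    z: float
    dx: float                # how much the hover point has moved along each axis
    dy: float
    dz: float
    vx: float                # rate at which the hover point is moving along each axis
    vy: float
    vz: float
    tx: float                # how long the hover point has been moving along each axis, seconds
    ty: float
    tz: float

def rotation_duration(track: TrackingData, base_seconds: float = 1.0) -> float:
    """For a crank, the time rotational control is exercised may scale with the rate of movement."""
    speed = (track.vx ** 2 + track.vy ** 2 + track.vz ** 2) ** 0.5
    return base_seconds * speed
```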
Conventional systems may have tracked single hover points for simple gestures. Example methods and apparatus may track multiple hover points for more complicated gestures. The more complicated gestures involve coordinated movement by two or more objects. Thus, the tracking data for a hover point may describe a degree of correlation between how the hover point has been moving and how other hover points have been moving. For example, the tracking data may store information that a first hover point has moved linearly a certain amount and in a certain direction during a time window. The tracking data may also store information that a second hover point has moved linearly a certain amount and in a certain direction during the time window. The tracking data may also store information that the first and second hover points have moved a similar distance in a similar direction in the time window. Or the tracking data may store information that the first and second hover points have moved a similar distance in opposite directions in the time window.
Just as the hover points are detected without using a camera or touch sensor, the tracking data may be produced without using a camera or a touch sensor. Unlike conventional systems that are designed to only manipulate objects that are displayed on a device, the tracking data may be produced without reference to an object displayed on the apparatus. Thus, the tracking data may be used to identify multiple hover point gestures that will control the apparatus as a whole, a subsystem of the apparatus, or a process running on the apparatus, rather than just an object displayed on the apparatus.
Method 1500 may also include, at 1540, identifying a multiple hover point gesture based, at least in part, on the characterization data and the tracking data. A multiple hover point gesture like a crank involves the coordinated movement of, for example, two fingers and a thumb. The movements may be simultaneous rotational motion around an axis. In different embodiments, the multiple hover point gesture may be a gather gesture, a spread gesture, a crank gesture, a roll gesture, a ratchet gesture, a poof gesture, or a sling shot gesture. Other gestures may be identified. The identification may involve determining that a threshold number of objects have moved in identifiable related paths within a threshold period of time. For example, for the gather gesture, two, three, or more objects may have to move towards a gather point along substantially linear paths that would intersect. For the spread gesture, two, three, or more objects may have to move outwards from a distribution point along substantially linear paths that would not intersect. For a poof gesture, two coordinated spread gestures may need to be performed by two separate sets of hover points. For example, a user may need to perform a spread gesture with both the right hand and the left hand, at the same time, and at a sufficient rate, to generate the poof gesture.
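By way of illustration only, the identification step could test whether a threshold number of hover points moved along related paths within a threshold period of time by comparing each point's distance to the group centroid before and after the window; the classifier below is an assumed sketch that covers only the gather and spread cases.

```python
from typing import List, Optional, Tuple

Vec2 = Tuple[float, float]

def classify_gesture(starts: List[Vec2], ends: List[Vec2], elapsed_s: float,
                     min_points: int = 2, max_elapsed_s: float = 1.0,
                     min_travel: float = 0.005) -> Optional[str]:
    """Return 'gather', 'spread', or None for one window of characterization and tracking data."""
    if len(starts) < min_points or elapsed_s > max_elapsed_s:
        return None                              # not enough objects, or not within the time threshold
    cx = sum(p[0] for p in starts) / len(starts)
    cy = sum(p[1] for p in starts) / len(starts)
    inward = outward = 0
    for (sx, sy), (ex, ey) in zip(starts, ends):
        d_start = ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5
        d_end = ((ex - cx) ** 2 + (ey - cy) ** 2) ** 0.5
        if abs(d_end - d_start) < min_travel:
            return None                          # a point that barely moved breaks the pattern
        inward += d_end < d_start
        outward += d_end > d_start
    if inward == len(starts):
        return "gather"                          # paths converge toward a gather point
    if outward == len(starts):
        return "spread"                          # paths diverge from a distribution point
    return None
```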
In one example, a method may be implemented as computer executable instructions. Thus, in one example, a computer-readable storage medium may store computer executable instructions that if executed by a machine (e.g., computer) cause the machine to perform methods described or claimed herein including methods 1500 or 1600. While executable instructions associated with the listed methods are described as being stored on a computer-readable storage medium, it is to be appreciated that executable instructions associated with other example methods described or claimed herein may also be stored on a computer-readable storage medium. In different embodiments, the example methods described herein may be triggered in different ways. In one embodiment, a method may be triggered manually by a user. In another example, a method may be triggered automatically.
The proximity detector 1760 may detect an object 1780 in a hover-space 1770 associated with the apparatus 1700. The proximity detector 1760 may also detect another object 1790 in the hover-space 1770. In one embodiment, the proximity detector 1760 may detect, characterize, and track multiple objects in the hover-space simultaneously. The hover-space 1770 may be, for example, a three dimensional volume disposed in proximity to the i/o interface 1750 and in an area accessible to the proximity detector 1760. The hover-space 1770 has finite bounds. Therefore the proximity detector 1760 may not detect an object 1799 that is positioned outside the hover-space 1770. A user may place a digit in the hover-space 1770, may place multiple digits in the hover-space 1770, may place their hand in the hover-space 1770, may place an object (e.g., stylus) in the hover-space, may make a gesture in the hover-space 1770, may remove a digit from the hover-space 1770, or take other actions. The entry of an object into hover-space 1770 may produce a hover-enter event. The exit of an object from hover-space 1770 may produce a hover-exit event. The movement of an object in hover-space 1770 may produce a hover-move event. Example methods and apparatus may interact with (e.g., handle) these hover events.
Apparatus 1700 may include a hover-sensitive input/output interface 1750. The hover-sensitive input/output interface 1750 may be configured to produce a hover event associated with an object in a hover-space associated with the hover-sensitive input/output interface 1750. The hover event may be, for example, a hover enter event that identifies that an object has entered the hover space and describes the position, size, trajectory or other information associated with the object.
Apparatus 1700 may include a first logic 1732 that is configured to handle the hover event. The hover event may be detected in response to a signal provided by the hover-sensitive input/output interface 1750, in response to an interrupt generated by the input/output interface 1750, in response to data written to a memory, register, or other location by the input/output interface 1750, or in other ways. Thus, handling the hover event involves automatically detecting a change in a physical item.
In one embodiment, the first logic 1732 handles the hover event by generating data for the object that caused the hover event. The data may include, for example, position data, path data, and tracking data. In one embodiment, the position data may be (x, y, z) coordinate data for the object that caused the hover event. In one embodiment, the position data may be angle and distance data that relates the object to a reference point associated with the device. In one embodiment, the position data may include relationships between objects in the hover space.
The tracking data may describe where the object that produced the hover point has been. In one embodiment, the tracking data may include a linked list or other organized collection of points at which the object that produced the hover event has been located. In one embodiment, the tracking data may include a function that describes the trajectory taken by the object that produced the hover event. The function may be described using, for example, plane geometry, solid geometry, spherical geometry, or other models. In one embodiment, the tracking data may include a reference to other tracks taken by other objects in the hover space. The path data may describe where the object that produced the hover point is likely headed. In one embodiment, the path data may include a set of projected points that the hover point may visit based, at least in part, on where the hover point is, where the hover point has been, and the rate at which the hover point is moving. In one embodiment, the path data may include a function that describes the trajectory likely to be taken by the object that produced the hover event. The function may be described using, for example, plane geometry, solid geometry, spherical geometry, or other models.
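By way of illustration only, the path data's projected points could be produced from the hover point's position and rate of movement using a straight-line model, which is just one of the geometric models mentioned above; the function name and parameters are assumptions.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def project_path(position: Vec3, velocity: Vec3, steps: int = 5, dt: float = 0.05) -> List[Vec3]:
    """Projected points the hover point may visit, assuming straight-line motion."""
    return [tuple(position[i] + velocity[i] * dt * k for i in range(3))
            for k in range(1, steps + 1)]

# Where the hover point is, plus the rate at which it is moving, yields its likely path:
likely_path = project_path(position=(0.03, 0.04, 0.012), velocity=(0.0, -0.02, 0.0))
```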
Apparatus 1700 may include a second logic 1734 that is configured to detect a multiple hover point gesture. A multiple hover point gesture involves at least two hover points, where at least one of the hover points moves. Since apparatus 1700 is using an event driven approach, the second logic 1734 may detect the multiple hover point gesture based, at least in part, on hover events generated by objects in the hover-space. For example, a set of hover enter events followed by a series of hover move events that produce data that describe related paths and tracks within a threshold period of time may yield a multiple hover point gesture detection. The event driven approach differs from conventional camera based approaches that perform image processing. The event driven approach also differs from conventional systems that perform constant detection or tracking. Rather than busy waiting for motion or wasting resources on an object that is not moving, the event driven approach may conserve resources by responding to motion.
In one embodiment, the second logic 1734 detects a multiple hover point gesture by correlating movements between the two or more objects. In one embodiment, the movements are correlated as a function of analyzing the position data, the path data, or the tracking data. A user may be using two different fingers to perform two different functions on a device. For example, a user may be using their right index finger to scroll through a list and may be using their left index finger to control a zoom factor. Although the two fingers may both be producing events, the events are unrelated. A multiple hover point gesture involves coordinated action by two or more objects (e.g., fingers). Thus, the second logic 1734 may identify movements that happen within a gesture time window and then determine whether the movements are related. For example, the second logic 1734 may determine whether the objects are moving on intersecting paths, whether the objects are moving on diverging paths that would intersect if traveled in the opposite direction, whether the objects are moving in a curved path around a common axis or region, or other relationship. When relationships are discovered, the second logic 1734 may detect the multiple hover point gesture.
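By way of illustration only, deciding whether two movements inside a gesture time window are related (as opposed to, say, one finger scrolling while another zooms) could be done by correlating their displacement vectors; the cosine test and its threshold below are assumptions.

```python
from typing import Tuple

Vec2 = Tuple[float, float]

def _cosine(a: Vec2, b: Vec2) -> float:
    dot = a[0] * b[0] + a[1] * b[1]
    mag = (a[0] ** 2 + a[1] ** 2) ** 0.5 * (b[0] ** 2 + b[1] ** 2) ** 0.5
    return dot / mag if mag else 0.0

def movements_related(disp_a: Vec2, disp_b: Vec2, threshold: float = 0.9) -> bool:
    """Treat two displacements inside the gesture time window as related when they point
    the same way (e.g., a coordinated drag) or directly oppose each other (e.g., objects
    converging for a gather or diverging for a spread)."""
    return abs(_cosine(disp_a, disp_b)) >= threshold

# Coordinated: two fingers converging along the same line -> related
movements_related((0.02, 0.0), (-0.02, 0.0))    # True
# Uncoordinated: one finger scrolling down while another moves right -> unrelated
movements_related((0.0, -0.02), (0.02, 0.0))    # False
```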
Apparatus 1700 may include a third logic 1736 that is configured to generate a control event associated with the multiple hover point gesture. The control event may describe, for example, the gesture that was performed. Thus, the control event may be, for example, a gather event, a spread event, a crank event, a roll event, a ratchet event, a poof event, or a slingshot event. Generating the control event may include, for example, writing a value to a memory or register, producing a voltage in a line, generating an interrupt, making a procedure call through a remote procedure call portal, or other action. The control event may be applied to the apparatus 1700 as a whole, to a portion of the apparatus 1700, or to another device being managed or controlled by apparatus 1700. Thus, the control event may be configured to control the apparatus, a radio associated with the apparatus, a social media circle associated with a user of the apparatus, a transmitter associated with the apparatus, a receiver associated with the apparatus, or a process being performed by the apparatus. By way of illustration, a spread gesture may be used to control the breadth of the social circle to which a text message is to be sent. A fast wide spread gesture may send the text to the public while a slow narrow spread gesture may only send the text message to close friends.
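By way of illustration only, a generated control event might carry the gesture name plus parameters such as the speed and extent of a spread, and be dispatched to whatever target is subsequently selected for control; the names and the audience rule below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ControlEvent:
    gesture: str     # e.g., "gather", "spread", "crank", "roll", "ratchet", "poof", "slingshot"
    speed: float     # e.g., how quickly the spread was performed
    extent: float    # e.g., how wide the spread was

subscribers: Dict[str, List[Callable[[ControlEvent], None]]] = {}

def publish(event: ControlEvent) -> None:
    """Generating the control event; here it is simply dispatched to whatever subscribed to it."""
    for handler in subscribers.get(event.gesture, []):
        handler(event)

def route_text_message(event: ControlEvent) -> None:
    # A fast, wide spread posts to the public; a slow, narrow spread goes only to close friends.
    audience = "public" if event.speed > 1.0 and event.extent > 0.08 else "close friends"
    print("send text message to:", audience)

subscribers.setdefault("spread", []).append(route_text_message)
publish(ControlEvent(gesture="spread", speed=1.4, extent=0.12))   # -> public
```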
Unlike conventional systems that rely on touches or cameras, the first logic 1732, the second logic 1734, and the third logic 1736 may operate without referencing touch sensor data and without referencing camera data.
Apparatus 1700 may include a memory 1720. Memory 1720 can include non-removable memory or removable memory. Non-removable memory may include random access memory (RAM), read only memory (ROM), flash memory, a hard disk, or other memory storage technologies. Removable memory may include flash memory, or other memory storage technologies, such as “smart cards.” Memory 1720 may be configured to store user interface state information, characterization data, object data, data about the item, data about a multiple hover point gesture, data about a hover event, data about a gesture event, data associated with a state machine, or other data.
Apparatus 1700 may include a processor 1710. Processor 1710 may be, for example, a signal processor, a microprocessor, an application specific integrated circuit (ASIC), or other control and processing logic circuitry for performing tasks including signal coding, data processing, input/output processing, power control, or other functions. Processor 1710 may be configured to interact with logics 1730 that handle multiple hover point gestures.
In one embodiment, the apparatus 1700 may be a general purpose computer that has been transformed into a special purpose computer through the inclusion of the set of logics 1730. The set of logics 1730 may be configured to perform input and output. Apparatus 1700 may interact with other apparatus, processes, and services through, for example, a computer network.
Mobile device 2000 can include a controller or processor 2010 (e.g., signal processor, microprocessor, application specific integrated circuit (ASIC), or other control and processing logic circuitry) for performing tasks including signal coding, data processing, input/output processing, power control, or other functions. An operating system 2012 can control the allocation and usage of the components 2002 and support application programs 2014. The application programs 2014 can include mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), gesture handling applications, or other computing applications.
Mobile device 2000 can include memory 2020. Memory 2020 can include non-removable memory 2022 or removable memory 2024. The non-removable memory 2022 can include random access memory (RAM), read only memory (ROM), flash memory, a hard disk, or other memory storage technologies. The removable memory 2024 can include flash memory or a Subscriber Identity Module (SIM) card, which is known in GSM communication systems, or other memory storage technologies, such as “smart cards.” The memory 2020 can be used for storing data or code for running the operating system 2012 and the applications 2014. Example data can include hover point data, user interface element state, web pages, text, images, sound files, video data, or other data sets to be sent to or received from one or more network servers or other devices via one or more wired or wireless networks. The memory 2020 can store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). The identifiers can be transmitted to a network server to identify users or equipment.
The mobile device 2000 can support one or more input devices 2030 including, but not limited to, a touchscreen 2032, a hover screen 2033, a microphone 2034, a camera 2036, a physical keyboard 2038, or trackball 2040. The mobile device 2000 may also support output devices 2050 including, but not limited to, a speaker 2052 and a display 2054. Other possible input devices (not shown) include accelerometers (e.g., one dimensional, two dimensional, three dimensional). Other possible output devices (not shown) can include piezoelectric or other haptic output devices. Some devices can serve more than one input/output function. For example, touchscreen 2032 and display 2054 can be combined in a single input/output device.
The input devices 2030 can include a Natural User Interface (NUI). An NUI is an interface technology that enables a user to interact with a device in a “natural” manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and others. Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition (both on screen and adjacent to the screen), air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence. Other examples of a NUI include motion gesture detection using accelerometers/gyroscopes, facial recognition, three dimensional (3D) displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (electro-encephalogram (EEG) and related methods). Thus, in one specific example, the operating system 2012 or applications 2014 can comprise speech-recognition software as part of a voice user interface that allows a user to operate the device 2000 via voice commands. Further, the device 2000 can include input devices and software that allow for user interaction via a user's spatial gestures, such as detecting and interpreting gestures to provide input to an application. In one embodiment, the multiple hover point gesture may be recognized and handled by, for example, changing the appearance or location of an item displayed on the device 2000.
A wireless modem 2060 can be coupled to an antenna 2091. In some examples, radio frequency (RF) filters are used and the processor 2010 need not select an antenna configuration for a selected frequency band. The wireless modem 2060 can support two-way communications between the processor 2010 and external devices. The modem 2060 is shown generically and can include a cellular modem for communicating with the mobile communication network 2004 and/or other radio-based modems (e.g., Bluetooth 2064 or Wi-Fi 2062). The wireless modem 2060 may be configured for communication with one or more cellular networks, such as a Global System for Mobile Communications (GSM) network for data and voice communications within a single cellular network, between cellular networks, or between the mobile device and a public switched telephone network (PSTN). Mobile device 2000 may also communicate locally using, for example, near field communication (NFC) element 2092.
The mobile device 2000 may include at least one input/output port 2080, a power supply 2082, a satellite navigation system receiver 2084, such as a Global Positioning System (GPS) receiver, an accelerometer 2086, or a physical connector 2090, which can be a Universal Serial Bus (USB) port, IEEE 1394 (FireWire) port, RS-232 port, or other port. The illustrated components 2002 are not required or all-inclusive, as other components can be deleted or added.
Mobile device 2000 may include a multiple hover point gesture logic 2099 that is configured to provide a functionality for the mobile device 2000. For example, multiple hover point gesture logic 2099 may provide a client for interacting with a service (e.g., service 1960).
The following includes definitions of selected terms employed herein. The definitions include various examples or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Both singular and plural forms of terms may be within the definitions.
References to “one embodiment”, “an embodiment”, “one example”, and “an example” indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment, though it may.
“Computer-readable storage medium”, as used herein, refers to a medium that stores instructions or data. “Computer-readable storage medium” does not refer to propagated signals. A computer-readable storage medium may take forms, including, but not limited to, non-volatile media, and volatile media. Non-volatile media may include, for example, optical disks, magnetic disks, tapes, and other media. Volatile media may include, for example, semiconductor memories, dynamic memory, and other media. Common forms of a computer-readable storage medium may include, but are not limited to, a floppy disk, a flexible disk, a hard disk, a magnetic tape, other magnetic medium, an application specific integrated circuit (ASIC), a compact disk (CD), a random access memory (RAM), a read only memory (ROM), a memory chip or card, a memory stick, and other media from which a computer, a processor or other electronic device can read.
“Data store”, as used herein, refers to a physical or logical entity that can store data. A data store may be, for example, a database, a table, a file, a list, a queue, a heap, a memory, a register, and other physical repository. In different examples, a data store may reside in one logical or physical entity or may be distributed between two or more logical or physical entities.
“Logic”, as used herein, includes but is not limited to hardware, firmware, software in execution on a machine, or combinations of each to perform a function(s) or an action(s), or to cause a function or action from another logic, method, or system. Logic may include a software controlled microprocessor, a discrete logic (e.g., ASIC), an analog circuit, a digital circuit, a programmed logic device, a memory device containing instructions, and other physical devices. Logic may include one or more gates, combinations of gates, or other circuit components. Where multiple logical logics are described, it may be possible to incorporate the multiple logical logics into one physical logic. Similarly, where a single logical logic is described, it may be possible to distribute that single logical logic between multiple physical logics.
To the extent that the term “includes” or “including” is employed in the detailed description or the claims, it is intended to be inclusive in a manner similar to the term “comprising” as that term is interpreted when employed as a transitional word in a claim.
To the extent that the term “or” is employed in the detailed description or claims (e.g., A or B) it is intended to mean “A or B or both”. When the Applicant intends to indicate “only A or B but not both” then the term “only A or B but not both” will be employed. Thus, use of the term “or” herein is the inclusive, and not the exclusive use. See, Bryan A. Garner, A Dictionary of Modern Legal Usage 624 (2d. Ed. 1995).
Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.