The subject matter disclosed herein generally relates to electronic devices. Specifically, the present disclosure addresses a haptic device with a touch gesture interface.
Manual input devices, such as joysticks and mice, are frequently complemented by means for providing tactile sensations such that the manual input devices provide tactile feedback to their users. Contemporary tactile feedback devices generate tactile stimulation through use of moving or vibrating mechanical members. A problem that may affect such devices is that moving or vibrating mechanical members may be bulky, unreliable, or difficult to control.
Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
Example methods and systems (e.g., devices) are directed to devices, such as haptic devices (e.g., a touch input device or tactile feedback device). Examples merely typify possible variations. Unless explicitly stated otherwise, components and functions are optional and may be combined or subdivided, and operations may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
Example embodiments describe various interaction techniques that allow a user to explore, in terms of tactile feelings, the information content being displayed on a touch screen, or on a screen that is used together with a touch pad, without requiring the user to look at the screen and without requiring auditory or other output. As explained below, the interaction techniques enable adding, to the graphical user interface of a computer program or to some information content (such as a web page being displayed in a web browser or any other suitable application), a mode in which the displayed information can be explored through active hand motions that cause a digital information processing system to generate tactile sensations, wherein the generated sensory stimulation depends on the information being displayed on the screen and on the user's actions.
Touch-screen devices and touch-pad devices may interpret various user “gestures” as input elements. For example, the user may place one or more fingers on the touch surface, and then move the finger or fingers in specific patterns. In many devices, a user may perform a so-called “pinch-zoom” gesture by placing two fingers on the screen, and then moving the fingers simultaneously either towards each other or away from each other, causing the image on the screen to be zoomed out or in, respectively. Another example is a “two-finger scroll” gesture in which the user may place two fingers on a touch pad in order to move the information content that is being displayed. Yet another example is a “Select/Copy/Paste” menu that such devices may show if the user places a finger on a text field and holds it there, stationary, for a while (e.g., a so-called “long press” gesture). As a response to this “long press” gesture, a device may show a menu that allows text to be copied between text fields within an application and between applications. However, many touch-screen devices and touch-pad devices may not be able to exert high-precision tactile sensory stimulation as a response to a user touching the device.
Many computer programs that may be executed on touch-screen devices, including many web browsers, are based on an interaction method in which a sliding finger is assumed to move the information “underneath” the touch-screen. That is, when a user places their finger on the screen and moves the finger, the display contents are updated to create an illusion that there is a larger surface “under” the screen, with the screen showing only a portion of that surface, and that the finger “touches” the underlying surface and moves it. For example, when a smart phone is used to display a regular web page, the web page may be rendered into a large image of which only a small part can be shown on the screen at a time. Then, when the user moves the finger on the screen, the content of the web page is moved relative to the display so that the user can see a different part of the web page. This methodology of interaction may be referred to as a “grab-and-move” mode in which a user places her finger on the screen and slides the finger, and the information content displayed appears to move together with the finger. This “grab-and-move” method may be considered as analogous to having a sheet of paper on a smooth table. Once a person puts a finger on the paper and then moves the finger, the whole sheet of paper moves together with the finger, but the finger's location relative to the sheet of paper does not change.
Additionally, an interaction method may allow a user to “explore” the “tactile feeling” of the information being displayed on a digital display. This methodology can be referred to as a “hold-and-feel” mode, which may also be called an “explore mode.”
In one example embodiment, to enter the “hold-and-feel” mode, a specific “gesture,” such as pressing down one finger on the side of the screen, will “lock” the information content so that the display contents are no longer moved as the user slides another finger over the screen.
This “hold-and-feel” mode may be analogous to having a sheet of paper on a smooth table. Once a person puts a finger firmly somewhere on the sheet of paper (e.g., at a corner or a side of the sheet) and uses another finger to slide on top of the paper, the person may feel with the sliding finger the texture of the paper. For example, if the sheet of paper had Braille writing, a person skilled with Braille may be able to read the Braille writing with the “hold-and-feel” method while the “grab-and-move” method would not reveal the Braille text.
In various example embodiments, the “gesture” includes the user placing his thumb, or other finger, at a specific position or any of a number of specific positions on the screen (e.g., of a haptic device), and keeping that finger stationary, thereby “holding” the information content stationary with respect to the screen. Thereafter the user may move one or more other fingers over the screen, allowing the information content being displayed on the screen to be felt (e.g., via one or more haptic effects).
In a tablet computer implementation, the specific positions may include the left and right sides of the screen (e.g., about 1-2 cm from the screen border). Hence, the user may place his thumb on the left or right side of the screen, and then slide a finger of the other hand over the screen, in response to which the device may provide or exert tactile stimulation onto the finger.
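The side activation strips described above can be sketched as a simple hit test (Python; the 20 mm margin and the 160 mm screen width used in the test are illustrative assumptions, not values from the disclosure):

```python
def in_side_activation_zone(x_mm, screen_w_mm, margin_mm=20.0):
    """Return True if a touch at horizontal position x_mm lies within the
    side activation strips (about 1-2 cm from the left or right border)."""
    return x_mm <= margin_mm or x_mm >= screen_w_mm - margin_mm
```

A touch landing inside either strip would then be a candidate for the stationary "hold" touch that activates the mode.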
In a handheld (e.g., mobile phone) implementation, this specific “gesture” may be a physical button or switch, or a touch-sensitive position or set of positions on the bottom or sides of the device. This may allow the user to turn the switch, press the button, or place one of the fingers on one of the specific positions, and then slide the thumb of the same hand over the screen, as above.
In some example embodiments, the interaction method may involve the body member that is used to “hold” the information content being moved or slid to any one or more of the specific positions in some specific way. For example, the user may need to slide the thumb from the border of the screen to the specific position in order to activate the “hold-and-feel” mode.
The software controlling the device (e.g., a haptic device) may be programmed to make a distinction between the gesture to enter “hold-and-feel” and other two or multi-finger gestures. For example, the software may consider the gesture as a “hold-and-feel” activation gesture if the touch position being reported is the only one currently reported, if there has been no other touch activity for a while before this gesture, if the touch position falls within one or more pre-specified positions (e.g., as one or more sides of the screen), if there is at least a pre-defined delay from the time the first touch position is being reported to the time a second touch position is first reported, or if the second touch position lasts for at least a minimum pre-defined time, various other criteria, or any suitable combination thereof.
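The activation criteria listed above can be sketched as a small detector (Python; the zone rectangles, quiet period, and hold delay thresholds are illustrative assumptions, not values from the disclosure):

```python
class HoldAndFeelDetector:
    """Heuristic classifier for the 'hold-and-feel' activation gesture."""

    def __init__(self, zones, quiet_s=0.5, hold_delay_s=0.3):
        self.zones = zones                 # list of (x0, y0, x1, y1) rectangles
        self.quiet_s = quiet_s             # required inactivity before the gesture
        self.hold_delay_s = hold_delay_s   # min delay before a second touch
        self.last_activity = -float("inf")
        self.first_touch = None            # (x, y, t) of the candidate hold

    def _in_zone(self, x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in self.zones)

    def on_touch_down(self, x, y, t):
        """Feed touch-down events; returns True once hold-and-feel activates."""
        if self.first_touch is None:
            # Candidate hold: must fall in a pre-specified zone, after a
            # quiet period with no other touch activity.
            if self._in_zone(x, y) and t - self.last_activity >= self.quiet_s:
                self.first_touch = (x, y, t)
            self.last_activity = t
            return False
        # Second touch: require the pre-defined delay after the hold began.
        activated = t - self.first_touch[2] >= self.hold_delay_s
        self.last_activity = t
        return activated
```

A production implementation would combine further criteria from the list above (e.g., a minimum duration for the second touch) in the same manner.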
In the “hold-and-feel” mode, the device may generate tactile sensory stimulation output in a way to create an illusion of varying texture, although, in some example embodiments, no tactile sensory stimulation is generated. As shown in
The “hold-and-feel” mode can be deactivated when the user performs another gesture, such as removing the finger or thumb from the pre-defined position it has been touching while in the “hold-and-feel” mode. To avoid accidentally leaving the “hold-and-feel” mode, the software may treat a momentary release of the stationary touch position as an unintended gesture and ignore it.
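The debouncing of a momentary release can be sketched as follows (Python; the 150 ms gap threshold is an illustrative assumption):

```python
def filter_momentary_releases(events, max_gap_s=0.15):
    """Drop an ('up', t) event that is followed by a ('down', t2) within
    max_gap_s, treating the brief release of the stationary finger as
    unintended. `events` is a time-ordered list of (kind, t) tuples."""
    out = []
    i = 0
    while i < len(events):
        kind, t = events[i]
        if (kind == "up" and i + 1 < len(events)
                and events[i + 1][0] == "down"
                and events[i + 1][1] - t <= max_gap_s):
            i += 2          # skip the spurious up/down pair entirely
            continue
        out.append(events[i])
        i += 1
    return out
```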
In an alternate embodiment, the tactile stimulation that the user feels on his finger may be configured to differ from (e.g., fail to coincide with or fail to correspond to) the visual information being displayed on the screen. This may be useful in various contexts. For example, this may allow a user to find “hidden” information in a game. As an example, the tactile stimulation may “reveal” where a treasure is hidden on a map within a game, or indicate which of a number of options is more valuable than the others. This situation is illustrated
In yet another embodiment, while in the “hold-and-feel” mode, the software may interpret additional gestures. For example, if the user raises his second finger and then momentarily “taps” (e.g., touches and raises) with that finger, these actions may be interpreted as the user wanting to activate the interaction element, such as a web link, under the finger. The situation is illustrated in
In general, while in the “hold-and-feel” mode with a stationary finger touching the surface at a pre-defined position (e.g., among multiple pre-defined positions), the software may interpret any two-finger gesture as a plain or modified one-finger gesture, any three-finger gesture as a plain or modified two-finger gesture, etc. For example, while continuing to keep a thumb on the side of a screen, the user may use two fingers of the other hand to zoom and pan the information content.
Such re-interpretation of multi-finger gestures into other multi-finger gestures may be arranged so that the result allows the user to have more precise control over the information content being displayed. For example, the software may interpret a “three-finger-pinch-zoom” (e.g., a stationary thumb with two fingers pinching) as a “slow” zoom, where the zooming effect may be much smaller than in a regular pinch zoom.
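The demotion of an N-finger gesture to an (N-1)-finger gesture, with a reduced zoom scale for finer control, can be sketched as follows (Python; the zone layout and the 0.2 scale factor are illustrative assumptions):

```python
def reinterpret_gesture(touch_points, hold_zones, slow_zoom_scale=0.2):
    """While one touch rests in a pre-defined hold zone, demote an N-finger
    gesture to an (N-1)-finger gesture by removing the held point, and scale
    zoom deltas down for finer control.
    Returns (remaining_points, zoom_scale)."""
    held = [p for p in touch_points
            if any(x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                   for x0, y0, x1, y1 in hold_zones)]
    if not held:
        return touch_points, 1.0            # normal mode: gesture unchanged
    remaining = [p for p in touch_points if p not in held]
    return remaining, slow_zoom_scale
```

With a thumb held in a side zone, a three-finger pinch is thus handled as a two-finger pinch whose zoom effect is scaled down.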
If the “hold-and-feel” gesture does not involve keeping a finger or thumb stationary on the touch surface, the software may interpret the regular single and multi-touch gestures in a different way than in the “normal” or “grab-and-move” mode, thereby allowing the user to have more precise control, as mentioned above.
The gesture used to enter the “hold-and-feel” mode may be some gesture (e.g., a double tap on a screen corner). The gesture used to leave the “hold-and-feel” mode may be some corresponding gesture (e.g., double tapping the same screen corner again, or double tapping a different corner of the screen).
In the “hold-and-feel” mode, the locking feature need not completely lock the “underlying” screen contents to the user's finger. That is, instead of keeping the information content stationary while in the “hold-and-feel” mode, the information content may be moved “slowly” under the finger with some “inertia,” “slippage,” or “drag.” As an alternative, some specific gesture, such as keeping one finger stationary or two fingers sliding together, may be used to move the information content around while still not leaving the “hold-and-feel” mode.
Example embodiments may be implemented by adding a few new software components to existing touch-input and tactile-sensory-output enabled software. These components may include any one or more of the following: a component used (e.g., configured) to detect, distinguish, and interpret the gesture used to enter the “hold-and-feel” mode; a component used to generate tactile sensory output signals that depend on the information being displayed on the screen and the position of the finger or fingers on the touch input surface; a component used to modify and re-interpret touch events while in the “hold-and-feel” mode; and a component used to detect, distinguish, and interpret a gesture used to leave the “hold-and-feel” mode.
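The interplay of the four components listed above can be sketched as a minimal controller skeleton (Python; all names and callback signatures are illustrative assumptions, not from any specific platform API):

```python
class HoldAndFeelController:
    """Skeleton wiring of the four components: entry-gesture detection,
    content-dependent tactile output, touch re-interpretation while active,
    and exit-gesture detection."""

    def __init__(self, enter_detector, exit_detector, texture_map, actuator):
        self.enter_detector = enter_detector   # recognizes the entry gesture
        self.exit_detector = exit_detector     # recognizes the exit gesture
        self.texture_map = texture_map         # (x, y) -> stimulus intensity
        self.actuator = actuator               # emits tactile output signals
        self.active = False

    def on_touch(self, x, y, t):
        if not self.active:
            self.active = self.enter_detector(x, y, t)
            return
        if self.exit_detector(x, y, t):
            self.active = False
            return
        # While in hold-and-feel, the output depends on the displayed
        # content under the sliding finger.
        self.actuator(self.texture_map(x, y))
```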
For example, such functions may be implemented by modifying or overriding an onTouchEvent callback method, such as in a WebView class. As another example, in a haptic device that has or executes a web browser, the functions may be implemented by modifying or overriding appropriate callback methods. For example, in browsers based on Objective-C, it may be possible to dynamically override any Objective-C method.
Various example embodiments described herein may provide a set of usability and accessibility enhancements. For example, a haptic device may provide for more intuitive browsing by enabling the user to feel active elements under the finger without focusing on that spot on the screen. By providing a complementary dimension to the graphical information of the screen, more information can be conveyed to the user. Various example embodiments of a haptic device may be used in conjunction with a variety of different haptic effects or tactile feedback technologies capable of producing forces, vibrations, motions, or any suitable combination thereof, to a body member of a user. Some example embodiments of the haptic device may be embodied in a tactile stimulation apparatus that uses mechanical stimulation. Other example embodiments of the haptic device may be embodied in a tactile stimulation apparatus that generates electrosensory sensation to a body member, the technology of which is explained in more detail below.
Capacitive input devices may use dedicated circuitry to detect changes in the capacitive environment of sensor lines printed on the glass surface of a display screen. For this purpose, the measurement circuit may utilize a good, low-impedance ground reference. In a so-called “input-driven” configuration of the Senseg haptic system (e.g., haptic device), the potential of the input device may be pulsed up to several kilovolts against a device (e.g., tablet PC) chassis. In order to do this, both the signal and supply voltage lines of the input device may be isolated from the chassis either inductively, optically, or capacitively. The isolation may break the connection between the input device measurement electronics and the device chassis, which, in other words, may reduce the “ground mass” to a fraction thereof. This may have the effect of reducing the sensitivity of the input device significantly. Hence, it may be helpful to provide a low-impedance ground for the input device.
One way to do this involves using a low-impedance amplifier for driving the high-voltage pulses. However, because of the kilovolt-level voltages, the use of an amplifier may not be feasible. Instead, the voltage may be generated using a voltage multiplier, the output impedance of which might not be easily controllable.
The input device may scan the lines at about 200 kHz frequency, which may be significantly higher in the spectrum than the frequency content of the Senseg stimulation pulse train, which might not contain frequencies above, say, a few kHz. Thus, some example embodiments place a capacitor across the isolation, so that the impedance at low frequencies (e.g., in the haptic feedback range) may be relatively high, but the impedance at higher frequencies (e.g., in the input device measurement range) may be sufficiently low to provide a suitable ground reference.
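The frequency separation described above can be illustrated numerically. The sketch below (Python; the 1 nF bypass capacitance is an assumed, illustrative value, not from the disclosure) computes the capacitor impedance magnitude |Z| = 1/(2πfC) in the haptic band and in the scan band:

```python
import math

def cap_impedance(f_hz, c_farad):
    """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farad)

C = 1e-9                              # assumed 1 nF bypass capacitor
z_haptic = cap_impedance(2e3, C)      # ~2 kHz haptic pulse content
z_scan = cap_impedance(200e3, C)      # ~200 kHz input-device scan
# The impedance in the scan band is 100x lower than in the haptic band,
# giving the input device a usable ground reference while presenting a
# comparatively high impedance to the haptic pulses.
```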
The above-mentioned capacitance works in practice, but might not be an optimal solution in certain circumstances, because the capacitor may increase the load to the high-voltage generator output. This may result in excessive power consumption and may encourage the use of larger and more expensive capacitive and inductive components. Another issue with such a capacitive bypass may be that the ground current through the capacitor at the edges of the pulses may interfere with sensitive electronics parts, like audio amplifiers on the mother board.
Various example embodiments of a haptic device may use an active feedback arrangement to provide, for example, a low-impedance ground for the input device at high frequencies and low amplitudes. For low-frequency, high amplitude pulses, the same circuit may exhibit a relatively high-impedance load and possibly a non-linear load. Within a linear region, this circuit may exhibit frequency-dependent synthetic capacitance, which may provide a much higher difference between high-frequency and low-frequency impedances compared to a simple capacitor.
According to various example embodiments, a system (e.g., a haptic device) may be configured to use the difference in operating frequency bands between the haptic stimulus and input device scanning to adjust the impedance levels (e.g., adequately). In some example embodiments, this means using one passive bypass capacitor. Other example embodiments use the active feedback to modify the grounding impedance in a desired way. Moreover, the circuit may function non-linearly in respect to the amplitude in order to reduce (e.g., further reduce) the loading on the voltage generator.
Furthermore, although the impedance adjustment circuit may be described as an add-on circuit in parallel with the isolator, it may also be seen as an integral part of the voltage generator. Indeed, some example embodiments of the system may entirely bypass the isolation by using a passive capacitor. In such cases, the input device may not work without such a bypass.
Certain example embodiments of the system may use the active circuit to modify the impedance based on the frequency and amplitude. This may have the effect of significantly reducing the capacitive loading for the HV generator, which may make the design more compact and cost-effective, as well as reduce the power consumption.
The functioning of the passive capacitor has been discussed above. There may exist a risk that the non-linear behavior of the active circuit could cause intermittent interference to the input device. In various example embodiments, these spurious errors may be filtered out by the input device.
The tactile stimulation apparatus 150 may be in the form of a tactile display device that is capable of displaying graphics as well as creating a sensation of touch to the body member 120.
In addition to displaying graphics, the touch screen panel 160 may create a sensation of touch or pressure to the body member 120. The creation of the touch sensation to the body member 120 may involve the generation of one or more high voltages, which may possibly result in an electrical shock to the body member 120. To possibly prevent or suppress such an electrical shock, a region of the touch screen panel 160 may comprise a semiconducting material that may limit a flow of current to the body member 120. Additionally, the semiconducting material may also be used to reduce the thickness of the touch screen panel 160, as described by way of examples herein. In addition to the smart phone depicted in
The insulation region 252 is an area, section, or portion of the composite section 250 that comprises (e.g., includes or contains) one or more insulation materials. An insulator is a material that does not conduct electricity or is a material having such low conductivity that the flow of electricity through the material is negligible. Examples of insulation materials include glass, polyethylene, wood, rubber-like polymers, polyvinyl chloride, silicone, Teflon, ceramics, and other insulation materials.
The semiconducting region 254 is an area, section, or portion of the composite section 250 that comprises one or more semiconducting materials. A semiconductor is a material that has an electrical conductivity between that of a conductor and an insulator. Accordingly, the semiconducting region 254 is a region that is neither a perfect conductor nor a perfect insulator. The electrical conductivity of the semiconducting region 254 may generally be in the range of 10³ Siemens/cm (S/cm) to 10⁻⁸ S/cm. However, rather than defining the limits of resistance of the semiconducting region 254, it can be useful to present dimensioning guidelines. In one embodiment, the surface resistance of the semiconducting region 254 may be such that the semiconducting region 254 can be charged in a reasonable time to a sufficient voltage for creating an electrosensory sensation (e.g., a sensation of apparent vibration) to the body member 120. In some applications, such a reasonable charging time is less than 500 milliseconds, where, in one example, the charging time varies between 0.1 and 500 milliseconds. It should be appreciated that charging times of less than 200 milliseconds may provide quick feedback to the user. The surface resistance of the semiconducting region 254 may be a function of its surface area. The larger the surface, the smaller the surface resistance may be, if the charging time is to be kept reasonable. Examples of semiconducting materials include semiconductive transparent polymers, zinc oxides, carbon nanotubes, indium tin oxide (ITO), silicon, germanium, gallium arsenide, silicon carbide, and other semiconducting materials.
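The dimensioning guideline above amounts to an RC charging-time calculation. The sketch below (Python; the surface resistance and coupling capacitance values are illustrative assumptions, not from the disclosure) estimates the time to charge to 95% of the final voltage:

```python
import math

def charge_time_s(r_ohm, c_farad, fraction=0.95):
    """Time for an RC circuit to charge to `fraction` of its final
    voltage: t = -R*C*ln(1 - fraction)."""
    return -r_ohm * c_farad * math.log(1.0 - fraction)

# Assumed, illustrative values: 10 Mohm effective surface resistance,
# 100 pF coupling capacitance.
t = charge_time_s(10e6, 100e-12)
# t is on the order of 3 ms, well under the 500 ms bound discussed above.
```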
Referring to the example embodiment shown in
The voltage amplifier 240 is driven by a signal “IN,” as generated by the voltage source 242, and this signal may result in a substantial portion of the energy content of the resulting Coulomb force to reside in a frequency range to which the Pacinian corpuscles 222 may be sensitive. For humans, this frequency range can be between 10 Hz and 1000 Hz. For example, the frequency can be between 50 Hz and 500 Hz or between 100 Hz and 300 Hz, such as about 240 Hz.
In various example embodiments, the voltage amplifier 240 and the capacitive coupling over the insulation region 252 are dimensioned such that the Pacinian corpuscles 222 or other mechanoreceptors are stimulated and an electrosensory sensation is produced. For this, the voltage amplifier 240, the voltage source 242, or any suitable combination thereof, may generate an output of several hundred volts or even several kilovolts. The alternating current driven into the body member 120 by way of capacitive coupling may have a very small magnitude which may be further reduced by using, for example, a low-frequency alternating current.
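As a rough illustration of how the drive voltage enters the dimensioning, the attractive Coulomb force across the insulation can be sketched with a parallel-plate approximation (an illustrative model; the symbols below, such as plate area A, insulator thickness d, and relative permittivity εr, are assumptions and are not taken from the disclosure):

```latex
F \;\approx\; \frac{\varepsilon_0 \varepsilon_r A\, V^2}{2 d^2},
\qquad
V(t) = V_0 \sin(\omega t)
\;\Rightarrow\;
V^2(t) = \frac{V_0^2}{2}\bigl(1 - \cos(2\omega t)\bigr)
```

Because the force varies with the square of the voltage, a sinusoidal drive at frequency f yields force content at 2f, which the dimensioning of the drive signal can take into account when targeting the Pacinian-sensitive band.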
It should be appreciated that galvanic grounding sets the human potential close to ground, and creates a strong potential difference and electric field between the composite section 250 and the body member 120. Galvanic grounding may work well if the user is properly touching the conductive ground electrode. However, in the case of a very light touch, only a very small contact area is in use, and local capacitive current may produce a spark or electric shock, which may cause irritation to the body member 120. The semiconducting region 254 may limit the flow of local current through a small area and thus to the body member 120. As a result, this limiting of the current flow may suppress or prevent electrical shocks to the body member 120, thereby possibly reducing irritation to the body member 120.
Additionally, the semiconducting region 254 may be used to reduce a thickness of the insulation region 252. In particular, a high current density electron channel may be formed when there is an electric breakdown, which is a rapid reduction in the resistance of an insulator that can lead to a spark jumping around or through the insulator (e.g., insulation region 252). However, in some situations, electron channels may be difficult to form in certain semiconducting materials because such materials may have lower charge carrier density. Hence, electric breakdown may be unlikely to occur with the use of semiconducting materials even with the application of a high electric field. As a result, the thickness of the insulation region 252 may be decreased. It should be appreciated that near the lower limit of this voltage range (e.g., several hundred volts to several kilovolts), the insulator thickness may be as thin as one atom layer or, in other examples, may be between about 0.01 mm and about 1 mm, between about 1 μm and about 2 mm, greater than about 2 mm, between about 20 μm and about 50 μm, or less than about 20 μm. As used herein, the term “about” means that the specified dimension or parameter may be varied within an acceptable manufacturing tolerance for a given application. In some embodiments, the acceptable manufacturing tolerance is ±10%. As material technology and nanotechnology develop, even thinner durable insulating sections may become available, and this may also permit a reduction of the voltages used.
It should also be appreciated that the voltage source 242 does not need to be physically coupled to the semiconducting region 254 to be able to charge the semiconducting region 254 to an electric potential. In certain example embodiments, the voltage source 242 may be proximate to the semiconducting region 254, but not physically connected. In particular, the electric field generated by the voltage source 242 may charge the semiconducting region 254 to an electric potential without the voltage source 242 being physically connected to the semiconducting region 254. This capacitive transfer of energy may also be a type of capacitive coupling and referred to as a capacitive connection.
The semiconducting region 254 depicted in
The insulation region 252 comprises a piece of insulation material, such as a sheet of glass. The semiconducting region 254 comprises a different piece of semiconducting material, such as a sheet of a semiconductive transparent polymer. The piece of insulation material that forms the insulation region 252 is physically distinct from the piece of semiconducting material that forms the semiconducting region 254. The composite section 251 is formed from adhering the piece of insulation material together with the piece of semiconducting material.
The insulation region 252 has a side or surface that is touchable by the body member 120 and an opposite side or surface. In this embodiment, a layer of a semiconducting material is spread over this opposite surface of the insulation region 252. This layer of semiconducting material forms the semiconducting region 254. It should be appreciated that the layer of the semiconducting material may be a thin layer. For example, in one embodiment, the layer may be as thin as one atom layer. In other example embodiments, the thickness of the semiconducting region 254 may be between about 1 μm and about 200 μm, greater than about 200 μm, or between about 20 μm and about 50 μm.
However, in this embodiment, the composite section 257 is not formed from two separate pieces of materials. Rather, the insulation region 252 and the semiconducting region 254 initially comprise a single piece of insulation material, and a dopant may be added to a portion of the insulation material to change the material property of the portion to a semiconducting material. Particularly, the addition of the dopant increases the conductivity of the portion of the insulation material to change its material property to that of a semiconducting material. Doping may be by way of oxidation (e.g., p-type doping) or by way of reduction (e.g., n-type doping). This doped portion forms the semiconducting region 254. Examples of such dopants include conductive polymers, which are generally classified as polymers with surface resistivity from 10¹ to 10⁷ ohms/square. Polyaniline (PANI) is an example of a conductive polymer. Other examples of dopants that may be used include carbon nanotubes, conductive carbons, carbon fibers, stainless steel fibers, gallium arsenide, sodium naphthalide, bromine, iodine, arsenic pentachloride, iron (III) chloride, and nitrosyl hexafluorophosphate (NOPF6).
Conversely, in an alternate embodiment, the composite section 257 may initially comprise a single piece of semiconducting material, and a dopant may be added to a portion of the semiconducting material to change the material property of the portion to that of an insulation material. The addition of the dopant decreases the conductivity of that portion of the semiconducting material. This doped portion forms the insulation region 252.
Although not strictly necessary, it may be beneficial to provide a grounding connection that helps to bring a user closer to a well-defined (e.g., non-floating) potential with respect to the voltage section of the tactile stimulation apparatus 301. In an embodiment, a grounding connection 350 connects a reference point REF of the voltage section to a body member 354, which is different from the body members 320A, 320B, and 320C to be stimulated. The reference point REF is at one end of the secondary winding of the transformer 304, while the drive voltage for the electrodes 306A, 306B, and 306C is obtained from the opposite end of the secondary winding. In an illustrative embodiment, the tactile stimulation apparatus 301 is a hand-held apparatus, which comprises a touch screen panel activated by one or more of the body members 320A, 320B, and 320C. The grounding connection 350 terminates at a grounding electrode 352, which may form a surface of the tactile stimulation apparatus 301.
The grounding connection 350 between the reference point REF and the non-stimulated body member 354 may be electrically complex. In addition, hand-held apparatuses typically lack a solid reference potential with respect to the surroundings. Accordingly, the term “grounding connection” does not require a connection to a solid-earth ground. Instead, a grounding connection means any suitable connection which helps to decrease the potential difference between the reference potential of the tactile stimulation apparatus 301 and a second body member (e.g., body member 354) distinct from the one or more body members to be stimulated (e.g., body members 320A, 320B, and 320C). The non-capacitive coupling 350 (e.g., galvanic coupling) between the reference point REF of the voltage section and the non-stimulated body member 354 may enhance the electrosensory sensation experienced by the stimulated body members 320A, 320B, and 320C. Conversely, an equivalent electrosensory stimulus can be achieved with a lower voltage, over a thicker insulator with use of grounding connection 350, or any suitable combination thereof.
As discussed above, the amplifiers 302 and 303 may be driven with a high-frequency signal 312, which may be modulated by a low-frequency signal 314 in the modulator 310. The frequency of the low-frequency signal 314 may be such that the Pacinian corpuscles are responsive to that frequency. According to various example embodiments, the frequency of the high-frequency signal 312 may be slightly above the hearing ability of humans, such as between 18 kHz and 25 kHz, or between 19 kHz and 22 kHz.
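The modulation scheme described above can be sketched as follows (Python; the 20 kHz carrier and 240 Hz envelope are illustrative choices within the stated ranges, and the sample rate is an assumption):

```python
import math

def drive_sample(t_s, f_carrier=20e3, f_mod=240.0, v_peak=1.0):
    """One sample of a high-frequency carrier amplitude-modulated by a
    low-frequency envelope in the Pacinian-sensitive band."""
    envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * f_mod * t_s))
    return v_peak * envelope * math.sin(2.0 * math.pi * f_carrier * t_s)

# 1 ms of the drive signal at an assumed 200 kS/s sample rate:
samples = [drive_sample(n / 200e3) for n in range(200)]
```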
The embodiment described in
In this embodiment, the individual electrodes 403 are individually controllable, wherein the controlling of one of the electrodes 403 affects its orientation and/or protrusion. The set of electrodes 404 is oriented, by way of the output signal from the controller 316, such that the set of electrodes 404 collectively forms a plane under the insulation region 402. In this example, the drive voltage (e.g., DC or AC) from the voltage amplifier 240 to the set of electrodes 404 generates an opposite-signed charge (e.g., a negative charge) of sufficient strength in the body member 120 in close proximity to the composite section. A capacitive coupling between the body member 120 and the tactile stimulation apparatus 400 is formed over the insulation region 402, which may produce an electrosensory sensation on the body member 120.
The charges of individual electrodes 403 may be adjusted and controlled by way of the controller 316. The capacitive coupling between the tactile stimulation apparatus 500 and the body member 120 may give rise to areas having charges with opposite signs 501 (e.g., positive and negative charges). Such opposing charges are mutually attractive. Hence, it is possible that Coulomb forces stimulating the Pacinian corpuscles may be generated not only between the tactile stimulation apparatus 500 and the body member 120, but also between infinitesimal areas within the body member 120 itself.
The electric charges, which are conducted from the voltage amplifier 240 to the electrodes 610a-610i by way of the switch array 317, may all have similar signs or may have different signs, as illustrated above in
The matrix of electrodes 610a-610i and the switch array 317 may provide a spatial variation of the electrosensory sensations. That is, the electrosensory sensation provided to the user may depend on the location of the user's body member (e.g., a finger) proximate to the tactile stimulation apparatus 600 having a touch screen panel with the electrodes 610a-610i. The spatially varying electrosensory sensation may, for example, provide the user with an indication of the layout of the touch-sensitive areas of the touch screen panel. Accordingly, the tactile stimulation apparatus 600 depicted in
This voltage U1 is lower than the drive voltage e from the voltage source 706. The reference potential of the tactile stimulation apparatus 700 may be floating, as will be described in more detail by way of example below, which may further decrease the electric field directed to the body member. Some embodiments aim at keeping the capacitance C1 low in comparison to that of C2; in such embodiments, capacitance C1 is at least not significantly higher than C2. Other embodiments aim at adjusting or controlling C2, for instance by coupling the reference potential of the tactile stimulation apparatus 700 back to the user.
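The relationship between the drive voltage e and the voltage U1 can be illustrated with a simple series capacitive-divider model. This model, and the component values below, are assumptions for illustration; the disclosure does not give the exact circuit:

```python
def u1(e, c1, c2):
    """Voltage across C1 when C1 and C2 form a series divider driven by e.
    A capacitor's impedance is inversely proportional to its capacitance,
    so the drive voltage divides as U1 = e * C2 / (C1 + C2)."""
    return e * c2 / (c1 + c2)

# With C1 kept low relative to C2, most of the drive voltage e appears
# across C1 (here, the coupling to the body member), yet U1 < e always.
voltage_across_c1 = u1(100.0, 10e-12, 100e-12)  # picofarad values assumed
```

Under this model, keeping C1 low in comparison to C2, as some embodiments aim to do, maximizes the share of the drive voltage available over the body-member coupling.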
Stray capacitances can be controlled by arrangements in which several electrodes are used to generate potential differences among different areas of a composite section. By way of example, this technique may be implemented by arranging a side of a touch screen panel of a hand-held device (e.g., the top side of the device) to a first electric potential, while the opposite side is arranged to a second electric potential, wherein the two different electric potentials can be the positive and negative poles of the hand-held device. Alternatively, a first surface area can be the electric ground (e.g., reference electric potential), while a second surface area is charged to a high electric potential. Moreover, within the constraints imposed by one or more insulator layers, it is possible to form minuscule areas of different electric potentials, such as electric potentials with opposite signs or widely different magnitudes. Furthermore, such areas may be small enough that a body member is simultaneously subjected to the electric fields from several areas of a surface with different potentials.
By measuring the voltage U4, it is possible to detect a change in the value of capacitance C1, the value of capacitance C2, or both. Assuming that the floating voltage source 810 is a secondary winding of a transformer, the change in one or more of the capacitances C1 and C2 may be detected on the primary side as well, for example, as a change in load impedance. Such a change in one or more of the capacitances C1 and C2 may serve as an indication of a touching or approaching body member. In some example embodiments, the tactile stimulation apparatus 800 is arranged to utilize this indication of the touching or approaching body member such that the tactile stimulation apparatus 800 uses a first (e.g., lower) voltage to detect the touching or approaching by the body member and a second (e.g., higher) voltage to provide feedback to the user. For example, such a detection of the touching by the body member using the lower voltage may trigger automatic unlocking of the tactile stimulation apparatus 800 or may activate illumination of a touch screen panel. The feedback using the higher voltage may indicate any one or more of the following: the outline of each touch-sensitive area; a detection of the touching or approaching body member by the tactile stimulation apparatus 800; the significance of (e.g., the act to be initiated by) the touch-sensitive area; or other information processed by the application program and that may be potentially useful to the user.
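The two-voltage scheme described above can be sketched as follows. The impedance-change threshold and the two voltage levels are illustrative assumptions; only the use of a lower voltage for detection and a higher voltage for feedback comes from the text:

```python
def touch_detected(load_impedance, baseline, threshold=0.05):
    """Detect a touching or approaching body member as a relative change
    in the load impedance seen on the transformer's primary side."""
    return abs(load_impedance - baseline) / baseline > threshold

def select_drive_voltage(detected, v_detect=5.0, v_feedback=400.0):
    """Use a first (lower) voltage while only sensing for touch, and a
    second (higher) voltage to provide electrosensory feedback."""
    return v_feedback if detected else v_detect

baseline_ohms = 1_000.0  # illustrative baseline load impedance
drive = select_drive_voltage(touch_detected(900.0, baseline_ohms))
```

A positive detection at the lower voltage could then also trigger the auxiliary actions mentioned above, such as unlocking the apparatus or illuminating the touch screen panel.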
The controller 6004 may individually drive one or more of the voltage sources 6008 and 6009. For example, the controller 6004 can drive the voltage source 6008 to generate a voltage V1 at a different time phase from voltage V2, which may be generated by voltage source 6009. In another example, the controller 6004 may also drive the voltage source 6008 to generate V1 at a different potential from voltage V2. The difference in potential between V1 and V2 may create a spatial wave on a surface of the semiconducting region 254. For example,
The touch screen panel 902 may include various regions of materials, such as one or more insulation regions, a conductive region, and a semiconducting region. The layout of the regions is described in more detail by way of example elsewhere herein, but the various regions may form two different electrodes. One electrode (e.g., a “touch detection electrode”) may be dedicated to detect touch by the body member 120 while another electrode (e.g., an “electrosensory sensation electrode”) may be dedicated to produce an electrosensory sensation on the body member 120. In some example embodiments, to detect touch, an application of voltage to the touch detection electrode generates an electrostatic field. A touching by the body member 120 changes this electrostatic field, and the location of the body member 120 (e.g., A1, A2, or A3) may be identified by the tactile display device 900 based on these changes.
In addition to processing touch-screen functionalities, the controller 906 may use information of the position of the body member 120 to temporally vary the intensity of the electrosensory sensation produced by the electrosensory sensation electrode on the body member 120. Although the intensity of the electrosensory sensation is varied over time, time is not an independent variable in the present embodiment. Instead, the timing of the temporal variations may be a function of the location of the body member 120 relative to the touch-sensitive areas (e.g., A1, A2 and A3). Accordingly, the tactile display device 900 depicted in
The graph 950 depicted below the touch screen panel 902 illustrates this functionality. As shown in
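The location-dependent intensity described above can be sketched as a function of finger position. The area boundaries below are illustrative assumptions; the names A1 through A3 follow the figure:

```python
# Touch-sensitive areas as (start_mm, end_mm) spans along one axis of the
# panel. The positions are assumed for illustration.
AREAS = {"A1": (10.0, 30.0), "A2": (50.0, 70.0), "A3": (90.0, 110.0)}

def stimulus_intensity(x_mm, inside=1.0, outside=0.0):
    """Intensity is a function of finger position rather than of time
    itself: the electrosensory output is raised while the body member is
    over a touch-sensitive area and lowered elsewhere."""
    for start, end in AREAS.values():
        if start <= x_mm <= end:
            return inside
    return outside
```

As the finger moves across the panel, sampling this function over time reproduces the temporal variation of the sensation, with timing determined entirely by where the finger is relative to the areas.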
To facilitate integration of a tactile stimulation apparatus with capacitive devices, such as the tactile display device 900, the region that includes the touch detection electrode or other regions may comprise a semiconducting material, which may separate the tactile stimulation regions from the touch sensitive regions. At the voltage and current levels associated with the touch sensitive regions or functionalities, the semiconducting region may function as an insulator, meaning that the semiconducting region does not hinder the operation of the capacitive device. However, at the voltage, frequency, current levels, or other spatial topologies associated with the tactile stimulation regions or associated functionalities, the semiconducting region may function as a conductor, meaning that the semiconducting region can be used as the electrode by which a current is conducted over the capacitive coupling to the body member 120, as discussed above.
In this embodiment, the insulation region 1002 and the conductive region 1004 may comprise a conventional touch screen panel. The conductive region 1004 forms an electrode (e.g., the “touch electrode” as discussed above) that functions to detect touch of the body member 120, and is different from the electrode described above that produces an electrosensory sensation on the body member 120. This conductive region 1004 may comprise metallic or transparent conductive material. Depending on the conductivity, in one example, a thickness of the conductive region 1004 may be between about 1 μm and about 200 μm. In other examples, a thickness of the conductive region 1004 may be less than about 1 μm or greater than about 200 μm.
The insulation region 1002 disposed above the conductive region 1004 may comprise a transparent insulation material, such as glass. In one example, a thickness of the insulation region 1002 may be between about 10 μm and about 2 mm. In another example, a thickness of the insulation region 1002 may be greater than about 2 mm. In yet another example, a thickness of the insulation region 1002 may be between about 0.4 mm and 0.7 mm.
To suppress electrical shocks to the body member 120 or for other functionalities, the semiconducting region 254 may be included in the touch screen panel 902. This semiconducting region 254 also forms an electrode (e.g., the “electrosensory sensation electrode” as discussed above) that functions to produce an electrosensory sensation. For example, as explained in more detail below, a voltage source (not shown) can charge the semiconducting region 254 to an electric potential to produce an electrosensory sensation on the body member 120. As a result, the embodiment of the touch screen panel 902 is configured to detect touch by the body member 120 as well as to generate an electrosensory sensation on the body member 120.
Here, the semiconducting region 254 may be disposed above the insulation region 1002 (e.g., on top of a conventional touch screen panel). Another insulation region 252 may be disposed above the semiconducting region 254. For example, a thin layer of semiconducting material, such as a semi-conductive transparent polymer, may be spread over a conventional touch screen panel, which comprises the insulation region 1002 and the conductive region 1004. Another piece of glass, which is an insulation material, may then be disposed above the layer of the semiconducting material.
In an alternative embodiment, the insulation region 1002 may be excluded from the touch screen panel 902. As depicted in
It should be appreciated that the semiconducting region 254 depicted in
The circuitry 2002, in this embodiment, includes a voltage amplifier 302, which is implemented as a current amplifier 303 followed by a voltage transformer 304. The secondary winding of the voltage transformer 304 is in, for example, a flying configuration with respect to the remainder of the tactile display device 2000. The amplifiers 302 and 303 are driven with a modulated signal whose components 312 and 314 are inputted into a modulator 310. The output of the voltage amplifier 302 is coupled to a controller 316 and, in turn, to the conductive region 1004.
In this embodiment, the semiconducting region 254 is charged by way of capacitive connection. In particular, the conductive region 1004 is charged to float at a high potential, thereby transferring or charging the semiconducting region 254 to an electric potential to create an electrosensory sensation to the body member 120.
The composite section 3004 includes a conductive region 1004, an electronics region 3002 disposed above the conductive region 1004, an insulation region 1002 disposed above the electronics region 3002, a semiconducting region 254 disposed above the insulation region 1002, and another insulation region 252 disposed above the semiconducting region 254. The electronics region 3002 includes various electronics or components of the tactile stimulation apparatus 3000, such as a liquid crystal display, input devices, or other electronics. A surface of the insulation region 252 is configured to be touched by body member 120.
The circuitry 3008, in this embodiment, includes a voltage amplifier 302, which is implemented as a current amplifier 303 followed by a voltage transformer 304. The secondary winding of the voltage transformer 304 is in, for example, a flying configuration with respect to the remainder of the tactile stimulation apparatus 3000. The amplifiers 302 and 303 are driven with a modulated signal whose components 312 and 314 are inputted into a modulator 310. The output of the voltage amplifier 302 is coupled to a controller 316 and, in turn, to the conductive region 1004. In the depicted embodiment, a grounding connection 350 is included in the tactile stimulation apparatus 3000, and this grounding connection 350 helps to bring a user closer to a well-defined (e.g., non-floating) potential with respect to the voltage section of the tactile stimulation apparatus 3000. The grounding connection 350 connects a reference point REF of the voltage section to a body member 354, which is different from the body member 120 to be stimulated. The reference point REF is at one end of the secondary winding of the transformer 304, while the drive voltage for the composite section 3004, which comprises an electrode, is obtained from the opposite end of the secondary winding. In another embodiment, a resistor (not shown) can be added between the composite section 3004 and the circuitry 3008 or between the composite section 3006 and the circuitry 3008 to cause a phase difference.
In an illustrative embodiment, the tactile stimulation apparatus 3000 is a hand-held apparatus, which comprises a touch screen panel activated by body member 120. The grounding connection 350 terminates at the composite section 3006, which serves as a grounding electrode and can form a surface of the tactile stimulation apparatus 3000. The composite section 3006 can be comprised of different materials. In one embodiment, as depicted in
Particularly, the semiconducting region 254′ may have a surface that is configured to be touched by body member 354. The conductive region 1004′ is connected to a voltage source at the reference point REF. In another embodiment, the composite section 3006 may comprise two semiconducting regions (not shown) and an insulation region (not shown) disposed between the two semiconducting regions. Here, one semiconducting region has a surface that is configured to be touched by the body member 354 while the other semiconducting region is connected to the voltage source at, for example, the reference point REF depicted in
The various embodiments of the composite sections 3006 discussed above may further suppress or prevent electrical shocks to the body member 354 because a semiconducting region of the different composite sections 3006 (e.g., semiconducting region 254′) may possibly limit the amount of current flow. Furthermore, the insulation region 252′ insulates the conductive region 1004′ or another semiconducting region against galvanic contact by the body member 354. The use of the various composite sections 3006 discussed in
In this embodiment, the circuitry 4006 also includes a voltage amplifier 302, which is implemented as a current amplifier 303 followed by a voltage transformer 304. The secondary winding of the voltage transformer 304 is in, for example, a flying configuration with respect to the remainder of the tactile stimulation apparatus 4000. The amplifiers 302 and 303 are driven with a modulated signal whose components 312 and 314 are inputted into a modulator 310. The output of the voltage amplifier 302 is coupled to a controller 316, and unlike the circuitries discussed above, this controller 316 is connected to the grounding connection 350. In this alternative embodiment, the grounding connection 350 connects a reference point REF of the voltage section to a body member 354, which is different from the body member 120 to be stimulated. The reference point REF is at one end of the secondary winding of the transformer 304, while the drive voltage for the composite section 4003, which comprises an electrode, is obtained from the opposite end of the secondary winding, as depicted in
As depicted in
Here, the outermost semiconducting regions 254 or portions of semiconducting regions 254 outside of the grooves are connected to the controller 316, thereby creating a galvanic coupling between the reference point REF and the non-stimulated body member 354. The portions of the semiconducting regions 254′ within the grooves are capacitively coupled to ground (e.g., ground region 4002) behind the insulation region 252. It should be appreciated that the outermost semiconducting regions 254 are also capacitively coupled to ground, but because they are further away from the ground when compared to the semiconducting regions 254′ within the grooves, the capacitive coupling of the semiconducting regions 254′ to ground may be stronger than the capacitive coupling of the semiconducting regions 254 to ground.
The use of the various composite sections 4004 discussed in
In this embodiment, the circuitry 5006 also includes a voltage amplifier 302, which is implemented as a current amplifier 303 followed by a voltage transformer 304. The secondary winding of the voltage transformer 304 is in, for example, a flying configuration with respect to the remainder of the tactile stimulation apparatus 5000. The amplifiers 302 and 303 are driven with a modulated signal whose components 312 and 314 are inputted into a modulator 310. The output of the voltage amplifier 302 is coupled to a controller 316, which is connected to the grounding connection 350. In this alternative embodiment, the grounding connection 350 connects a reference point REF of the voltage section to a body member 354, which is different from the body member 120 to be stimulated. The reference point REF is at one end of the secondary winding of the transformer 304, while the drive voltage for the composite section 5003, which comprises an electrode, is obtained from the opposite end of the secondary winding, as depicted in
As depicted in
In this embodiment, the semiconducting regions 254 are connected to the controller 316. When the circuitry 5006 applies voltage to composite section 5003, the insulation region 252′ may vibrate because the voltage shrinks the insulation region 252′. Without the voltage, the insulation region 252′ returns to its original shape. When the voltage is pulsating, the shrinkage and expansion cause the insulation region 252′ to vibrate. This vibration of the insulation region 252′ may enhance the sensation of touch, pressure, or vibration from the body member 120 touching the composite section 5003 of the tactile stimulation apparatus 5000. It should be noted that vibration may also be caused by body member 120 having a different polarity. Here, if a sufficiently high voltage is applied to the semiconducting region 254, then the person with body members 120 and 354 acts as a ground potential, thereby letting an electromagnetic field generated by the voltage vibrate the insulation region 252′.
In this example, the voltage source 242 is configured to charge the semiconducting region 254, which functions as an electrode, to an electric potential, thereby producing an electrosensory sensation on the body member 120. The voltage source 242 applies this charge by way of the connector 1102 that physically couples the semiconducting region 254 to the voltage source 242. In this embodiment, the connector 1102 also comprises a semiconducting material, which may suppress or prevent electrical shocks to the body member 120 in the event of a breakdown of both the semiconducting region 254 and the insulation region 252, thereby exposing the connector 1102.
For example, as depicted in
Reference numeral 1248 denotes a presence-detection logic stored within the memory 1206. Execution of the presence-detection logic 1248 by the microprocessor 1204 may cause the detection of the presence or absence of the body member 120 at the predefined area 1246. A visual cue, such as a name of the function or activity associated with the predefined area 1246, may be displayed by the display region 1222, as part of the displayed information 1226, so as to help the user find the predefined area 1246.
Additionally stored within the memory 1206 may be stimulus-variation logic 1268. Input information to the stimulus-variation logic 1268 may include information on the presence or absence of the body member 120 at the predefined area 1246. Based on this presence information, the stimulus-variation logic 1268 may have the effect that the microprocessor 1204 instructs the tactile output controller 1260 to vary the electrical input to the tactile output region 1242, thus varying the electrosensory sensations caused to the body member 120. Thus, a user may detect the presence or absence of the displayed information at the predefined area 1246 merely by way of tactile information (or electrosensory sensation), that is, without requiring visual cues.
Examples of conditions for key selection may include: a time delay between a previous touch (e.g., seek or stay) and a previous tap (e.g., a delay greater than 200 msec), which may allow for traditional tap typing (e.g., press typing) supported by existing virtual keyboards; and
a tap duration limit under 500 msec, which may have the effect of excluding long touches (e.g., seek gestures) from being interpreted as key activations.
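The two key-selection conditions above can be sketched as a simple classifier. The 200 msec and 500 msec values come from the examples above; the function and parameter names are illustrative:

```python
def is_key_activation(tap_duration_ms, delay_since_last_touch_ms,
                      min_delay_ms=200, max_tap_ms=500):
    """Treat a touch as a key activation only when (a) enough time has
    passed since the previous seek or stay touch, and (b) the touch is
    short enough not to be a seek gesture itself."""
    return (delay_since_last_touch_ms > min_delay_ms
            and tap_duration_ms < max_tap_ms)
```

For example, a quick 80 msec tap arriving 300 msec after the last seek touch would activate a key, whereas a 700 msec touch would be treated as a seek gesture rather than a key activation.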
According to certain example embodiments, a long press may be supported or implemented by using a separate scheme or configuration for the virtual keyboard. Various example embodiments may support one or more variants of such a separate scheme or configuration.
For example, a system (e.g., haptic device) may support “multi-touch and seek.” In multi-touch input cases, two “anchor” fingers may rest on screen, but only the finger that is moving activates the seek feedback. This behavior may be easily learnable, for example, in cases where moving two or more fingers prevents the seek feedback.
As another example, a system may support “multi-touch key selection,” in which tapping a key activates it. The finger that taps selects the key below it, and resting fingers do not prevent the tapping (e.g., the detection or recognition of the tap).
As another example, a single (e.g., first) finger or fingertip may perform the key-location seeking and then stopping on the desired key (e.g., thereby selecting the key for potential activation). Then, another (e.g., second) finger or fingertip may tap (e.g., anywhere on the screen) to trigger (e.g., activate) the key on which the previous (e.g., first) finger stopped.
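The two-finger seek-and-tap interaction above can be sketched as follows. The touch representation is an assumption for illustration; a real implementation would draw on the platform's multi-touch event stream:

```python
def key_to_activate(touches):
    """Two-finger selection: a first finger rests on (selects) a key,
    and a tap by another finger anywhere on screen activates that key.
    `touches` maps a finger id to (key_under_finger, is_tap), where
    key_under_finger is None when no key lies beneath that finger."""
    resting = [key for key, is_tap in touches.values() if not is_tap and key]
    tapped = any(is_tap for _, is_tap in touches.values())
    if tapped and len(resting) == 1:
        return resting[0]
    return None
```

Here, a finger stopped on the “G” key plus a tap elsewhere would activate “G,” while the resting finger alone activates nothing.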
As another example, the display content (e.g., a webpage in a browser window) may contain textured elements (e.g., an element of the display content that has a feelable texture, such as textured links). The texturing of these elements within the display content may be accomplished using haptic technology discussed elsewhere herein. Accordingly, the user may locate one or more of these textured elements (e.g., links) using seek-mode finger movements. The texturing of these elements may help the user locate one or more elements on the screen, for at least the reason that even small elements can be located based on the tactile sensations in the finger, even when the finger fully or partially covers the element and obscures the element from the user's vision. When the user has located an element that the user would like to select, the user does not have to lift her finger to select the element, as lifting the finger could indicate another seek action (e.g., in seek-mode). Rather, the user may select the element by tapping and lifting with another finger on the screen.
As a further example, a system may support “long press” behavior by using a slightly separate scheme. For instance, a long press (e.g., on a virtual key, such as a space bar) may activate a menu in the form of a slider, a wheel, a list, or any suitable combination or portion thereof. The long press menu may be visually very clearly indicated and may provide haptic feedback that is indicative or characteristic of “long press activation.” The long press menu may display one or more options that correspond to the “long press” behavior. Selecting one of the displayed “long press” options may involve sliding the finger (e.g., fingertip) to the option and releasing it. For instance, the user may slide a finger to the intended “long press” menu location (e.g., with a characteristic “long press seek feel” feedback), and within a “long press” time constant (e.g., 1 sec). The user may release and tap to select a particular option. If “long press” behavior is not wanted, the user may simply keep the finger stable and down inside the “long press” menu area for more than a threshold period of time (e.g., 1 sec). In response, the “long press” menu may disappear and not be available unless again activated (e.g., by another “long press”).
Examples of conditions for “long press” behavior include:
menu activation, in which a stable touch to a virtual key for more than a “menu activation time” (e.g., 1 sec.) may cause a long press menu to appear on the screen (see
In some example embodiments, a user may leave the long press state or context by continuing to keep a finger in the long press menu longer than a long press “menu key stick time” (e.g., 1 sec.). In response, the menu may disappear. An additional “menu disappeared delay” (e.g., 0.5 sec.) after disappearance of a long press menu may be implemented before activation of the original keyboard below the long press menu. This may have the effect of reducing the risk of accidentally tapping a key for limit cases of the system being too slow with long press selection.
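The long-press timing rules above can be sketched as a small state function. The 1 sec and 0.5 sec values come from the examples above; the state names and function signature are illustrative:

```python
MENU_ACTIVATION_S = 1.0         # stable touch before the menu appears
MENU_KEY_STICK_S = 1.0          # holding still this long dismisses the menu
MENU_DISAPPEARED_DELAY_S = 0.5  # keys below stay inactive after dismissal

def long_press_state(stable_touch_s, held_in_menu_s):
    """State of the long-press interaction for a finger held still on a
    virtual key and then, if the menu appeared, inside the menu area."""
    if stable_touch_s < MENU_ACTIVATION_S:
        return "no menu"
    if held_in_menu_s > MENU_KEY_STICK_S:
        # The keyboard below is re-enabled MENU_DISAPPEARED_DELAY_S later,
        # reducing the risk of accidentally tapping a key.
        return "menu dismissed"
    return "menu shown"
```

For example, a finger held stable for 1.2 sec shows the menu; keeping it still inside the menu for a further 1.5 sec dismisses it.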
According to various example embodiments, one or more of these touch and tap keyboard features may improve usability in one or more virtual keyboards. Many users may dislike touch screens because they feel that touches sometimes trigger accidental keys or controls. Some users may find themselves consciously attempting to “not touch the screen,” lest an accidental key or control be triggered. One or more of the above-mentioned touch and tap keyboard features may have the effect of reducing or eliminating false touches to the screen. Users may learn that, if an accidental touch happens, they may simply relax and keep the finger or hand on the screen for a while (e.g., 0.5 to 1 sec). Releasing the finger or hand then performs no action.
In example embodiments, a system (e.g., haptic device) supports or provides “context-sensitive haptic browsing” and one or more user interface elements in support thereof. Users of touch-sensitive devices (e.g., tablets or smart phones) may experience difficulties in touching (e.g., tapping on, or sliding to) a particular location on the screen (e.g., a link, a menu item, or an insertion point for editing text). For example, a user may take multiple attempts to make a selection, or wrong selections may be common. As another example, different control modes may be accidentally mixed, such as a mode for scrolling the page (e.g., “grab-and-move”) and a mode for selecting a link (e.g., “touch and lift”). This may result in a user being left with a feeling of not being fully in control of the device, or a feeling that there is some inherent inaccuracy or malfunction in the device.
Certain example embodiments of the system are capable of texture generation via haptics. For example, a user may be able to slide one or more fingers on the screen and feel the locations of links, menu items, or other elements on the screen. To support this, a user interface being presented (e.g., displayed) on the screen may include one or more annotations for haptic texture generation. For example, the system (e.g., haptic device) may interpret an explicit touch (e.g., on the edge of the screen) as a command to freeze scrolling and enable texture generation based on the one or more annotations, which may thereby enable a haptic feel for the user interface in this context. Some example embodiments of the system implement more intuitive mode switching, for example, by using an adapted means for exploring and selecting various page content (e.g., links) and for scrolling the page.
Some user interfaces of computer programs executable on touch-screen devices, including several web browsers, may be designed with a notion that a sliding finger should be assumed to move the information “underneath” it (e.g., drag information from one location to another location on the screen). That is, when a user places a finger on the screen and moves the finger, the contents of the screen are updated in a manner that creates an illusion of a larger surface “under” the screen, with the screen showing only a portion of this larger surface, and that the finger “touches” this larger underlying surface and moves it with respect to the screen. For example, a smart phone may be used to display a regular web page, and the web page may be rendered into a large image of which only a small portion can be shown at a time on the screen. Then, when the user moves a finger on the screen, the contents of the page are moved relative to the screen, so that the user can bring different parts of the page into view on the screen. This notion of interaction may be referred to as a “grab-and-move” mode, where the information content being displayed appears to move together with the finger (e.g., as if dragged by the finger). The “grab-and-move” mode may be contrasted with a “touch and lift” mode that triggers the link to be activated (e.g., statically).
According to certain example embodiments, a system (e.g., haptic device) is configured to implement “context-sensitive grab” of displayed content (e.g., instead of the “grab-and-move” mode or as a modification of it). In such example embodiments, when the user places a finger (e.g., fingertip) near a link or other active element that is displayed on the screen, the user can slide the finger and feel a texture type of sensation that indicates the finger is actually on top of the link or other active element. Depending on the example embodiment involved, the selection of the link or other active element may then be performed by lifting the finger, or by holding the finger still on top of the element and tapping with another finger elsewhere on the screen. In some example embodiments, if the user is moving the finger for a longer distance (e.g., to a region of the content without nearby links or other active elements), the system automatically implements (e.g., switches to or reverts to) a normal “grab-and-move” configuration in which the display content moves together with the finger (e.g., as a background image), and the lift does not select any links or other active elements in the content. This feature may allow a user to explore page content by feeling the page content with small movements (e.g., small back-and-forth movements), while larger motions cause the page content to be grabbed and follow the finger as a scroll gesture. This feature may also provide the benefit of enabling haptic feedback for users, without requiring the users to learn new gestures or finger movements.
The system may implement “context-sensitive grab” by implementing a threshold distance for triggering this mode. For example, the system may be configured so that a finger motion less than 10 millimeters in length does not trigger scrolling (e.g., “grab-and-move”), but longer finger motions do trigger scrolling. Accordingly, the system may implement an exploration mode (e.g., “explore mode” or “hold-and-feel” mode) where the distance from the last (e.g., previous) touch position at which the finger stopped is less than 10 millimeters. This last stop position may be determined as the last touch position detected, or the last touch position that has been stable (e.g., with less than two millimeters of motion in any direction) for one second or longer. Moreover, the system may implement a scrolling mode (e.g., “scroll mode”) when the explore mode condition is not met (e.g., the finger motion is 10 millimeters or greater). The scrolling mode may be exited when the user lifts the finger.
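The threshold logic above can be sketched as follows. The 10 mm, 2 mm, and 1 sec values come from the examples above; the function and constant names are illustrative:

```python
SCROLL_THRESHOLD_MM = 10.0  # motion beyond this triggers grab-and-move
STABLE_RADIUS_MM = 2.0      # motion within this radius counts as "stable"
STABLE_TIME_S = 1.0         # stable this long resets the last stop position

def touch_mode(distance_from_last_stop_mm):
    """Context-sensitive grab: small motions around the last stop position
    keep the finger in explore (hold-and-feel) mode, while a longer motion
    switches to scroll (grab-and-move) mode until the finger lifts."""
    if distance_from_last_stop_mm < SCROLL_THRESHOLD_MM:
        return "explore"
    return "scroll"
```

In a full implementation, the last stop position would be updated whenever the finger stays within STABLE_RADIUS_MM of one point for at least STABLE_TIME_S, as described above.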
In accordance with various example embodiments, the system may also provide or support a degree of “inertia” in the initial movement. For example, with a finger move of a small distance, the display content (e.g., as a background image) may slide with inertia and reach full lock with the finger after a longer finger movement. The amount of inertia and the threshold distances of finger movement may be tuned according to individual implementations in order to give an optimum or most intuitive user experience.
The system may implement an “inertial page” by implementing a similar threshold distance for triggering this mode. For example, the system may be configured so that a finger motion of less than five millimeters does not trigger scrolling, but longer finger motions do trigger scrolling. Accordingly, the system may implement an exploration mode (e.g., “explore mode” or “hold-and-feel” mode) when the distance from the last touch position at which the finger stopped is less than five millimeters. As noted above, this last stop position may be determined as the last touch position detected, or the last touch position that has been stable (e.g., with less than two millimeters of motion in any direction) for one second or longer. Moreover, the system may implement a scrolling mode (e.g., “scroll mode”) when the explore mode condition is not met (e.g., the finger motion is five millimeters or longer). Upon entering the scrolling mode, the “grab” (e.g., “finger grab”) of the page content may or may not be immediate, and the page content may be presented with inertia. Accordingly, the finger touch may “grab” the page content with “friction” and begin smoothly moving the page content with inertia. As noted above, the scrolling mode may be exited when the user lifts the finger.
In some example embodiments, the “inertial page” may be implemented by setting a virtual mass for the page content (e.g., 100 grams). The system may determine (e.g., calculate) that a finger contact (e.g., a finger touch) is moving this virtual mass through a virtual friction force that depends on the speed of the finger contact (e.g., relative motion between the finger and the moving page content). Accordingly, the virtual force that moves the mass of the page content may be expressed as:
F=sign(v_rel)*F_nom,
where “sign(v_rel)” is the sign of v_rel (e.g., positive or negative), “v_rel” is the relative velocity (e.g., v_finger−v_page), and “F_nom” is a nominal virtual friction force that moves the page mass.
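One way the inertial-page model above could be integrated over time is sketched below, assuming the 100-gram virtual mass and the 20-millinewton nominal force derived in the next paragraph. The time step, clamping behavior, and function signature are assumptions introduced for illustration.

```python
import math

PAGE_MASS_KG = 0.1  # virtual mass of the page content (100 grams)
F_NOM_N = 0.02      # nominal virtual friction force (20 mN)

def step_page_velocity(v_page, v_finger, dt):
    """Advance the page velocity (m/s) by one time step of dt seconds,
    applying F = sign(v_rel) * F_nom to the virtual page mass."""
    v_rel = v_finger - v_page
    if v_rel == 0.0:
        return v_page  # page is stuck to the finger; no slip, no force
    force = math.copysign(F_NOM_N, v_rel)  # F = sign(v_rel) * F_nom
    v_next = v_page + (force / PAGE_MASS_KG) * dt
    # Clamp so the friction force never pushes the page past the finger.
    if (v_finger - v_next) * v_rel < 0.0:
        v_next = v_finger
    return v_next
```

Starting from rest, repeatedly stepping this function toward a finger moving at 10 cm/s brings the page up to the finger's speed in about half a second, matching the nominal grab time discussed below.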
According to various example embodiments, a nominal finger slide speed may be 10 centimeters per second, and the page (e.g., page content) may accelerate to grip (e.g., grab) the finger in about 0.5 seconds. Hence, the acceleration of the page may be 10 cm/s per 0.5 s = 20 centimeters per second squared, and the friction force may be expressed as F = m*a = 0.1 kg × 0.2 m/s² = 20 mN. An average page slip speed between the finger and the page may be 5 cm/s, and a grab slip lag in page motion compared to finger motion may be 5 cm/s × 0.5 s = 2.5 cm.
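The worked numbers above can be checked directly in SI units (the values are taken from the text; variable names are illustrative):

```python
m = 0.1            # kg: virtual page mass (100 grams)
v_nom = 0.10       # m/s: nominal finger slide speed (10 cm/s)
t_grab = 0.5       # s: time for the page to accelerate and grip the finger

a = v_nom / t_grab         # acceleration: 0.2 m/s^2 (20 cm/s^2)
F = m * a                  # nominal friction force: 0.02 N = 20 mN
v_slip_avg = v_nom / 2     # average slip speed: 0.05 m/s (5 cm/s)
lag = v_slip_avg * t_grab  # grab slip lag: 0.025 m = 2.5 cm
```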
With a faster finger slide speed, the friction force may be larger, and the system may accordingly provide a faster grab experience for fast gestures. For example, the friction force may follow the following behavior:
If v_finger>v_nom, then F=v_finger/v_nom*F_nom,
where “v_nom” is 10 cm/s. Other non-linear behavior may be implemented for the friction force. In some example embodiments, the page may even stick immediately to the finger after the system detects a threshold (e.g., maximum) slide speed. Also, according to various example embodiments, the friction force may follow different behavior for stopping (e.g., decelerating) the page. For example, the friction force for deceleration may be stronger than the friction force for accelerating the page.
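The speed-dependent friction behavior above might be sketched as follows, using the nominal values from the text; the function name and the handling of negative velocities are assumptions.

```python
V_NOM_MPS = 0.10  # m/s: nominal slide speed (10 cm/s)
F_NOM_N = 0.02    # N: nominal friction force (20 mN)

def friction_force(v_finger_mps):
    """Return the friction force magnitude: nominal at or below v_nom,
    linearly larger above it (faster grab for fast gestures)."""
    speed = abs(v_finger_mps)
    if speed > V_NOM_MPS:
        return (speed / V_NOM_MPS) * F_NOM_N  # F = v_finger/v_nom * F_nom
    return F_NOM_N
```

A non-linear curve, an immediate-stick cutoff above a threshold speed, or a stronger deceleration branch could be substituted here, per the variations described above.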
In “inertial page” mode, the system may implement haptic textures based on (e.g., in proportion to) the relative slide speed between the finger and the page. For example, if the finger is stuck to the page (e.g., moving with the same speed and direction as the page), no texture is generated by the system.
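A hypothetical texture-rendering rule consistent with the paragraph above: texture output proportional to the relative slide speed, and zero when the finger moves with the page. The gain parameter is an assumed tuning knob, not from the disclosure.

```python
def texture_amplitude(v_finger_mps, v_page_mps, gain=1.0):
    """Texture drive level proportional to finger-page slip speed;
    zero when the finger is stuck to the page."""
    return gain * abs(v_finger_mps - v_page_mps)
```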
In accordance with certain example embodiments, the system may implement a “flick for scroll” mode in which no scrolling of page content occurs until the moving finger of the user is lifted (e.g., flicked or flung) and finger speed during the lift is determined. Then, the page content may scroll at a speed that depends on the finger speed and in a direction that depends on the finger direction at the lifting of the finger. Thus, the system may enable a static exploration mode (e.g., “touch and lift”) to be the default, and various elements of page content (e.g., links) may be always tactilely perceivable (e.g., “feelable”) with texture generation.
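A minimal sketch of the “flick for scroll” mapping described above: nothing scrolls until the finger lifts, and the scroll velocity then scales with the lift speed and follows the lift direction. The function signature, unit-vector direction convention, and gain are assumptions for illustration.

```python
def flick_scroll_velocity(lift_speed, direction, gain=1.0):
    """Map the finger's speed at lift and its unit direction (dx, dy)
    to a scroll velocity vector for the page content."""
    dx, dy = direction
    return (gain * lift_speed * dx, gain * lift_speed * dy)
```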
With texture generation, haptified browsing (e.g., of web pages or other page content) may be implemented without the user learning any new gestures. The system may implement an exploration mode with a static or mostly static page that has haptic feelable elements, and the page may be scrolled with a single slide gesture. Moreover, within a user interface, texture generation may be implemented by the system to enable haptified list browsing or haptified movement of any one or more control elements in an application (e.g., in the user interface). On a virtual keyboard, one or more keys (e.g., home keys, such as “f” and “j”) may be haptified for quick recognition by touch.
Some example embodiments of the system (e.g., haptic device) configured for texture generation may be described as a “feelscreen” with “feel scrolling.” As noted above, as a user scrolls the contents of a user interface (e.g., a web page, an email list, or an array of application icons operable to launch applications), the user may feel the contents as informative and pleasant textures (e.g., crisp edges) as his finger moves across the screen of the system. In certain example embodiments, as the user's finger moves across the screen, the screen image moves with the finger, but at a slightly slower speed, thus allowing the user to feel the area of the screen over which the finger is crossing. Hence, when a user slides a finger 10 cm on the screen, in either a horizontal direction or a vertical direction, the underlying content of a virtual page (e.g., a virtual page larger than the screen) under the finger may move only 5 cm. Moreover, smoother operation of the page may be attained by taking into account more complicated finger movements and accelerations. For example, a flicking motion may cause the underlying page to roll or scroll, even after the finger is no longer touching the screen.
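The partial-follow behavior above can be sketched with the 2:1 ratio from the example (10 cm of finger travel moving the page 5 cm); the constant name and fixed-ratio simplification are assumptions, since a real implementation would also account for accelerations and flicks.

```python
FOLLOW_RATIO = 0.5  # page displacement per unit finger displacement

def page_displacement_cm(finger_displacement_cm):
    """Feel-scrolling: the page incompletely follows the finger, so the
    finger slides across (and can feel) the content it crosses."""
    return FOLLOW_RATIO * finger_displacement_cm
```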
Accordingly, “feel scrolling” may improve the accuracy and usability of the user interface. For example, “feel scrolling” may enable the user to sense (e.g., virtually “see”) with his fingertip what is under his finger. Hence, the user may obtain information on whether his finger is on top of a link (e.g., an image that is a link), and accordingly, the user may avoid an accidental selection of that link. As another example, it may enable localization of a small object (e.g., element) and enable selection of the small object without lifting the finger (e.g., by detecting a tap performed with another finger). In some example embodiments, the “touch and tap keyboard” discussed above may be implemented in conjunction with texture generation. The increased tactility of a device that implements “feel scrolling” may increase the personal connection that a user has with the device. The improved usability of the device may increase user satisfaction with the device. Content presented on the device may be perceived as being more engaging compared to content presented on a device without “feel scrolling.” Thus, “feel scrolling” may provide a new sensory channel for various applications that may be executed by the device (e.g., via a software development kit for games or other applications).
As shown in
The touch sensor 1432 and the haptic display 1434 may form all or part of a haptic touch-sensitive display 1422. The touch sensor 1432 is configured (e.g., through its constituent hardware, its embedded software, or both) to detect contact by the body member 120. In particular, the touch sensor 1432 may provide other components of the electronic device 1400 with contact information that describes a contact (e.g., a touch or a movement) made by the body member 120 (e.g., on the haptic touch-sensitive display 1422). For example, the contact information may be or include a contact location and time 1442 that describes a location (e.g., on the touch sensor 1432, on the haptic touch-sensitive display 1422, or both) and a time at which the contact by the body member 120 was detected by the touch sensor 1432.
A processor 1424 may be included in the device 1400, and the processor 1424 may be configured to access information from other components of the electronic device 1400. As shown in
A haptic processor 1428 may be included in the electronic device 1400, and the haptic processor 1428 may be configured (e.g., by software, such as all or part of the application instructions 1436) to access information from other components of the electronic device 1400. In some example embodiments, the haptic processor 1428 is included in the processor 1424.
The haptic display 1434 may be a touch-screen display or a touch-pad display. In the example shown in
In certain example embodiments, the visual information represents display content (e.g., a webpage) that is presentable on a screen, whether visually perceptible or not, and the haptic information represents tactilely perceivable content (e.g., a feelable link or other element in the webpage) within the display content or located coincident with a portion of the display content, where the tactilely perceivable content may be presentable on a haptic device or haptic interface to a device. Hence, the visual information may include an element (e.g., a portion of the visual information) that is visually perceivable (e.g., a link or image), and the haptic information may render this element tactilely perceivable (e.g., as a texture). As noted above, some example embodiments of the visual information may include an element that is visually imperceptible (e.g., “hidden”), and the haptic information may render this element tactilely perceivable (e.g., for discovery by feel, but not by sight).
Accordingly, based on the application data 1438 and execution of the application instructions 1436, the processor 1424 may generate the display signal 1448. Similarly, based on the contact location and time 1442, the keyboard configuration 1446, the haptic effects library 1447, or any suitable combination thereof, the haptic processor 1428 may generate the haptic effect signal 1449. The display driver 1430 may receive the display signal 1448 and the haptic effect signal 1449 and use these signals to fully or partially control the haptic display 1434.
Thus, the electronic device 1400 may operate to present visual information, haptic information, or both, on the haptic touch-sensitive display 1422, based on the contact location and time 1442. According to various example embodiments, multiple instances of the contact location and time 1442 correspond to multiple touches (e.g., taps or presses) or movements (e.g., flicks, slides, or drags) from one or more body members (e.g., body member 120), and the electronic device 1400 may present visual information, haptic information, or both, based on these multiple instances of the contact location and time 1442. For example, two fingers (e.g., a left thumb and a right index finger) may constitute body members that provide multiple instances of the contact location and time 1442. As another example, three fingers (e.g., a left thumb, a right index finger, and a right middle finger) may constitute body members that provide such multiple instances of the contact location and time 1442.
The visual information specified by the display signal 1448 is merely an example of information that may be presented (e.g., displayed) by the haptic display 1434. Such information presented by the haptic display 1434 need not be visual (e.g., visually perceptible or visually imperceptible), but rather may be any type of presentable information. According to various example embodiments, the visual information may be replaced or supplemented with auditory information (e.g., sounds), tactile information (e.g., haptic effects), olfactory information (e.g., scents), flavor information (e.g., tastes), or any suitable combination thereof.
The method 9000 is shown as including operations 9010, 9020, and 9030. In operation 9010, a sensor (e.g., touch sensor 1432, a motion sensor like Kinect® by Microsoft®, a depth sensor, or any suitable combination thereof) generates contact information (e.g., contact location and time 1442) that describes a contact (e.g., a touch or movement) by a body member (e.g., body member 120) with the haptic device. As noted above, the haptic device may be configured to present visual information (e.g., information content, screen content, page content, or a web page), for example, via a touch-sensitive display (e.g., haptic touch-sensitive display 1422).
In operation 9020, a processor (e.g., haptic processor 1428) generates a haptic effect signal (e.g., haptic effect signal 1449) that specifies haptic information corresponding to an element included in the visual information (e.g., a link on a webpage or an image in a document). This haptic effect signal may be generated based on the contact information (e.g., contact location and time 1442) discussed above with respect to operation 9010.
In operation 9030, a display (e.g., haptic display 1434) presents the haptic information specified by the haptic effect signal generated in operation 9020. The presenting of the haptic information causes the element included in the visual information to be tactilely perceivable (e.g., by the body member 120 or another body member). Performance of the method 9000 may have the effect of initiating a “hold-and-feel” mode or an “explore” mode in which one or more body members (e.g., body member 120) may contact a touch screen of the device and tactilely perceive one or more elements presented in or with the visual information.
Any of the components, machines, systems, or devices shown or discussed with respect to
Any one or more of the modules or components described herein may be implemented using hardware (e.g., a processor of a machine) or a combination of hardware and software. For example, any module or component described herein may configure a processor to perform the operations described herein for that module. Moreover, any two or more of these modules or components may be combined into a single module or component, and the functions described herein for a single module or component may be subdivided among multiple modules or components.
The machine 1900 includes a processor 1902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), or any suitable combination thereof), a main memory 1904, and a static memory 1906, which are configured to communicate with each other via a bus 1908. The machine 1900 may further include a graphics display 1910 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)). The machine 1900 may also include an alphanumeric input device 1912 (e.g., a keyboard), a cursor control device 1914 (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), a storage unit 1916, a signal generation device 1918 (e.g., a speaker), and a network interface device 1920.
The storage unit 1916 includes a machine-readable medium 1922 on which is stored the instructions 1924 embodying any one or more of the methodologies or functions described herein. The instructions 1924 may also reside, completely or at least partially, within the main memory 1904, within the processor 1902 (e.g., within the processor's cache memory), or both, during execution thereof by the machine 1900. Accordingly, the main memory 1904 and the processor 1902 may be considered as machine-readable media. The instructions 1924 may be transmitted or received over a network 1926 via the network interface device 1920.
As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 1922 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions for execution by a machine (e.g., machine 1900), such that the instructions, when executed by one or more processors of the machine (e.g., processor 1902), cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory, an optical medium, a magnetic medium, or any suitable combination thereof.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In some embodiments, a hardware module may be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module may be a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module may include software encompassed within a general-purpose processor or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors.
Similarly, the methods described herein may be at least partially processor-implemented, a processor being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)).
The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms “a” or “an” are herein used, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction “or” refers to a non-exclusive “or,” unless specifically stated otherwise.
The following enumerated descriptions define various example embodiments of methods and systems (e.g., apparatus) discussed herein:
1. A device comprising:
a sensor configured to generate contact information that describes a contact by a body member with the device as the device presents visual information;
a haptic processor configured to generate a haptic effect signal that specifies haptic information that corresponds to an element included in the visual information being presented by the device, the generating of the haptic effect signal being based on the contact information that describes the contact by the body member with the device; and
a haptic display configured to present the haptic information specified by the haptic effect signal, the presenting of the haptic information causing the element included in the visual information being presented by the device to be tactilely perceivable by the body member.
2. The device of description 1, wherein:
the sensor is configured to detect the contact by the body member as a touch by the body member on a touch screen of the device.
3. The device of description 1 or description 2, wherein:
the sensor is configured to detect the contact by the body member as a movement of the body member on a surface of a touch screen.
4. The device of any of descriptions 1-3, wherein:
the sensor is configured to detect the contact by the body member as a touch by the body member on a side of the device.
5. The device of any of descriptions 1-4, wherein:
the haptic processor is configured to generate the haptic effect signal in response to the contact information indicating that the contact by the body member exceeds a threshold duration.
6. The device of any of descriptions 1-5, wherein:
the haptic processor is configured to generate the haptic effect signal in response to the contact information indicating that the contact by the body member is stationary with respect to a touch-sensitive display.
7. The device of any of descriptions 1-6, wherein:
the haptic display is configured to present the haptic information while presenting the visual information stationary with respect to a screen of the device.
8. The device of any of descriptions 1-7, wherein:
the sensor is configured to generate further contact information that describes a further contact by the body member on a touch screen of the device during the contact by the body member;
the haptic processor is configured to generate a display signal that specifies a modification of the visual information based on the further contact by the body member; and
the haptic display is configured to present the modification of the visual information based on the further contact by the body member.
9. The device of description 8, wherein:
the modification of the visual information includes panning the visual information with respect to the touch screen based on a movement of the body member on the touch screen.
10. The device of description 9, wherein:
the panning of the visual information incompletely follows the movement of the body member on the touch screen.
11. The device of description 9 or description 10, wherein:
the panning of the visual information is in response to the movement of the body member on the touch screen exceeding a threshold distance.
12. The device of any of descriptions 1-7, wherein:
the sensor is configured to generate further contact information that describes a further contact by a further body member on a touch screen of the device during the contact by the body member;
the haptic processor is configured to generate a display signal that specifies a modification of the visual information based on the further contact by the further body member; and
the haptic display is configured to present the modification of the visual information based on the further contact by the further body member.
13. The device of description 12, wherein:
the modification of the visual information includes panning the visual information with respect to the touch screen based on a movement of the further body member on the touch screen.
14. The device of description 13, wherein:
the panning of the visual information incompletely follows the movement of the further body member on the touch screen.
15. The device of description 13 or description 14, wherein:
the panning of the visual information is in response to the movement of the further body member on the touch screen exceeding a threshold distance.
16. The device of any of descriptions 12-15, wherein:
the further contact information describes multiple further contacts by multiple further body members on the touch screen; and
the modification of the visual information includes zooming the visual information with respect to the touch screen based on movements of the further body members on the touch screen.
17. The device of any of descriptions 1-16, wherein:
the element included in the visual information is a key within a virtual keyboard; and
the presenting of the haptic information causes the key within the virtual keyboard to be a tactilely perceivable key in the virtual keyboard.
18. The device of any of descriptions 1-17, wherein:
the element included in the visual information is visually perceptible; and
the presenting of the haptic information causes the visually perceptible element to be a tactilely perceivable element in the visual information.
19. The device of any of descriptions 1-17, wherein:
the element included in the visual information is visually imperceptible; and
the presenting of the haptic information causes the visually imperceptible element to be a tactilely perceivable element in the visual information.
20. A method comprising:
generating contact information that describes a contact by a body member with a device as the device presents visual information;
generating a haptic effect signal that specifies haptic information that corresponds to an element included in the visual information being presented by the device, the generating of the haptic effect signal being performed by a haptic processor based on the contact information that describes the contact by the body member with the device; and
presenting the haptic information specified by the haptic effect signal, the presenting of the haptic information causing the element included in the visual information being presented by the device to be tactilely perceivable by the body member.
21. The method of description 20, wherein:
the generating of the haptic effect signal is in response to the contact information indicating that the contact by the body member exceeds a threshold duration.
22. The method of description 20 or description 21, wherein:
the generating of the haptic effect signal is in response to the contact information indicating that the contact by the body member is stationary with respect to a touch-sensitive display.
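The gating conditions recited in descriptions 21 and 22 (generating the haptic effect signal only when the contact exceeds a threshold duration and is stationary with respect to the touch-sensitive display) can be sketched, by way of example and not limitation, as below. The threshold values and the notion of "stationary" as a small tolerance radius are hypothetical choices.

```python
import math

HOLD_THRESHOLD_S = 0.5      # hypothetical threshold duration (description 21)
STATIONARY_RADIUS_PX = 5.0  # hypothetical tolerance for "stationary" (description 22)

def should_generate_haptic_signal(duration_s, start_pos, current_pos):
    """Decide whether to generate the haptic effect signal for a contact,
    based on its duration and whether it has remained stationary."""
    stationary = math.dist(start_pos, current_pos) <= STATIONARY_RADIUS_PX
    return duration_s > HOLD_THRESHOLD_S and stationary
```

A one-second press that drifts only a pixel qualifies; a brief tap, or a press that slides across the display, does not.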
23. The method of any of descriptions 20-22 further comprising:
generating further contact information that describes a further contact by a further body member on a touch screen of the device during the contact by the body member;
generating a display signal that specifies a modification of the visual information based on the further contact by the further body member; and
presenting the modification of the visual information based on the further contact by the further body member.
24. The method of description 23, wherein:
the modification of the visual information includes panning the visual information with respect to the touch screen based on a movement of the further body member on the touch screen.
25. The method of description 23 or description 24, wherein:
the further contact information describes multiple further contacts by multiple further body members on the touch screen; and
the modification of the visual information includes zooming the visual information with respect to the touch screen based on movements of the further body members on the touch screen.
26. A non-transitory machine-readable storage medium comprising instructions that, when executed by one or more processors of a device, cause the device to perform operations comprising:
generating contact information that describes a contact by a body member with the device as the device presents visual information;
generating a haptic effect signal that specifies haptic information that corresponds to an element included in the visual information being presented by the device, the generating of the haptic effect signal being performed by the one or more processors of the device based on the contact information that describes the contact by the body member with the device; and
presenting the haptic information specified by the haptic effect signal, the presenting of the haptic information causing the element included in the visual information being presented by the device to be tactilely perceivable by the body member.
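As a non-limiting illustration of the overall flow recited in descriptions 20 and 26, the haptic effect signal can be derived by mapping the contact's position onto an element of the visual information, such as a key within a virtual keyboard (description 17). Every class, function, and value below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Contact information describing a contact by a body member."""
    x: float
    y: float

def generate_haptic_effect_signal(contact, elements):
    """Return the haptic information for the element under the contact,
    or None if the contact does not touch any element. Presenting this
    haptic information makes the element tactilely perceivable."""
    for bounds, haptic_info in elements:
        x0, y0, x1, y1 = bounds
        if x0 <= contact.x <= x1 and y0 <= contact.y <= y1:
            return haptic_info
    return None

# Usage: a single virtual-keyboard key occupying a 40x40 pixel rectangle.
elements = [((0, 0, 40, 40), "key_edge_texture")]
signal = generate_haptic_effect_signal(Contact(10, 10), elements)
```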
This application claims the benefit of U.S. Provisional Patent Application No. 61/506,900, filed Jul. 12, 2011, and U.S. Provisional Patent Application No. 61/647,033, filed May 15, 2012, which applications are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
61506900 | Jul 2011 | US
61647033 | May 2012 | US