The described embodiments relate generally to an input device in an electronic device. More particularly, the present embodiments relate to modifying a feedback or a user-perceived feedback of an input device in an electronic device.
Electronic devices can receive user inputs from a variety of different types of input devices, such as a keyboard, a button, a track pad, and a display. Typically, a user experiences a “feel” or feedback associated with the input device when the user provides an input to the input device. For example, a user associates a feedback with a key of a keyboard when the user depresses and releases the key. The sound and feel of the key are a result of some of the individual mechanical components in the key interacting with one another. This feedback can be based on several factors, such as the force needed to depress the key, the feel of the key as it travels between a rest position and a depressed position, the feel when the key bottoms out (e.g., reaches maximum depression), and/or a sound associated with the depression and/or release of the key.
It is often desirable to reduce the size of an electronic device and minimize machining costs and manufacturing time of such devices. However, as the overall size of the electronic device is reduced, the available space for the input devices is also reduced. Consequently, the internal components of an input device may be reduced in size or eliminated to reduce the overall size, dimension, and/or thickness of the input device. However, the reduction or elimination of components or layer(s) in an input device may negatively affect the feedback of the input device. For example, a keyboard may not provide a user with a desirable amount of tactile response (a “click”) when the user depresses a key. Additionally or alternatively, the sounds produced by the actuation of the individual keys in the keyboard may not produce an optimized or ideal user experience.
Embodiments disclosed herein modify the feedback or user-perceived feedback of an input device with one or more output or sound-generating devices. As used herein, the term “feedback” includes the tactile response or “feel” of the input device and/or the sound(s) produced by the input device during operation of the input device.
In one aspect, an electronic device includes an input device and a sound-generating device. The input device is configured to receive a user input. The input device produces a first pressure wave when the user input is received. In particular, the first pressure wave can be produced before, during, and/or after the actuation of the input device. The sound-generating device is configured to produce a second pressure wave around a time the user input is received. The second pressure wave superimposes on the first pressure wave to produce a third pressure wave that modifies the feedback or the user-perceived feedback of the input device.
In another aspect, a keyboard includes a key and an output device. In some embodiments, the output device is included in a key stack of the key. In other embodiments, the output device is disposed in the keyboard outside of the key stack. The key is configured to receive a key press, and the key produces a first sound when the key press is received. The output device is configured to produce an output around a time the key press is received. In particular, the output can be produced before, during, and/or after the actuation of the key. The output of the output device produces a second sound that interacts with the first sound to modify at least one of a sound or a tactile response of the key.
For example, in some embodiments, a key provides a first feedback output in response to a key press event. An output device is configured to produce a second feedback output in response to the key press event, where the second feedback output modifies the first feedback output. The modification of the first feedback output by the second feedback output can produce a given user-perceived feedback.
In another example, in some embodiments, a key is configured to produce a first feedback output in response to a key press event. An output device is configured to produce a second feedback output in response to the key press event, where the second feedback output modifies at least one of an acoustic or a tactile perception of the key press.
In another aspect, an input device that is configured to receive a user input can include one or more sound-producing components that produce a first sound when the user input is received, and an output device configured to produce a second sound around a time the user input is received. The second sound interacts with the first sound to modify a feedback or a user-perceived feedback of the input device. The second sound can be generated prior to, during, and/or after the user input is received.
For example, in some embodiments an electronic device can include an input device that is configured to receive a user input and to provide a first output in response to the user input. An output device is configured to produce a second output in response to the user input, where the second output modifies or obscures the first output.
In another example, in some embodiments an electronic device can include an input device that is configured to receive a user input and to provide a first user-perceptible output in response to the user input. An output device is configured to produce a second user-perceptible output in response to the user input, where the second user-perceptible output modifies or obscures the first user-perceptible output.
In yet another aspect, a method of operating an electronic device can include determining an identity of a user of the electronic device while the user interacts with the electronic device and detecting a body part approaching or contacting an input device. One or more sounds are produced around a time a user input is received by the input device. The one or more sounds can be generated prior to, during, and/or after the user input is received. The one or more sounds modify a feedback or a user-perceived feedback of the input device.
In another aspect, a method of operating an electronic device includes determining an identity of a location of the electronic device and detecting a body part approaching or contacting an input device. Based on the determined location, one or more sound pressure waves are generated around a time a user input is received by the input device to modify a feedback or a user-perceived feedback of the input device.
In yet another aspect, a method of operating an electronic device includes determining a use condition of the electronic device and retrieving an input device profile associated with the use condition. The use condition is associated with a user interacting with a function, an application program, or a component (e.g., input/output component) of the electronic device. The input device profile can specify, based on the use condition, one or more input devices whose feedback or user-perceived feedback is to be modified, which output device(s) should produce a sound to adjust the feedback or the user-perceived feedback, and how the output device(s) should produce the sound(s) (e.g., the audio file(s) and/or the signal(s) to be received by each output device). A processing device can access the input device profile and cause the specified audio file(s) and/or signal(s) to be transmitted to each specified output device. Based on the input device profile, one or more sounds are produced around a time a user input is received by the input device to modify a feedback or a user-perceived feedback of the input device.
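By way of a non-limiting illustration only, the structure of such an input device profile can be sketched in Python. The field names, device identifiers, and file names below (e.g., "speaker_left", "soft_click.wav") are hypothetical and are not part of the described embodiments.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class OutputAction:
    """One output device's response to an actuation (illustrative fields)."""
    output_device_id: str               # e.g., "speaker_left" or "actuator_keyboard"
    audio_file: Optional[str] = None    # audio file to play, if the device is a speaker
    drive_signal: Optional[str] = None  # drive waveform, if the device is an actuator
    timing: str = "during"              # "before", "during", or "after" actuation

@dataclass
class InputDeviceProfile:
    """Maps each input device whose feedback is modified to its output actions."""
    use_condition: str
    actions: Dict[str, List[OutputAction]] = field(default_factory=dict)

# Example: for a "quiet_room" use condition, soften the perceived key click
# with a quiet speaker output and a small haptic pulse just before actuation.
profile = InputDeviceProfile(
    use_condition="quiet_room",
    actions={
        "keyboard_key": [
            OutputAction("speaker_left", audio_file="soft_click.wav"),
            OutputAction("actuator_keyboard", drive_signal="short_pulse",
                         timing="before"),
        ]
    },
)
```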
The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property for any element illustrated in the accompanying figures.
Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following descriptions are not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims.
With some input devices, a user associates a feedback with the operation of an input device. The user-perceived feedback can be based on various factors associated with the actuation of an input device including, for example, a sound made by a mechanism of the input device and/or the tactile response or “feel” of the input device when the mechanism is actuated. The acoustic and tactile responses or feedback of the keys in a keyboard, as perceived by the user, are significant factors that affect whether a user likes or dislikes a keyboard. In some cases, the acoustic and tactile responses are a result of the components in each key and/or the interaction of some components with one another. The user may sense or perceive the acoustic and the tactile responses using a combination of touch and acoustic stimuli, and may associate those stimuli as user-perceived feedback of the key actuation.
The user-perceived feedback can be based on several factors, including the force needed to depress the keys, the feel of the keys as they travel between a rest position and a depressed position, the feel when a key bottoms out (e.g., reaches maximum depression), and/or the sounds associated with the depression and/or release of the keys. Acoustic and/or haptic stimuli received around the time the input device is actuated can affect how a person perceives the feedback of the input device. Additionally, in some situations the absence of stimuli affects how the person perceives the feedback of the input device.
The following disclosure relates to modifying a feedback or a user-perceived feedback of an input device using one or more output or sound-generating devices. An output device or a sound-generating device can produce acoustic and/or haptic stimuli around the time an input device is actuated. In particular, an output device can create the acoustic and/or haptic stimuli before, during, and/or after the actuation of the input device. The sound created by the input device and the acoustic and/or haptic stimuli produced by the output or sound-generating device can be heard and/or felt by a user and perceived as a single feedback event or stimulus when the individual sound(s) and stimuli occur within a given time period of each other. The acoustic and/or haptic stimuli can reduce, modify, cancel, or obscure the user-perceived feedback of the input device.
In some embodiments, acoustic and/or haptic output produced by the output device may obscure or mask the feedback produced by the input device. For example, the acoustic and/or haptic output may reduce the perceptibility of the sound or tactile feedback produced by the actuation of the input device. This may help mask or hide an undesirable sound or tactile feedback that is inherent in the feedback produced by the input device. Alternatively, the acoustic and/or haptic output produced by the output device may enhance or amplify an inherent feedback produced by the input device. For example, the acoustic and/or haptic output may improve or increase the perceptibility of inherent or natural feedback produced by the actuation of the input device.
In some embodiments, an input device that is configured to receive a user input produces a first sound pressure wave or sound when the user input is received. To modify the user-perceived feedback of the input device, one or more output devices or sound-generating devices produce a second sound pressure wave or sound around the time the user input is received. The second sound interacts with the first sound to modify the user-perceived feedback of the input device. For example, the second sound pressure wave produced by the sound-generating device(s) may be superimposed on the first sound pressure wave, which results in the user perceiving the feedback of the input device differently. The first and second sounds can combine, and the combined sounds are heard by a user and perceived as one sound when the first and second sounds occur within a given time period of each other. As such, the second sound produced by the output device(s) can occur prior to, during, and/or after the actuation of the input device. The combined sounds affect how a user perceives the feedback of the input device. In particular, the combined sounds can transform or enhance the feedback to a user-preferred feedback.
For example, a first sound or pressure wave can be produced before an input device is actuated and precede or combine with a second sound or pressure wave produced by the input device. The two sounds or the combined sounds (or pressure waves) may be perceived by the user as a third sound or pressure wave that is different from the distinct first and second sounds. The first sound can differ from the second sound or pressure wave in amplitude (e.g., volume or decibels), timing, frequency, and/or phase. For example, the first sound may have a higher decibel level and/or a lower frequency than the second sound.
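As a rough numerical illustration of this superposition (a sketch only; the amplitudes, frequencies, phases, and timing below are arbitrary example values), the two pressure waves can be modeled as sampled sine waves and summed:

```python
import numpy as np

fs = 48_000                          # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)       # 50 ms window around the key actuation

# First sound: produced by the output device (higher level, lower frequency).
first = 0.8 * np.sin(2 * np.pi * 400 * t)

# Second sound: produced by the key mechanism itself (quieter, higher
# frequency), starting 10 ms later than the pre-emptive output.
delay = int(0.010 * fs)
second = np.zeros_like(t)
second[delay:] = 0.3 * np.sin(2 * np.pi * 1500 * t[:len(t) - delay] + np.pi / 4)

# The pressure waves superimpose; a listener tends to perceive the sum as a
# single feedback event when the components fall within a short time window.
perceived = first + second
```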
In another example, a first sound can be produced before an input device is actuated and a second sound generated during or after the input device is actuated. The first sound can precede or combine with a third sound produced by the input device. Similarly, the second sound can follow or combine with the third sound. The first, second, and third sounds (or pressure waves) may be perceived by the user as a fourth sound that differs from the distinct first, second, and third sounds. For example, the first sound may have a higher decibel level and a lower frequency than the third sound and the second sound can have a lower decibel level and occur at the same frequency as the third sound.
For example, in some embodiments, a key in a keyboard is configured to produce a first feedback output in response to a key press event. An output device is configured to produce a second feedback in response to the key press event, where the second feedback modifies the first feedback output or at least one of an acoustic or a tactile perception of the key press.
In some embodiments, the sound produced by the output device(s) cancels or reduces the perceptibility of the sound generated during the actuation of the input device. For example, the undesirable frequencies or sounds produced by an input device during actuation of the input device can be pre-determined. During actuation of the input device, the output device(s) generate one or more sounds that are out of phase with the sounds produced by the input device to attenuate the undesirable sounds in real-time.
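The following is a minimal sketch of this phase-inversion idea, assuming the undesirable component has been pre-characterized as a single tone; a practical implementation would need to match the timing, amplitude, and phase of the actual key sound far more precisely:

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)

# Pre-characterized undesirable component of the key's actuation sound.
undesired = 0.5 * np.sin(2 * np.pi * 2000 * t)

# Anti-phase output: same amplitude and frequency, shifted by pi radians.
anti = 0.5 * np.sin(2 * np.pi * 2000 * t + np.pi)

residual = undesired + anti          # destructive interference
print(np.max(np.abs(residual)))      # numerically near zero
```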
These and other embodiments are discussed below with reference to
A keyboard 102 and/or a track pad 104 may each have an associated user-perceived feedback. With respect to the keyboard 102, the keys 106 at least partially extend through an aperture 108 defined in a housing 110 of the electronic device 100. Each key 106 may depress at least partially into the aperture 108 when a user presses the key 106. Typically, a user associates a feedback with a key 106 that can be based on several factors, such as the force needed to depress the key 106, the feel of the key 106 as it travels between a rest position and a depressed position, the feel when the key 106 bottoms out (e.g., reaches maximum depression), and/or the sound associated with the depression and/or release of the key 106. These factors, as well as other possible interactions and sounds, can produce an acoustic response and/or a tactile response associated with the depression and/or release of the key 106, which is perceived as feedback by the user.
The track pad 104 is disposed in an aperture 112 defined in the housing 110 of the electronic device 100. At least a portion of the track pad 104 depresses or deflects when a user presses the track pad 104. For example, a user may depress or deflect a portion of the track pad 104 to perform a “click” or a “double click” that selects an icon displayed on the display 114. Additionally or alternatively, in some embodiments a user can apply a force to a portion of the track pad 104 to submit a force input for an application or function.
Similar to the keys 106, a user associates a feedback with the track pad 104 that can be based on several factors, such as the force needed to depress or deflect a portion of the track pad 104, the feel of the track pad 104 when it moves, the feel when the track pad 104 reaches maximum depression or deflection, and/or the sound associated with the depression, deflection, and/or release of the track pad 104. These factors, as well as other possible interactions and sounds, can produce an acoustic response and/or a tactile response associated with the depression and/or release of the track pad 104, which is perceived as feedback by the user.
As will be discussed in more detail later, one or more output devices (see
As discussed earlier, the following disclosure relates to modifying a feedback or a user-perceived feedback of an input device using one or more sound-generating devices or output devices. The described embodiments are directed at a key in the keyboard (e.g., key 106). However, the disclosed techniques to modify a user-perceived feedback of an input device can be used with other types of input devices, such as the track pad 104, an input button, a switch, a display, and/or a flexible portion of a housing.
In the illustrated embodiment of
A compressible dome 216 is disposed between the key cap 214 and the base 210. In some embodiments, the compressible dome 216 is formed from an elastomeric material, although this is not required. In some embodiments, a compressible dome can be formed from a metal.
When a user presses the key cap 214, the key mechanism 202 collapses and the compressible dome 216 compresses as the key travels between the rest position and a depressed position. In particular, the second ends 212 of the two cross-structures 204, 206 rotate or pivot while the first ends 208 slide along the base 210 during the travel. When the user presses with a sufficient amount of force, the compressible dome 216 collapses and activates the electronic switch circuitry included in the base 210.
At least some of the components within the key stack 200 can interact with one another to produce sounds during actuation of the key (e.g., depression and/or release). For example, sound can be created when a user's finger contacts the key cap 214. Sound may also be generated by the first ends 208 of the cross-structures 204, 206 sliding along the base 210. The compressible dome 216 can produce one or more sounds as it compresses and/or releases. Additionally or alternatively, the compressible dome 216 may generate sound(s) when the compressible dome 216 collapses onto the base 210. Collectively, these sounds form one or more sounds or pressure waves that are perceived by a user as a feedback of the key.
Other factors that can contribute to the feedback of the key are the force needed to depress the key (e.g., 204, 206, 214, and 216 in
In the illustrated embodiment, a sensor 218 and an output device 220 are included in the key stack 200 and used to modify the user-perceived feedback of the key. The sensor 218 is configured to detect an object (e.g., finger) approaching and/or contacting the key cap 214. Data or signals from the sensor 218 can be used to trigger or activate the output device 220.
The sensor 218 is shown adjacent to or attached to the underside of the key cap 214, although this is not required. Any suitable type of sensor may be used. For example, the sensor 218 may be a proximity sensor that is positioned below or adjacent the key cap 214. Alternatively, the sensor 218 can include discrete proximity sensors positioned at different locations around and/or below the key cap 214 or within the key stack 200.
In another example, the sensor 218 may be a touch sensor that is configured to detect a finger approaching and/or contacting the key cap 214. The touch sensor may span the underside of the key cap 214. Alternatively, discrete touch sensors can be positioned at different locations below the key cap 214 and/or within the key stack 200.
The sensor 218 can employ any suitable type of sensing technology. For example, the sensor 218 may be an inductive or capacitive proximity or touch sensor. Alternatively, the sensor 218 can be a photosensor that detects the absence or the reflection of light. For example, the key cap 214 may include an opening that extends through the key cap 214. A photosensor can be positioned within the key stack 200 (e.g., below the opening) to detect light passing through the opening. A finger may cover the opening when the finger approaches and/or contacts the key cap 214, and the reduction or absence of light may be detected by the photosensor.
As described earlier, signals or data from the sensor 218 can be used to trigger or activate the output device 220. The output device 220 is configured to produce one or more sounds around the time the key is actuated. The sound(s) produced by the key and the sound(s) produced by the output device 220 can combine, and the combined sound may be heard by the user and associated with the actuation of the input device when the two sounds occur within a given time period of each other. As such, the sound(s) produced by the output device 220 can occur prior to, during, and/or after the actuation of the key.
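One possible reading of this trigger chain is sketched below. The event kinds, file names, and play() method are hypothetical placeholders; the embodiments do not prescribe any particular software interface.

```python
import time

PERCEPTUAL_WINDOW_S = 0.05   # sounds within tens of milliseconds tend to fuse

def on_sensor_event(event, output_device):
    """Hypothetical handler: fire the output around the time of actuation."""
    if event.kind == "approach":
        # A finger approaching the key cap: emit the output slightly before
        # the expected actuation so the two sounds fuse perceptually.
        output_device.play("pre_actuation.wav")
    elif event.kind == "contact":
        # Contact/press detected: emit during, and optionally after, actuation.
        output_device.play("during_actuation.wav")
        time.sleep(PERCEPTUAL_WINDOW_S)
        output_device.play("post_actuation.wav")
```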
Any suitable output device or sound-generating device can be used. For example, in one embodiment the sound-generating device is an acoustic component such as a speaker that outputs a sound (e.g., an acoustic output). In another embodiment, the sound-generating device is an actuator that moves one or more components to produce a sound. Any suitable actuator can be used. For example, an electromagnetic actuator can move one or more components (e.g., a magnet) in response to an electromagnetic field generated by passing an electrical current through a coil. The movement of the component(s) varies based on a direction of the electrical current through the coil and the amount of time the electrical current passes through the coil. The movement can produce a force (e.g., an impulse or an impulse force) and/or a vibration that may or may not be detectable by the user. An impulse may include a single pulse of energy and a vibration may include a series of pulses or an oscillating movement. Thus, the actuator can produce a variety of sounds based on the different movements of the component(s).
For example, the output device 308a and/or 308b may be configured as an actuator. One or more actuators can be positioned substantially anywhere in the housing 312. As described earlier, an electromagnetic actuator moves one or more components (e.g., a magnet) in response to a generated electromagnetic field. The movement of the component(s) can produce one or more sounds. For example, the moving component can produce sounds as the moving component moves or slides in one or more directions. Additionally, the moving component may generate sounds when the moving component collides with or impacts a housing or a frame that is positioned adjacent to the moving component.
In some embodiments, the actuator(s) may create a haptic output that may not be detectable by the user. The sound(s) and/or the haptic output can combine with the acoustic and/or tactile response of the keys 304 and/or the track pad 306 to modify, enhance, obscure, or cancel the user-perceived feedback of the input device. Example actuators are shown and described in conjunction with
In the illustrated embodiment, one output device 308a (e.g., an actuator) is positioned adjacent the keyboard 302 to produce sound(s) that modify the user-perceived feedback of the keys 304 in the keyboard 302 and/or the track pad 306 when one or more keys 304 or the track pad 306 are actuated by a user. Additionally or alternatively, an output device 308b (e.g., an actuator) is situated adjacent the track pad 306 to produce sound(s) around the time an input device is actuated to modify the user-perceived feedback of the input device (e.g., track pad 306 and/or the keys 304).
In some embodiments, a different type of output device 310a, 310b, and/or 310c can be included in the housing 312 to modify, enhance, obscure, or cancel the user-perceived feedback of the track pad 306 and/or the keys 304 in the keyboard 302. The output device 310a, 310b, and/or 310c may be configured as a speaker that outputs an audio signal or other acoustic output based on one or more audio files stored in a memory (see
In the illustrated embodiment, two output devices 310a (e.g., speakers) are positioned adjacent the keyboard 302 to produce sound(s) that modify the user-perceived feedback of the keys 304 in the keyboard 302 and/or the track pad 306 when one or more keys 304 or the track pad 306 are actuated by a user. Additionally or alternatively, an output device 310b and/or 310c (e.g., a speaker) is situated adjacent the track pad 306 to produce sound(s) around the time an input device is actuated to modify the user-perceived feedback of the input device (e.g., track pad 306 and/or the keys 304).
As described earlier, the sound produced by an input device and the sound produced by an output device can combine, and the combined sounds may be heard by a user when the two sounds occur within a given time period of each other. Consequently, an output device 308a-b, 310a-c can generate a sound prior to, during, or after the actuation of an input device.
In some embodiments, data from one or more sensors 314a-b, 316 can be used to trigger at least one output device 308a-b, 310a-c. For example, one or more sensors may detect a finger approaching an input device and the data from the sensor(s) 314a-b, 316 can trigger or activate an output device to generate a sound prior to, during, or after the actuation of the input device. Additionally or alternatively, one or more sensors 314a-b, 316 may detect a finger contacting an input device and the signals from the sensor(s) 314a-b, 316 can activate an output device to generate one or more sounds during or after the actuation of the input device.
The sensor(s) 314a-b, 316 can be situated substantially anywhere in the electronic device 300. In the illustrated embodiment, the one or more sensors 314a-b, 316 are disposed in the housing 312 of the electronic device 300 independent of (separate from) an input device. The sensor(s) 314a-b, 316 may be any suitable sensor configured to sense an object at a distance (e.g., over or on a key 304 or the track pad 306). Such sensors include, but are not limited to, proximity, presence-sensing, photoelectric, and/or image sensors.
For example, in the illustrated embodiment the sensor 314a can be configured as one or more proximity sensors that detect a finger approaching and/or contacting the track pad 306. The one or more proximity sensors can be situated at any suitable location within or adjacent the track pad 306.
Additionally or alternatively, in another example the sensor 314b may be one or more presence-sensing sensors that detect one or more fingers approaching or contacting the keyboard 302 (e.g., one or more keys 304). The one or more presence-sensing sensors can be located at any suitable position in the electronic device 300. In the illustrated embodiment, a presence-sensing sensor 314b is disposed adjacent the keyboard 302.
In another example embodiment, a sensor 316 can be one or more image sensors that are positioned adjacent the display 318 to capture images of the keyboard 302 and/or the track pad 306. The images may be analyzed to detect one or more fingers approaching and/or contacting the keyboard 302 (e.g., keys 304) and/or the track pad 306. In other embodiments, one or more image sensors can be located at any suitable position in the electronic device 300.
In some embodiments, one or more sensors 320 and/or one or more output devices 322 can be included in the keyboard 302 outside of a key stack of a key 304. In such embodiments, the sensor(s) 320 may detect a finger approaching and/or contacting the keyboard 302. Signals or data from the sensor(s) 320 can activate the output device(s) to produce sound prior to, during, and/or after the actuation of the keys 304 in the keyboard 302.
In some embodiments, one or more sensors 324 can be used to detect a characteristic of the environment in which the electronic device 300 is operating. The data from the sensor(s) 324 may be used to determine a location of the electronic device or the level of sound in the environment in which the electronic device is situated. For example, the sensor(s) 324 may be a microphone that collects audio data of the location or the sound level. Data from the sensor(s) 324 can be used to activate and/or modify the operation of an output device to generate a sound based on the actuation of an input device.
Additionally or alternatively, one of the sensors 314a-b, 316 can be used to determine a location of the electronic device. In one non-limiting example, an image sensor (e.g., sensor 316) may capture one or more images that are analyzed to determine the location of the electronic device 300. As will be described in more detail later in conjunction with
As discussed earlier, any suitable output device or sound-generating device can be used. As one example, the sound-generating device is an acoustic device such as a speaker that outputs an audio signal or other acoustic output. The acoustic output precedes, follows, or combines with the sound(s) produced by the input device during actuation to modify, enhance, obscure, or cancel a user-perceived feedback of the input device.
In another example, the output device is an actuator that produces haptic output based on the actuation of the input device. The haptic output may be movement, a force, and/or a vibration based on the actuation of the input device. The actuator can create one or more sounds while producing the movement, force, and/or vibrations. The sound(s) and/or haptic stimuli produced by the actuator precede, follow, or combine with the sound(s) produced by the input device during actuation to modify, enhance, obscure, or cancel a user-perceived feedback of the input device.
In some embodiments, the movement, force and/or vibrations are not detectable by a user. For example, the haptic output can be an impulse caused by a first component impacting or striking a second component in the electronic device. A user may not feel or detect the impulse, but the user can hear the sound produced when the first component impacts the second component.
Alternatively, the movement, force and/or vibrations can be detectable by a user and may be used to modify the tactile “feel” or feedback of an input device, such as a key in a keyboard, a button, and any other input device that a user touches or presses. For example, a haptic output can be an impulse caused by a first component impacting or striking a second component in the electronic device. A user may feel or detect the impulse, which causes the user to perceive a modified tactile feedback of the input device.
Initially, a determination is made at block 500 as to whether an object approaching the input device is detected. Example objects include a finger, a stylus, or other pointing device. If not, the process waits at block 500. When an object is approaching an input device and the approaching object is detected, the method passes to block 502 where one or more output devices prepare to generate an acoustic stimulus and/or a haptic stimulus based on the type of input device the object is approaching. For example, one or more output devices that are within a key, within a keyboard, and/or adjacent the keyboard may be used when an object is approaching the key in the keyboard.
Next, as shown in block 504, a determination is made as to whether the output device(s) that prepared at block 502 are to produce sound(s) prior to the actuation of the input device. If not, the process passes to block 508. When sound is to be produced prior to the actuation of the input device, the output device(s) that prepared at block 502 generate the acoustic and/or haptic stimuli at block 506 to modify the feedback or the user-perceived feedback of the input device.
If the output device(s) that prepared at block 502 generate the haptic and/or acoustic stimuli at block 506, or if it is determined the output device(s) will not produce an output at block 504, the method passes to block 508. At block 508 a determination is made as to whether the output device(s) that prepared at block 502 are to produce acoustic and/or haptic stimuli during the actuation of the input device. If not, the process passes to block 512. When acoustic and/or haptic stimuli are to be produced during actuation of the input device, the output device(s) that prepared at block 502 generate the acoustic and/or haptic output at block 510 to modify the feedback or the user-perceived feedback of the input device. The acoustic and/or haptic stimuli can be the same as, or different from, the sound(s) produced at block 506.
If the output device(s) that prepared at block 502 generate the haptic and/or acoustic stimuli at block 510, or if it is determined the output device(s) will not produce an output at block 508, the method passes to block 512. At block 512 a determination is made as to whether the output device(s) that prepared at block 502 are to produce one or more outputs after actuation of the input device. If not, the process returns to block 500. When acoustic and/or haptic stimuli are to be produced after actuation of the input device, the output device(s) that prepared at block 502 generate the output(s) at block 514 to modify the feedback or the user-perceived feedback of the input device. The acoustic and/or haptic stimuli can be the same as, or different from, the output(s) produced at block 506 and/or at block 510.
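The decision flow of blocks 500 through 514 can be summarized in the following Python sketch, where the sensor, profile, and device objects are hypothetical stand-ins for whatever hardware and configuration a given embodiment provides:

```python
def feedback_loop(sensor, profile):
    """Sketch of blocks 500-514: stimuli before/during/after actuation."""
    while True:
        event = sensor.wait_for_approach()                 # block 500
        devices = profile.devices_for(event.input_device)  # block 502
        if profile.emit_before(event):                     # block 504
            emit(devices, phase="before")                  # block 506
        if profile.emit_during(event):                     # block 508
            emit(devices, phase="during")                  # block 510
        if profile.emit_after(event):                      # block 512
            emit(devices, phase="after")                   # block 514

def emit(devices, phase):
    for device in devices:
        device.produce_stimulus(phase)  # acoustic and/or haptic output
```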
In some embodiments, the one or more sounds or haptic outputs produced by the output device may precede, follow, or combine with the sound(s) generated by the input device, which results in the user perceiving the feedback of the input device differently. The acoustic and/or haptic stimuli and the input device sound(s) are heard and/or felt by a user and associated with the input device as feedback when the two sounds occur within a given time period of each other.
In other embodiments, the one or more outputs produced by the output device(s) cancel the sound(s) (or some of the sound) generated during the actuation of the input device. For example, the undesirable frequencies or sounds produced by an input device during actuation of the input device can be pre-determined. During actuation of the input device, the output device(s) generate one or more sounds that are out of phase with the sounds produced by the input device to attenuate the undesirable sounds in real-time.
The process of
For example, when a user is typing on a keyboard, one or more characteristics of the typing can be used to identify the user. Characteristics such as the typing speed, the force applied to the keys, and/or the manner of typing (e.g., pauses in between key strikes) may be used to identify the user. Alternatively, the applications accessed by the user, and the manner in which the user interacts with the applications can be used to determine the user's identity.
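As a toy illustration of identifying a user from typing cadence (one of the characteristics named above), inter-key intervals can be compared against enrolled values. The feature set and tolerance below are invented for the example and are not part of the described embodiments.

```python
import statistics

def cadence_features(key_timestamps):
    """Summarize inter-key intervals (seconds) as mean and spread."""
    gaps = [b - a for a, b in zip(key_timestamps, key_timestamps[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

def identify_user(key_timestamps, enrolled, tolerance=0.05):
    """Return the enrolled user whose stored cadence is closest, if any."""
    mean_gap, spread = cadence_features(key_timestamps)
    best, best_dist = None, tolerance
    for user, (m, s) in enrolled.items():
        dist = abs(m - mean_gap) + abs(s - spread)
        if dist < best_dist:
            best, best_dist = user, dist
    return best  # None if no enrolled user matches closely enough
```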
In some embodiments, one or more sensors can be used to determine the identity of the user. For example, a biometric sensor can capture biometric data as the user interacts with the electronic device. The biometric data may be used to identify the user. In another example, an image sensor can capture an image of a user and the user may be identified based on an analysis of the image (e.g., facial recognition program).
In other embodiments, an identifier associated with the user can be used to determine the identity of the user. For example, a password or a PIN that a user enters to access the electronic device, a website, or an application may be used to identify the user. Alternatively, a user may enter his or her identity into the electronic device (e.g., via a software program).
Next, as shown in block 604, an input device profile associated with the identified user is obtained. As described earlier, the input device profile can specify one or more input devices whose feedback or user-perceived feedback is to be modified, which output device(s) should produce acoustic and/or haptic stimuli to adjust the feedback or the user-perceived feedback, and how the output device(s) should produce the output(s) (e.g., the audio file(s) and/or the signal(s) to be received by each output device). A processing device can access the input device profile and cause the specified audio file(s) and/or signal(s) to be transmitted to each specified output device.
A determination is then made at block 606 as to whether an object (e.g., a finger or stylus) is detected approaching and/or contacting an input device. If not, the process waits at block 606. When an object is approaching and/or contacting an input device, the method passes to block 608 where one or more output devices generate acoustic and/or haptic stimuli based on the input device profile. The acoustic and/or haptic stimuli modify the feedback or the user-perceived feedback of the input device. In some embodiments, block 608 can be replaced with one or more of the blocks 504, 506, 508, 510, 512, and 514 in
In some embodiments, a first user may prefer the feedback of an input device to be greater or more noticeable (perceivable) than a second user of the same input device. In such embodiments, a haptic stimulus can be increased for the first user and lowered for the second user. Additionally or alternatively, an acoustic stimulus can be lowered for the first user and not produced for the second user.
Initially, a user interacts with an electronic device at block 700. As the user interacts with the electronic device, one or more characteristics of the surrounding environment and/or the use condition of the electronic device may be determined (block 702). For example, the environment may be a quiet environment, such as in a library, a conference room, or a home office. Alternatively, the environment can be a noisier environment, such as in a coffee shop, a manufacturing facility, or an airport terminal. Sounds in the environment can be detected with one or more sensors in the electronic device. For example, a microphone can capture audio of the environment.
The identity of the environment can be determined using a variety of techniques. For example, one or more sensors may be used to determine the location. An image sensor can capture an image of a location and the location may be identified based on an analysis of the image (e.g., image recognition program). Additionally or alternatively, a microphone may capture sounds of the environment and a processing device can analyze the audio data to determine a location's identity. In some embodiments, a navigation sensor, such as a global positioning sensor, can be used to determine the identity of a location.
In other embodiments, a user may enter an identity of the environment or location into the electronic device (e.g., via a software program) to identify the location.
Additionally or alternatively, the components, application programs, and/or functions of an electronic device that a user is interacting with (“use condition”) can be determined. For example, the user may have headphones plugged into a headset port. The user may be using headphones because he or she is in a noisier environment, listening to audio, or watching a video. Alternatively, a user may be using an assistive technology that provides additional accessibility to an individual who has physical or cognitive challenges. Example assistive technologies include, but are not limited to, software or hardware text-to-speech or speech synthesizers, a modified keyboard, a speech or voice recognition software application program, a screen reader, or a TTY/TDD conversion modem.
Next, as shown in block 704, an input device profile associated with the identified location or use condition is obtained. The input device profile can specify, based on the location and/or use condition, one or more input devices whose feedback or user-perceived feedback is to be modified, which output device(s) should produce haptic and/or acoustic stimuli to adjust the feedback or the user-perceived feedback of the input device, and how the output device(s) should produce the acoustic and/or haptic stimuli. A processing device can access the input device profile and cause the specified audio file(s) and/or signal(s) to be transmitted to each specified output device.
A determination is then made at block 706 as to whether an object (e.g., finger or stylus) is detected approaching and/or contacting an input device. If not, the process waits at block 706. When an object is approaching and/or contacting an input device, the method passes to block 708 where one or more output devices generate acoustic and/or haptic stimuli to modify the feedback or the user-perceived feedback of the input device. As described earlier, a processing device can access the input device profile and cause appropriate inputs to be received by the specified output device(s). For example, specified audio file(s) and/or signal(s) may be transmitted to each specified output device.
In some embodiments, a haptic stimulus may be increased when the user is in a coffee shop or a manufacturing site. The increased haptic stimulus may be felt by a user and/or produce a sound that is heard by the user. The increased haptic stimulus, along with the noisy environment, can cause the perceived feedback of an input device to remain substantially consistent as perceived by the user.
Alternatively, a haptic stimulus can be increased when the user is wearing ear plugs. In such situations, a user may want his or her perceived feedback of an input device (e.g., the feel of the keys in a keyboard) to remain substantially consistent. The increased haptic stimulus, along with the attenuation of sound by the ear plugs, can maintain the perceived feedback of the input device at a regular or expected level of feedback.
Thus, in some environments, it may be more difficult for a user to detect a user-perceived acoustic feedback of an input device. In such environments, the acoustic and/or haptic stimuli produced by an output device can be modified (e.g., increased) to improve the perceptibility of the user-perceived acoustic feedback. Similarly, in other environments, it may be more difficult for a user to detect a user-perceived haptic feedback of an input device. Accordingly, the acoustic and/or haptic stimuli produced by an output device can be modified (e.g., increased) to improve the perceptibility of the user-perceived haptic feedback.
Thus, the acoustic and/or haptic stimuli produced by an output device can be adaptive, where the acoustic and/or haptic stimuli are selected as a function of a user preference and/or environmental conditions (e.g., background noise and/or vibration). In some implementations, the acoustic and/or haptic stimuli can be adaptive based on how a user is using an input device. For example, if a user is typing on a keyboard with a higher level of force (e.g., more forceful presses on the keys of the keyboard), the acoustic and/or haptic stimuli can be increased to modify the user-perceived feedback of the keyboard.
Additionally, as described earlier, an electronic device can be configured to detect one or more characteristics of the environment in which it is being used. The electronic device can include one or more sensors coupled to a processing device. The sensor(s) can be configured to detect the characteristic(s) of the environment, such as sound, vibration, temperature, and the like. For example, signals received from an accelerometer can be used by a processing device to determine that the electronic device is moving (e.g., based on detected vibrations). Additionally, signals from a microphone may be used by the processing device to determine a location of the electronic device (e.g., based on sounds). For example, the signals from the accelerometer and the microphone can be used to determine that the electronic device is on a train or bus. Alternatively, the signals from the accelerometer and the microphone can be used to determine that the user is wearing the electronic device while the user is moving (e.g., running). Based on that determination, an acoustic stimulus of an output device can be produced to enhance the acoustic feedback of the electronic device and assist the user in perceiving the acoustic feedback of the electronic device.
In some embodiments, block 708 can be replaced with one or more of the blocks 504, 506, 508, 510, 512, and 514 in
Initially, a user interacts with an electronic device at block 800. As the user interacts with the electronic device, the ambient sound level may be determined based on data from one or more sensors (block 802). For example, a microphone can capture audio of a location and the sound level at the location may be determined. Additionally or alternatively, a sound measurement application running on the electronic device may be used to measure the decibel or noise level at the location.
Next, as shown in block 804, a determination is made as to whether an object (e.g., finger) is detected approaching and/or contacting an input device. If not, the process waits at block 804. When an object is approaching and/or contacting an input device, the method passes to block 806 where, based on the determined sound level, one or more output devices generate acoustic and/or haptic stimuli to modify the feedback or the user-perceived feedback of the input device. In some embodiments, the operation of one or more output devices can be adjusted based on the ambient sound level. For example, when the output device is a speaker, the volume and/or the audio file that is played can be changed based on the ambient sound levels. Alternatively, characteristics of an electric current that is received by an electromagnetic actuator can be modified based on the ambient sound level. Characteristics such as the frequency, timing, amplitude, and/or phase of an electric current can be changed to cause the actuator to produce different acoustic and/or haptic stimuli.
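One possible mapping from a measured ambient level to an output strength is sketched below; the decibel breakpoints and gain range are illustrative assumptions rather than values taken from the described embodiments:

```python
def output_gain(ambient_db):
    """Map an ambient sound level (dB SPL) to an output gain in [0.2, 1.0].

    Illustrative mapping only: quieter rooms get a reduced stimulus and
    louder environments a stronger one, so the user-perceived feedback of
    the input device stays roughly consistent across environments.
    """
    QUIET_DB, LOUD_DB = 35.0, 80.0
    ambient_db = min(max(ambient_db, QUIET_DB), LOUD_DB)
    return 0.2 + 0.8 * (ambient_db - QUIET_DB) / (LOUD_DB - QUIET_DB)

print(output_gain(38))   # library: ~0.25, a soft stimulus
print(output_gain(72))   # coffee shop: ~0.86, a stronger stimulus
```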
In some embodiments, a haptic stimulus may be increased when the ambient sound level is high. The increased haptic stimulus may be felt by a user and/or produce a sound that is heard by the user. The increased haptic stimulus, along with the noisy environment, can cause the perceived feedback of an input device to remain substantially consistent as perceived by the user. Additionally or alternatively, an acoustic stimulus may be increased when the ambient sound level is high. The increased acoustic stimulus may be heard by the user, which causes the perceived feedback of an input device to remain substantially consistent as perceived by the user.
Alternatively, haptic and/or acoustic stimuli can be decreased when the ambient sound level is low. In some situations, a user may want the feedback of an input device (e.g., the sounds produced by the keys in a keyboard) to be lower in a quieter environment. The decreased haptic and/or acoustic stimuli, along with the quiet environment, can lower or reduce the perceived feedback of the input device.
In some embodiments, block 806 can be replaced with one or more of the blocks 504, 506, 508, 510, 512, and 514 in
In other embodiments, the acoustic stimuli produced by an output device are designed to cause the frequency, amplitude, or overall signal output of the combined output to be too low to be perceived by a user. For example, the acoustic stimulus or stimuli produced by one or more output devices may cancel the first sound produced by the input device.
In
The plot 904a represents the masking threshold needed to mask or modify the sound(s) 902a. As shown in
The second sound(s) (bar 910) can reduce, modify, enhance, or obscure the feedback or the user-perceived feedback of the input device (as represented by bar 902c). In some embodiments, the one or more second sounds or sound pressure waves (bar 910) may combine with, or be superimposed on, the first sound(s) or sound pressure wave(s) (bar 902c), which results in the user perceiving the feedback of the input device differently. The first and second sounds (or sound pressure waves) are heard by a user and associated with the input device as feedback when the first and second sounds occur within a given time period of each other. Plot 904c represents the combined feedback or user-perceived feedback.
In other embodiments, the one or more second sounds (or sound pressure waves) produced by the output device(s) cancel the sound(s) (or some of the sound) generated during the actuation of the input device. For example, the undesirable frequencies or sounds produced by an input device during actuation of the input device can be pre-determined. During actuation of the input device, the output device(s) generate one or more sounds that are out of phase with the sounds produced by the input device to attenuate the undesirable sounds in real-time.
Based on the actuation of the input device 1010, the electromagnetic actuator 1006 can be activated or actuated to move the mass 1004 towards the housing 1002. In some cases, a haptic output is produced due to the movement of the mass 1004. In some cases, a haptic output is produced due to an impact between the mass 1004 and the housing 1002. The haptic output produced by the haptic device 1000 may not be directly perceptible by the user. For example, the haptic output, alone and not in combination with another output or stimulus, may not be readily perceptible to the user. This haptic and/or acoustic output can reduce, modify, enhance, or obscure the feedback or the user-perceived feedback of the input device 1010, in accordance with the embodiments described herein.
In
The circuit layer 1104 is configured to transmit electrical signals to the piezoelectric actuator 1100 to cause the piezoelectric actuator 1100 to move or vibrate. Based on the actuation of the input device 1110, the piezoelectric actuator 1100 can be activated (i.e., receive electrical signals) and move or vibrate. The movement of the piezoelectric actuator 1100 may produce a detectable or non-detectable haptic output. This haptic output can reduce, modify, enhance, or obscure the feedback or the user-perceived feedback of the input device 1110.
Although the embodiments are described herein in conjunction with a track pad and a keyboard, other embodiments are not limited to these types of input devices. The present invention can be implemented in a variety of user input devices, including, but not limited to, a joystick, an input button, a dial, a mouse, a stylus, a knob, a rotatable steering device, a touchscreen, a rocker switch, a scanner or sensor (e.g., a fingerprint sensor), and/or a movable selector or switch. When interacting with an input device (e.g., submitting an input that provides data and/or control signals), a user can associate a feedback with the input device that may be based on several factors, such as the force needed to submit an input, the feel of the input device as it responds to the submitted input (e.g., movement), and/or the sound associated with the submitted input. These factors, as well as other possible interactions and sounds, can produce an acoustic response and/or a tactile response that is associated with the input device, which is perceived as feedback by the user. The tactile response can include haptically rendered taps that vary in duration and/or intensity, textures, and/or varying amounts of simulated friction. This user-perceived feedback or output (e.g., acoustic and/or tactile output) may be due to the natural response of the input device. The user's perception of the natural response of the input device may be enhanced, amplified, masked, obscured, or canceled using at least some of the techniques disclosed herein.
For example, in one embodiment, an input device is configured as a stylus or a digital pen. The stylus can be in communication with an electronic device through contact (e.g., with a touchscreen), a wired connection, and/or a wireless connection. A user can submit inputs to an electronic device using any suitable technique. Example techniques include, but are not limited to, touching, tapping, and/or pressing the stylus to a surface of the electronic device (e.g., to a touchscreen), pressing a button in the stylus, hovering the stylus over a surface of the electronic device (e.g., over an icon on a touchscreen), and/or pressing and/or tilting the stylus. A feedback of the stylus can be modified using one or more output or sound-generating devices. An output device can produce acoustic and/or haptic stimuli around the time a user submits an input through the stylus. The output device can create the acoustic and/or haptic stimuli before, during, and/or after the actuation of the input device. The sound or tactile feedback created by the stylus and the acoustic and/or haptic stimuli produced by the output or sound-generating device can be heard and/or felt by a user and perceived as a single feedback event when the individual sound(s) and stimuli occur within a given time period of each other. The acoustic and/or haptic stimuli can reduce, modify, cancel, amplify, enhance, or obscure the user-perceived feedback of the stylus.
In another example, an input device is configured as a mouse that is used to control a cursor or movable indicator displayed on a screen. A user can submit inputs to the electronic device by moving the mouse to position the movable indicator over a graphical element (e.g., an icon) displayed on the screen and pressing a button on the mouse, by applying a force or pressure to a particular region of the mouse surface, and/or by hovering the movable indicator over the graphical element. The feedback of the mouse can be based on the sounds and/or the tactile feedback of the mouse as the mouse is moved across a surface, the force needed to press the button or apply a force to a particular region of the mouse, and/or the response of the mouse to the pressure or force. The acoustic and/or tactile user-perceived feedback may be modified using one or more output or sound-generating devices. An output device can produce acoustic and/or haptic stimuli around the time a user submits an input with the mouse. The sound or tactile feedback created by the mouse and the acoustic and/or haptic stimuli produced by the output device can be heard and/or felt by a user and perceived as a single feedback event when the individual sound(s) and stimuli occur within a given time period of each other. The acoustic and/or haptic stimuli can reduce, modify, cancel, enhance, amplify, or obscure the user-perceived feedback of the mouse.
As another example, the output device can be a material that responds to an input signal or an environmental input. For example, a piezo material can constrict or move in response to an applied electrical signal (see
Additionally or alternatively, the input device 1200 may include one or more sensors 1204. Each sensor 1204 is configured to detect an approaching body part (e.g., finger) or an actuation of the input device 1200. For example, a sensor 1204 can detect the motion of a key in a keyboard (e.g., key 106) when the key is depressed. Alternatively, a sensor 1204 may detect the proximity of a finger to the input device 1200 and/or the finger contacting the input device 1200.
In some embodiments, the electronic device 100 can include one or more output devices 1206 that are separate from the input device 1200. Like the output device(s) 1202, the one or more output devices 1206 are configured to modify the feedback or the user-perceived feedback. In particular, each output device 1206 is a sound-generating device that produces one or more sounds to modify the feedback or the user-perceived feedback.
As one example, an output or sound-generating device is an acoustic device such as a speaker that outputs audio or other acoustic output around the time the input device is actuated. As discussed earlier, the acoustic output can be output before, during, and/or after actuation of the input device. The acoustic output precedes, follows, or combines with the acoustic and/or tactile response(s) of the input device to modify, enhance, obscure, or cancel a feedback or a user-perceived feedback of the input device.
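The superposition described here can be illustrated numerically: an inverted second wave attenuates the device's own wave, while an in-phase second wave reinforces it. The waveform model and parameter values below are assumptions chosen for illustration, not values from the embodiments.

```python
import numpy as np

# Illustrative parameters (assumptions, not figures from the text).
SAMPLE_RATE = 48_000   # samples per second
DURATION_S = 0.005     # a short 5 ms "click"
t = np.linspace(0.0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)

# Model the input device's own click as a decaying sinusoid (first wave).
device_wave = np.exp(-t * 900) * np.sin(2 * np.pi * 2_000 * t)

# Second wave from the speaker: inverted to cancel, in-phase to enhance.
cancel_wave = -device_wave
enhance_wave = 0.5 * device_wave

# The user hears the superposition (the third wave).
print("cancelled peak:", np.abs(device_wave + cancel_wave).max())  # exactly 0
print("enhanced peak:", np.abs(device_wave + enhance_wave).max())  # 1.5x original
```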
In another example, the output device is an actuator that produces a haptic output based on the actuation of the input device.
In some embodiments, the output device is an electromagnetic actuator. The electromagnetic actuator can move a mass (e.g., a magnet) based on an electromagnetic field that is produced when an electrical signal passes through a conductor. The movement of the mass can produce one or more sounds. For example, the moving mass can produce sounds as the moving mass moves or slides in one or more directions. Additionally, the moving mass may generate sounds when the moving mass strikes or impacts a housing or a frame that is positioned adjacent to the moving mass. Characteristics of an electric current that is received by an electromagnetic actuator can be modified to cause the actuator to produce different sounds. For example, the frequency, amplitude, and/or phase of the electric current can change to produce different acoustic and/or haptic stimuli.
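One way to expose the frequency, amplitude, and phase of the drive current as adjustable characteristics is sketched below. The function name and the parameter choices are hypothetical, intended only to make the relationship between the electrical signal and the resulting stimulus concrete.

```python
import numpy as np


def drive_current(freq_hz: float, amplitude: float, phase_rad: float,
                  duration_s: float = 0.02,
                  sample_rate: int = 48_000) -> np.ndarray:
    """Sketch: a sinusoidal drive signal for an electromagnetic actuator.
    Varying frequency, amplitude, or phase changes the acoustic and/or
    haptic stimulus the moving mass produces."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t + phase_rad)


# A sharper "tap" versus a softer "buzz" (illustrative parameter choices).
tap = drive_current(freq_hz=250.0, amplitude=1.0, phase_rad=0.0, duration_s=0.005)
buzz = drive_current(freq_hz=120.0, amplitude=0.4, phase_rad=np.pi / 2)
```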
In some embodiments, the actuator can be a piezoelectric actuator that is used to produce movement in a component in the electronic device. The movement of the component may produce one or more sounds that combine or interact with the sound(s) produced by an input device during actuation of the input device. The sound(s) generated by the output device modify, enhance, obscure, or cancel the feedback or the user-perceived feedback of the input device.
In some embodiments, the actuator(s) may create a tactile or haptic output that may or may not be detectable by the user. The sound(s) and/or the haptic output can combine with the acoustic and/or tactile response of the keys 304 and/or the track pad 306 to modify, enhance, obscure, or cancel the feedback or the user-perceived feedback of the input device.
Additionally or alternatively, the electronic device 100 can include one or more sensors 1208 that are separate from the input device 1200. Like the sensor(s) 1204, at least one sensor 1208 is configured to detect an approaching body part (e.g., finger) or the actuation of the input device 1200. As described earlier, the sensor(s) 1208 may be positioned substantially anywhere on the electronic device 100. Example sensors include, but are not limited to, an image sensor, a temperature sensor, a light sensor, a proximity sensor, a touch sensor, a force sensor, and an accelerometer.
In some embodiments, additional sensor(s) 1210 can be positioned substantially anywhere on the electronic device 100. The additional sensors 1210 may be configured to sense substantially any type of characteristic, such as, but not limited to, images, pressure, light, touch, force, biometric data, temperature, position, location, motion, and so on. For example, the sensor(s) 1210 may be an image sensor, a temperature sensor, a light sensor, an atmospheric pressure sensor, a proximity sensor, a humidity sensor, a magnet, a gyroscope, a biometric sensor, an accelerometer, a navigation sensor, and so on. In some embodiments, some or all of the sensors 1208 and the additional sensors 1210 can be used for multiple purposes. For example, some or all of the sensors 1210 can be used to detect an approaching body part (e.g., finger) or the actuation of the input device 1200. Additionally or alternatively, some or all of the sensors 1208 may be used for functions or applications other than the detection of an approaching body part (e.g., finger) or the actuation of the input device 1200.
The electronic device 100 may further include the display 114, one or more processing devices 1212, memory 1214, a power source 1216, one or more input/output (I/O) devices 1220, and a network communications interface 1218. The processing device(s) 1212 can control or coordinate some or all of the operations of the electronic device 100. The processing device(s) 1212 can communicate, either directly or indirectly, with substantially all of the components of the electronic device 100. For example, a system bus or signal line 1222 or other communication mechanism can provide communication between the input device 1200, the output device(s) 1202 and/or 1206, the sensor(s) 1204, 1208, and/or 1210, the processing device(s) 1212, the memory 1214, the power source 1216, the I/O device(s) 1220, and/or the network communications interface 1218. The one or more processing devices 1212 can be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, the processing device(s) 1212 can each be a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), or combinations of such devices. As described herein, the term “processing device” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements.
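The coordination role of the processing device(s) 1212 over the signal line 1222 can be loosely modeled as a publish/subscribe bus, as in the following sketch. `SignalBus` and its topic strings are hypothetical constructs for illustration, not components of the embodiments.

```python
from collections import defaultdict
from typing import Callable


class SignalBus:
    """Sketch: a publish/subscribe stand-in for a system bus, letting sensors
    notify the processing logic, which in turn drives the output devices."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload=None) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)


if __name__ == "__main__":
    bus = SignalBus()
    bus.subscribe("key_press", lambda key: print(f"emit stimulus for {key}"))
    bus.publish("key_press", "key 106")
```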
The memory 1214 can store electronic data that can be used by the electronic device 100. For example, the memory 1214 can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing and control signals, data structures or databases, one or more input device profiles, and so on. The memory 1214 can be configured as any type of memory. By way of example only, the memory can be implemented as random access memory, read-only memory, Flash memory, removable memory, or other types of storage elements, or combinations of such devices.
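The "input device profiles" mentioned above could, for example, associate each input device with the output parameters used to modify its feedback. The record layout below is a hypothetical sketch of such a profile, not a structure defined by the embodiments.

```python
from dataclasses import dataclass


@dataclass
class InputDeviceProfile:
    """Hypothetical record for an input device profile stored in memory:
    which output to fire for a given input device and how to shape it."""
    device_name: str
    output_kind: str   # e.g., "acoustic" or "haptic"
    freq_hz: float     # drive-signal frequency
    amplitude: float   # drive-signal amplitude
    offset_ms: float   # timing relative to actuation (negative = before)


keyboard_profile = InputDeviceProfile(
    device_name="keyboard key",
    output_kind="acoustic",
    freq_hz=2_000.0,
    amplitude=0.8,
    offset_ms=0.0,
)
```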
The power source 1216 can be implemented with one or more devices capable of providing energy to the electronic device 100. For example, the power source 1216 can be one or more batteries or rechargeable batteries. Additionally or alternatively, the power source 1216 may be a connection cable that connects the electronic device to another power source, such as a wall outlet or another electronic device.
The network communications interface 1218 can facilitate transmission of data to or from other electronic devices. For example, the network communications interface 1218 can transmit electronic signals via a wireless and/or wired network connection. Examples of wireless and wired network connections include, but are not limited to, cellular, Wi-Fi, Bluetooth, infrared, and Ethernet.
The one or more I/O devices 1220 can transmit and/or receive data to and from a user or another electronic device. The I/O device(s) 1220 can include a touch sensing input surface such as a track pad, one or more buttons, one or more microphones or speakers, one or more ports such as a microphone port, and/or a keyboard.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 62/355,632, filed on Jun. 28, 2016, and entitled “Modification of User-Perceived Feedback of an Input Device Using Acoustic or Haptic Output,” which is incorporated by reference as if fully disclosed herein.