The described embodiments relate generally to wearable audio devices. More particularly, embodiments relate to a wearable audio device capable of detecting touch and force inputs propagated through a body of a user or other structure.
An earbud is worn at least partially inside of the ear of a user and typically is configured to produce a range of sounds based on a signal from another device. Many traditional earbuds suffer from significant drawbacks that may limit the ability to control sounds, or other outputs, at the earbud. In many cases, the earbud requires a hardwired connection that physically couples the earbud to another device and the sound is controlled based on input received at the device. Further, earbuds and/or other connected devices may be unresponsive to voice commands, thereby limiting the adaptability of the earbud to control multiple types of functions.
Certain embodiments described herein relate to, include, or take the form of a wearable audio device. The wearable audio device may include an enclosure. The wearable audio device may further include a sealing component coupled to the enclosure and configured to engage an ear of a user, thereby forming a sealed passage between an ear canal of the ear and the enclosure. The wearable audio device may further include an input device disposed in the enclosure and coupled to the ear canal by the sealed passage, and configured to detect a signal propagating through a body of the user and provide a detection output. The wearable audio device may further include an audio output device acoustically coupled to the ear canal by the sealed passage and configured to provide an audio output. The wearable audio device may further include a processing unit operably coupled to the input device and the audio output device and configured to receive the detection output from the input device and change the audio output from a first mode to a second mode in response to receiving the detection output.
Other embodiments described generally reference a method. The method includes detecting, by an input device of a wearable audio device positioned in an outer ear of a user, an input comprising an audio signal propagating through a body of the user. The method further includes determining, by a processing unit of the wearable audio device, that the input was generated by an input action on the body of the user, and in response to determining that the input is consistent with the input action at the body of the user, adjusting an output of the wearable audio device in accordance with the input.
Still further embodiments described herein generally reference a system that includes a first input device configured to provide a first detection output in response to detecting a signal propagating through a human body and a second input device configured to provide a second detection output in response to detecting the signal propagating through the human body. The system further includes a processing unit operably coupled to the first and second input devices and configured to analyze the first and second detection outputs to determine that the signal was generated by an input action on the human body. The system further includes an audio output device operably coupled to the processing unit and configured to provide an audio output. In response to determining that the signal corresponds to the input action on the human body, the processing unit is further configured to adjust the audio output.
In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following description.
The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like elements.
The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property for any element illustrated in the accompanying figures.
Additionally, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof) and the boundaries, separations, and positional relationships presented therebetween, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto.
The description that follows includes systems, methods, and apparatuses that embody various elements of the present disclosure. However, it should be understood that the described disclosure may be practiced in a variety of forms in addition to those described herein.
The present disclosure describes systems, devices, and techniques related to a wearable audio device, such as an earbud or other device, that is configured to detect inputs and change the operation of the wearable audio device in accordance with the inputs. In some embodiments, the wearable audio device is disposed in a structure and detects signals propagating through or within the structure. Various inputs may cause one or more signals to propagate through or within the structure, outside the structure, or some combination thereof. The wearable audio device may determine whether a detected signal was generated by an input and, if so, change its operation in accordance with the input. Examples of structures include an ear, the walls of an ear canal, a head, a user's body, a body part, and the like.
The wearable audio device may be worn at least partially in an ear of a user. When disposed in an ear, the wearable audio device may be coupled to an ear canal of the user, or another part of the user's body. For example, the wearable audio device may form a sealed passage between an ear canal of the user and one or more components of the wearable audio device. As used herein, an “ear canal” of a human body refers to the open space between the outer ear and the ear drum. Accordingly, the wearable audio device may detect signals propagating through or within the user's body, outside the user's body, or some combination thereof. In various embodiments, the signals detected by the wearable audio device correspond to inputs.
The input devices of the wearable audio device detect signals that correspond to inputs (e.g., detect inputs) from users, other devices, and other sources. In various embodiments, inputs are detected without a user directly interacting with (e.g., physically touching) the wearable audio device. Inputs need not be initiated by any action of a user; they may instead be initiated by other devices or other sources. In various embodiments described herein, users or devices may perform input actions to provide inputs to the wearable audio device. As used herein, an “input action” refers to any action, condition, or the like that can be detected by a wearable audio device and interpreted by the wearable audio device as an input. In various embodiments, one or more input actions may correspond to inputs at the wearable audio device.
In some embodiments, users perform input actions by interacting with the structure in which the wearable audio device is disposed (e.g., a human body). In some embodiments, a user may contact (e.g., tap, swipe, press, or otherwise contact) the structure. For example, the user may contact an exterior surface of his or her body, such as the skin on his or her face. Further examples of input actions include a user clicking his or her teeth together or clicking his or her tongue. Still further examples include producing vocal sounds, subvocalizations, or other sounds. “Subvocalizations,” as used herein, refers to vocal sounds below the level at which humans can typically hear, around 0 decibels. Input actions may further include a user moving a body part, such as moving (e.g., shaking) his or her head, moving his or her hands, arms, legs, and so on. Input actions are not intended to be limited to the user interacting with his or her own body. For example, input actions may include a user contacting or otherwise interacting with another object, such as an inanimate object or another person.
In various embodiments, input actions cause or produce one or more signals to propagate through or within a human body (e.g., through-body signals), outside a human body, or some combination thereof. For example, performing input actions may cause an acoustic, vibration, or other type of signal to propagate through or within the user's body. Similarly, a user performing input actions may cause an optical, image, acoustic, or other type of signal to propagate outside the user's body. The embodiments described herein with respect to a user's body as the structure are applicable to other types of structures as well.
Different input actions may correspond to different inputs at the wearable audio device. For example, a user may swipe on his or her body to provide one type of input and tap on his or her body to provide another type of input. Continuing the example, a user may swipe on his or her body to control a volume of an audio output of the wearable device and/or the user may tap on his or her body to start or pause the audio output.
Some input actions may have a directional component; changing a direction of a gesture, or gesturing in different directions, may be interpreted as different inputs. For example, the user may swipe up on his or her body to increase a volume of an audio output of the wearable device, swipe down to decrease the volume, swipe right to advance an audio track, and/or swipe left to repeat or change to a previous audio track.
The input actions may further include a location component, such that the same gesture or action in different locations yields different inputs. For example, a user may tap on a left side of his or her head to pause an audio output and tap on a right side of his or her head to advance an audio track of the audio output.
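To make the mapping concrete, the following is a minimal sketch, in Python, of how a gesture-to-command table like the one described above might be represented. The gesture names, the `player` interface, and the dispatch logic are all hypothetical illustrations, not part of the described embodiments:

```python
# Hypothetical sketch: mapping input actions (gesture type, direction,
# and body location) to playback commands. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class InputAction:
    gesture: str                # e.g., "tap" or "swipe"
    direction: Optional[str]    # e.g., "up", "down", "left", "right"
    location: Optional[str]     # e.g., "head_left", "head_right"

def make_action_map(player) -> dict[InputAction, Callable[[], None]]:
    # A real device might store such a table in user-editable form.
    return {
        InputAction("swipe", "up", None): player.volume_up,
        InputAction("swipe", "down", None): player.volume_down,
        InputAction("swipe", "right", None): player.next_track,
        InputAction("swipe", "left", None): player.previous_track,
        InputAction("tap", None, "head_left"): player.toggle_pause,
        InputAction("tap", None, "head_right"): player.next_track,
    }

def dispatch(action: InputAction, action_map: dict) -> None:
    handler = action_map.get(action)
    if handler is not None:
        handler()  # adjust the audio output in accordance with the input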
Wearable audio devices described herein may detect input actions in a variety of ways. One or more input devices of a wearable audio device may detect input actions by detecting the signals produced by input actions. For example, an input device such as a camera or microphone may receive a signal propagating outside of a user's body and generate a corresponding input signal. As another example, an input device such as a microphone or a vibration sensor that is coupled to a user's ear canal, may receive a signal propagating through or within the user's body and generate a corresponding input signal. As used herein, a signal detected “through” or “within” a human body (e.g., a user's body) or other structure refers to a signal that is propagating or has propagated through or within the human body at the time it is detected, and may be referred to as a “through-body signal.”
In various embodiments, the input devices may include any suitable components for detecting inputs. Examples of input devices include audio sensors (e.g., microphones), optical or visual sensors (e.g., cameras, visible light sensors, invisible light sensors), proximity sensors, touch sensors, force sensors, mechanical devices (e.g., switches, buttons, keys), vibration sensors, orientation sensors, motion sensors (e.g., accelerometers, velocity sensors), location sensors (e.g., GPS devices), thermal sensors, communication devices (e.g., wired or wireless communication devices), resistive sensors, magnetic sensors, electroactive polymers (EAPs), strain gauges, and so on, or some combination thereof. Each input device may be configured to detect one or more particular types of input and provide an output corresponding to the detected input, for example to a processing unit.
The wearable audio device may include output devices for providing haptic, visual, and/or audio outputs. As described above, the outputs may be generated and/or manipulated based on the inputs detected at the input devices. The outputs provided by the output devices may also be responsive to, or initiated by, a program or application executed by a processing unit of the wearable audio device and/or an associated companion device. The output devices may include any suitable components for providing outputs. Examples of output devices include audio output devices (e.g., speakers), visual output devices (e.g., lights, displays), tactile output devices (e.g., haptic output devices), communication devices (e.g., wired or wireless communication devices), or some combination thereof. Each output device may be configured to receive one or more instructions (e.g., signals), for example from the processing unit, and provide an output corresponding to the instructions.
A speaker or other audio output device of the wearable audio device may provide an audio output through the sealed passage and to the ear canal. The audio output may include music, voice communications, instructions, sounds, alerts and so forth that may be initiated or controlled by a processing unit of the wearable audio device and/or an associated companion device, as described herein. The audio output may be responsive to various types of inputs, including through-body inputs, external inputs, touch and gesture inputs and physical manipulations of controls or other tactile structures. For example, an audio output of the wearable audio device may change from a first mode to a second mode in response to detecting a signal that was generated by an input action.
Reference will now be made to the accompanying drawings, which assist in illustrating various features of the present disclosure. The following description is presented for purposes of illustration and description. Furthermore, the description is not intended to limit the inventive aspects to the forms disclosed herein. Consequently, variations and modifications commensurate with the following teachings, and skill and knowledge of the relevant art, are within the scope of the present inventive aspects.
In various embodiments, the input devices 110 may include any suitable components for detecting inputs. Examples of input devices 110 include audio input devices (e.g., microphones), optical or visual sensors (e.g., cameras, visible light sensors, invisible light sensors), proximity sensors, touch sensors, force sensors, mechanical devices (e.g., switches, buttons, keys), vibration sensors, orientation sensors, motion sensors (e.g., accelerometers, velocity sensors), location sensors (e.g., GPS devices), thermal sensors, communication devices (e.g., wired or wireless communication devices), resistive sensors, magnetic sensors, electroactive polymers (EAPs), strain gauges, and so on, or some combination thereof. Each input device 110 may be configured to detect one or more particular types of input and provide a detection output corresponding to the detected input, for example to the processing unit 150.
The output devices 140 may include any suitable components for providing outputs. Examples of output devices 140 include audio output devices (e.g., speakers), visual output devices (e.g., lights, displays), tactile output devices (e.g., haptic output devices), communication devices (e.g., wired or wireless communication devices), or some combination thereof. Each output device 140 may be configured to receive one or more instructions (e.g., signals), for example from the processing unit 150, and provide an output corresponding to the instructions.
The processing unit 150 is operably coupled to the input devices 110 and the output devices 140. As used herein, “operably coupled” means coupled in any suitable manner for operation, including wiredly, wirelessly, or some combination thereof. The processing unit 150 is adapted to communicate with the input devices 110 and the output devices 140. For example, the processing unit 150 may receive an output from an input device 110 that corresponds to a signal detected by the input device. The processing unit 150 may interpret the output from the input device 110 to determine whether the signal was generated by an input (e.g., an input action) and whether to provide and/or change one or more outputs of the wearable audio device 100 in response the input. The processing unit 150 may then send instructions to one or more output devices 140 to provide and/or change outputs as appropriate. The processing unit 150 may include one or more computer processors or microcontrollers that are configured to perform operations in response to computer-readable instructions. Examples of suitable processing units are discussed in more detail below with respect to
As discussed herein, it is recognized that a signal detected by an input device 110, the detection output of the input device 110 provided in response to detecting the signal, and further transmissions of the signal contents to additional device components and/or devices may not strictly be the same signal. However, for ease of discussion, the use of the term “signal” herein refers to the original signal as well as the contents of the signal as they are transmitted to various components in various media and take on various forms. Similarly, the use of the term “output” herein refers to an output signal of a device as well as the contents of the output as they are transmitted to various components in various media and take on various forms.
As discussed above, in some embodiments, the input devices 110 include one or more microphones used to detect audio input. The audio input may include voice commands, vibrations, bodily noises, ambient noise, or other acoustic signals. In some cases, the wearable audio device 100 may have one or more dedicated microphones that are configured to detect particular types of audio input. For example, the wearable audio device 100 may include a first microphone, such as a beamforming microphone, that is configured to detect voice commands from a user, a second microphone that is configured to detect ambient noise, and a third microphone that is configured to detect acoustic signals or vibrations from a user's body (such as that produced by a facial tap or other gesture).
The processing unit 150 may receive a detection output from each microphone and distinguish between the various types of inputs. For example, the processing unit 150 may identify a detection output from the microphone(s) associated with an input (e.g., a voice command, a facial tap, and so on) and initiate a signal that is used to control a corresponding function of the wearable audio device 100, such as an output provided by an output device 140. The processing unit 150 may also identify signals from the microphone(s) associated with an ambient condition and ignore the signal and/or use the signal to control an audio output of the wearable audio device 100 (e.g., a speaker), such as acoustically cancelling or mitigating the effects of ambient noise.
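A sketch of how a processing unit might route the detection outputs of the dedicated microphones described above; the source labels and handler methods are assumptions made for illustration, not an interface from the described embodiments:

```python
# Illustrative routing of detection outputs from dedicated microphones.
# Handler names (handle_voice_command, etc.) are assumed, not from the source.

def route_detection_output(source: str, samples, processor) -> None:
    """Dispatch one microphone's detection output to the matching handler."""
    if source == "voice_mic":        # beamforming microphone for voice commands
        processor.handle_voice_command(samples)
    elif source == "ambient_mic":    # ambient noise, e.g., for noise cancellation
        processor.update_noise_cancellation(samples)
    elif source == "body_mic":       # through-body signals such as a facial tap
        processor.detect_input_action(samples)
```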
One or more input devices 110 may operate to detect a location of an object or body part of a user relative to the wearable audio device 100. This may also include detecting gestures, patterns of motion, signs, finger or hand positions, or the like. To facilitate the foregoing, the wearable audio device 100 may include a capacitive sensor that is configured to detect a change in capacitance between an electrode of the sensor and a user. As the user approaches the sensor, the capacitance changes, and thus may be used to determine a distance of the user relative to the electrode. In this manner, multiple capacitive sensors may be used to track a location or position of a body part of the user along an exterior surface of the wearable audio device 100.
In some cases, the capacitive sensor may also be used to measure or detect a force input on an exterior surface of the wearable audio device 100. For example, the user may press the exterior surface and deform the surface toward the electrode of the sensor. The surface may deform by a known amount for a given force, and thus a force applied by the user to the surface may be determined based on the position of the user derived from the change in capacitance.
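As a rough numerical illustration of the force measurement just described, assuming a parallel-plate capacitance model and a linearly elastic surface (the geometry, stiffness, and readings below are invented for the example):

```python
# Hedged sketch: estimating applied force from a measured capacitance,
# assuming a parallel-plate model (C = eps0 * eps_r * A / d) and a surface
# that deflects linearly with force (F = k * (d0 - d)).
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def estimate_force(c_measured: float, area: float, rest_gap: float,
                   stiffness: float, eps_r: float = 1.0) -> float:
    gap = EPSILON_0 * eps_r * area / c_measured   # current electrode gap, m
    deflection = max(0.0, rest_gap - gap)         # how far the surface moved, m
    return stiffness * deflection                 # force in newtons

# Example with invented values: a 4 mm^2 electrode with a 0.5 mm rest gap
# (rest capacitance ~71 fF) that reads 80 fF during a press yields ~0.11 N.
force = estimate_force(c_measured=80e-15, area=4e-6,
                       rest_gap=0.5e-3, stiffness=2000.0)
```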
As discussed above, the wearable audio device 100 may also include one or more visual or optical sensors. The optical sensors may, in certain embodiments, measure an intensity of light at one or more locations on the exterior surface of the wearable audio device 100. A decrease in the intensity of light at a particular location may be associated with a user input or gestures, such as a cupping gesture over the wearable audio device 100. A lens or protective window of the optical sensor may be camouflaged from a surrounding surface of the wearable audio device 100, for example, using an optical coating, which may match the surrounding surface but be translucent to certain wavelengths of light. In other embodiments, the optical sensor may be, or form a component of, a camera or camera system. This may allow the wearable audio device 100 to detect and recognize specific types of gestures using pattern recognition.
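One plausible (assumed) way to recognize the cupping gesture described above in software is to watch for a sustained drop in measured light intensity relative to a rolling baseline; the thresholds and class interface here are illustrative only:

```python
# Illustrative cupping-gesture detector: reports a gesture when intensity
# stays well below a rolling ambient baseline for several samples.
from collections import deque

class CupGestureDetector:
    def __init__(self, drop_ratio: float = 0.4, hold_samples: int = 10):
        self.baseline = deque(maxlen=100)  # recent ambient-light readings
        self.drop_ratio = drop_ratio       # fraction of baseline considered "dark"
        self.hold_samples = hold_samples   # samples the drop must persist
        self._dark_count = 0

    def update(self, intensity: float) -> bool:
        """Feed one sensor reading; returns True when a cupping gesture is seen."""
        if self.baseline:
            avg = sum(self.baseline) / len(self.baseline)
            if intensity < self.drop_ratio * avg:
                self._dark_count += 1
                return self._dark_count >= self.hold_samples
        self._dark_count = 0
        self.baseline.append(intensity)    # only track baseline when uncovered
        return False
```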
Optical sensors, in certain embodiments, may also be used to detect a location of the wearable audio device 100. For example, an optical sensor may be positioned relative to a portion of the wearable audio device 100 configured to be worn in a user's ear. This may allow the optical sensor to detect a receipt of the wearable audio device 100 within a person's ear (e.g., in response to a decrease in light intensity measured at the sensor).
The input devices 110 may also include one or more mechanical devices or tactile structures that are configured to receive physical input or manipulations. Physical manipulations may include a squeeze, a collapse, a roll or rotation, a jog, a press, a pull, and so on. In some cases, the physical input may manipulate the mechanical device or tactile structure and cause the mechanical device or tactile structure to physically complete a switch or circuit that triggers a switch event. In other cases, the physical manipulation of the tactile structure is detected or recognized by substantially non-contact types of sensors or switches of the wearable audio device 100, such as an optical reader detecting the rotation of a wheel, and so on. The mechanical device or tactile structure may therefore take various forms, including a textured exterior surface, a multi-input button or dome, a wheel, a crown, and so on.
The wearable audio device 100 may include various other components and sensors that are configured to detect input. In one embodiment, the wearable audio device 100 may include an antenna that is configured to communicatively or wirelessly couple the wearable audio device 100 to another device, such as the companion device 170 described below with respect to
As a further example, the input devices 110 may include a thermal sensor to detect the placement of the wearable audio device 100 within a user's ear. Accelerometers and speed sensors may be used to detect changing conditions, for example, when the wearable audio device 100 is used or otherwise worn by a user driving an automobile. In other cases, other combinations of sensors and associated functionalities are possible and contemplated herein.
As described above, an input device 110 may initiate or provide a signal corresponding to an input detected at the input device. The signal may be provided to the processing unit 150 and used to control one or more outputs of the wearable audio device 100. In this regard, the wearable audio device 100 may include various output devices 140 in order to provide outputs and alter or manipulate the outputs based on detected inputs.
The output devices 140 may include one or more audio output devices, such as speakers, configured to produce an audio output, such as various types of music, voice communications, instructions, sounds, alerts, other acoustic signals, or some combination thereof. In some embodiments, the speakers have a relatively small form factor corresponding to that of the wearable audio device 100 so that the speakers may be disposed within an enclosure of the wearable audio device 100. For example, the speaker may generally have a maximum dimension within a range of several millimeters; however, other dimensions are possible. Notwithstanding, the speaker may be configured to provide substantially high-resolution audio output to a user. This may be facilitated by the various components (e.g., sealing component 322 of
Audio outputs may be configured to change in response to inputs received at the wearable audio device 100. For example, the processing unit 150 may be configured to change the audio output provided by a speaker in response to an input corresponding to a gesture input, physical manipulation, voice command, and so on. The speaker may thus receive multiple distinct signals from the processing unit 150 corresponding to different types of input or otherwise corresponding to distinct functions. To illustrate, a first signal corresponding to a first gesture input may cause the processing unit 150 to alter the audio output in a first manner (e.g., such as increasing playback volume in response to an up swipe), and a second signal corresponding to a second gesture input may cause the processing unit 150 to alter the audio output in a second manner (e.g., such as decreasing playback volume in response to a down swipe), among other possibilities.
The output devices 140 may include one or more tactile output devices configured to produce a tactile or haptic output. Haptic outputs may be facilitated by a haptic feedback structure, such as a dome, electromechanical actuator, and so forth. The output devices 140 may include one or more tactile structures to provide a tactile indication of, for example, the receipt of input by the wearable audio device 100. This may include a buckling of a collapsible dome, or other deformation of a structure that registers input in response to a physical manipulation. Additionally or alternatively, a tactile structure may visually and/or tactilely indicate a region of the wearable audio device 100 operable to receive input. For example, a textured surface may provide a tactile output to a user as the user feels the changing contours of the surface.
The output devices 140 may include one or more visual output devices configured to illuminate or otherwise visually alter a portion of the wearable audio device 100, such as an exterior surface. Various lights or visual indicators may be used to produce a visual output of the wearable audio device 100. The visual output may be indicative of an operational status of the wearable audio device 100. For example, the visual output may include certain colors that represent a power-on mode, a standby mode, a companion-device pairing mode, a maintenance mode, and so on.
Visual output may also be used to indicate a receipt of input by the wearable audio device 100. As one possibility, visual indicators along a surface of the wearable audio device 100 may produce a momentary flash, change colors, and so on, in response to received inputs. In this regard, the visual output may be responsive or adaptable to the various different types of input detected or that otherwise correspond to distinct functions of the wearable audio device 100. This may include producing a first visual output (e.g., a first color, animation, or sequence) in response to a first input (audio, gesture, mechanical, and so forth) and producing a second visual output (e.g., second color, animation, or sequence) in response to a second input (audio, gesture, mechanical, and so forth).
Additional or alternative output devices 140 may generally be configured to produce other types of output, including but not limited to, thermal outputs, pressure outputs, outputs for communication to external or companion devices, and so on. In one embodiment, the wearable audio device 100 may include an antenna that is configured to communicatively or wirelessly couple the wearable audio device 100 to another device, such as the companion device 170 described below with respect to
The input devices 110 and the output devices 140 described with respect to
The wearable audio device 100 and the companion device 170 may be communicatively coupled via a wireless connection. For example, the wearable audio device 100 may be paired with the companion device 170 using a short range wireless interconnection; however, other wireless connection techniques and protocols may be used. In other embodiments, the wearable audio device 100 and the companion device 170 may be connected via a wired connection.
For purposes of illustration, the companion device 170 includes at least a context module 175, an input module 180, and an output module 185. Broadly, the context module 175 may be configured to provide an operational context to the wearable audio device 100. An operational context may be, or include, information associated with an application or program executed on the companion device 170 (e.g., an application executed by the processing unit 150). The operational context may therefore be used by the wearable audio device 100 to provide an output, such as a music output (where the executed program is an audio file), a voice communication output (where the executed program is a telephone call), an audio notification output (where the executed program is a navigation application), among various other possibilities.
The operational context may also be used by the wearable audio device 100 to determine or activate a particular type of input or sensor. For example, different types of gestures, audio input, physical manipulations and so forth may be registered as input (or ignored) based on the operational context. To illustrate, where the operational context causes the wearable audio device 100 to output music, the processing unit 150 may be configured to control the music based on a direction of motion of different types of input. In another mode, where the operational context causes the wearable audio device 100 to output voice communications, the processing unit 150 may be configured to control the voice communications based on a physical manipulation of a tactile structure (and ignore gesture inputs), among various other possibilities.
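A compact sketch of this context-dependent gating, with invented context and input-type names; a real device would derive the context from the companion device as described above:

```python
# Hypothetical mapping from operational context to the input types the
# processing unit acts on; other input types are ignored in that context.
ENABLED_INPUTS = {
    "music_playback": {"swipe", "tap", "voice"},  # gestures control the music
    "voice_call": {"tactile", "voice"},           # gesture inputs ignored on calls
    "standby": {"tactile"},
}

def should_handle(context: str, input_type: str) -> bool:
    return input_type in ENABLED_INPUTS.get(context, set())
```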
With reference to the input module 180, the companion device 170 may be configured to receive input using various different sensors and structures. For example, the companion device 170 may include mechanical buttons, keyboards, touch-sensitive surfaces, trackpads, microphones, and other sensors. The input detected by the input module 180 may be used to control an output of the wearable audio device 100. As one example, an audio playback volume may be increased or decreased in response to a manipulation of one or more mechanical keys or buttons of the companion device 170. The input detected by the input module 180 may also be used to control a mode of the wearable audio device 100, such as a mode for detecting certain audio inputs. For example, the wearable audio device 100 may be configured to enter a mode in which audio input is used to control a function of the wearable audio device 100 and/or the companion device 170.
With reference to the output module 185, the companion device 170 may be configured to provide output using various different components and structures. For example, the companion device 170 may include speakers, a display, tactile structures, and other components. The output provided by the output module 185 may be responsive to input detected by the wearable audio device 100. As one example, in response to a detection of input at the wearable audio device 100, a graphic may be depicted at a display of the companion device 170, or, likewise, a sound may be produced at a speaker of the companion device 170. The output module 185, more generally, may also be used to indicate a status of the wearable audio device 100 to a user. For example, the output module 185 may produce an output, visual or otherwise, corresponding to different modes of the wearable audio device 100, including a power-on mode, a standby mode, a battery status level, among various other indications.
The wearable audio devices 200 may be worn by the user 250, as shown in
In various embodiments, the wearable audio devices 200a and 200b may be communicably coupled. For example, the input devices and output devices of the wearable audio devices 200a and 200b may include communication devices configured to communicably couple the wearable audio devices 200a and 200b.
The wearable audio device 200 may also detect input as described above with respect to
To facilitate the foregoing,
The enclosure 310 may be coupled to the sealing component 322. The sealing component 322 may be fitted or positioned around a side of the enclosure 310. For example, the enclosure 310 may define a speaker opening, and the sealing component 322 may be positioned around this opening. The sealing component 322 may be configured to engage a structure, such as the ear 360 of the user 350. For example, the sealing component 322 may be positioned in an opening of a structure, such as the ear canal of the ear 360. The sealing component 322 may include a conformable surface 324 that may be pressed into the opening of the structure and engage with the structure at the opening, such as the ear surface 364. In some cases, the sealing component 322 being positioned in the opening may form or define a substantially sealed interior volume within the structure. The sealed interior volume may be substantially vacant. In one embodiment, the substantially sealed interior volume is formed by the sealing component 322 and the ear canal of the user.
In some embodiments, the sealing component couples one or more components of the wearable audio device with the interior volume and/or the structure. For example, the ear canal of the user 350 or another portion of the user's body may be coupled to input devices and/or output devices of the wearable audio device 300, as shown and described below with respect to
The sealing component 322 may be formed from a variety of materials, including elastically deformable materials, such as silicone, rubber, nylon, and various other synthetic or composite materials. The sealing component 322 may, in some embodiments, be removable from the enclosure 310 by the user 350, therefore allowing the user 350 to interchange various different sealing components with the enclosure 310 of the wearable audio device 300 based on user-customizable preferences. In some embodiments, the sealing component 322 is integrated with the enclosure 310, meaning that the sealing component 322 is a part of the enclosure 310 and/or the sealing component 322 and the enclosure 310 form a common structure.
As described herein, the sealing component 322 may be used to form a sealed passage between various internal components of the wearable audio device 300 and the ear canal 384 of the user 350.
As described above, the input devices of the wearable audio device detect inputs from users, other devices, and other sources. In various embodiments, inputs are detected without a user directly interacting with (e.g., physically touching) the wearable audio device. Inputs may not require any action by a user, but may instead be initiated by other devices or other sources. In various embodiments described herein, users may perform input actions to provide inputs to the wearable audio device. As used herein, an “input action” refers to any action, condition, or the like that can be detected by a wearable audio device and interpreted by the wearable audio device as an input. In various embodiments, one or more input actions may correspond to inputs at the wearable audio device.
In some embodiments, users perform input actions by interacting with the structure in which the wearable audio device is disposed. In some embodiments, a user may contact (e.g., tap, swipe, press, or otherwise contact) the structure. For example, the user may contact an exterior surface of his or her body, such as the skin on his or her face. Further examples of input actions include a user clicking his or her teeth together or clicking his or her tongue. Still further examples include producing vocal sounds, subvocalizations, or other sounds. “Subvocalizations,” as used herein, refers to vocal sounds below the level at which humans can typically hear, around 0 decibels. Input actions may further include a user moving a body part, such as moving (e.g., shaking) his or her head, moving his or her hands, arms, legs, and so on. Input actions are not intended to be limited to the user interacting with his or her own body. For example, input actions may include a user contacting or otherwise interacting with another object, such as an inanimate object or another person.
In some embodiments, different input actions correspond to different inputs at the wearable audio device 300. An input action may be a force exerted on a particular part or location of a structure (such as a human body), for a particular time, and/or in a particular direction. Put another way, the input may be a gesture performed on a body part, such as the head, cheek, chin, forehead, and so on; this gesture may be detected by the wearable audio device 300 and used to adjust an output, operating condition, or the like of the device.
For example, a user may swipe on his or her body to provide one type of input and tap on his or her body to provide another type of input. Continuing the example, a user may swipe on his or her body to control a volume of an audio output of the wearable device and/or the user may tap on his or her body to start or pause the audio output. The input actions may have a directional component that corresponds to different inputs. For example, the user may swipe up on his or her body to increase a volume of an audio output of the wearable device, swipe down to decrease the volume, swipe right to advance an audio track, and/or swipe left to repeat or change to a previous audio track. The input actions may further include a location component that corresponds to different inputs. For example, a user may tap on a left side of his or her head to pause an audio output and tap on a right side of his or her head to advance an audio track of the audio output.
In various embodiments, input actions cause or produce one or more signals to propagate through or within a human body (e.g., the user's body), outside the human body, or some combination thereof. For example, performing input actions may cause an acoustic, vibration, or other type of signal to propagate through or within the user's body. Similarly, a user performing input actions may cause an optical, image, acoustic, or other type of signal to propagate outside the user's body. As described above, the wearable audio devices described herein may be positioned in a structure besides a human ear or human body. For example, the wearable audio device may be positioned in an opening in a structure and may form a substantially sealed volume within the structure. Input actions may cause or produce one or more signals to propagate through or within the structure, outside the structure, or some combination thereof. The embodiments described herein with respect to a user's body as the structure are applicable to other types of structures as well.
The wearable audio devices described herein may detect input actions in a variety of ways. One or more input devices of a wearable audio device may detect input actions by detecting the signals produced by input actions. For example, an input device such as a camera or microphone may receive a signal outside of a user's body that was generated by an input action. As another example, an input device such as a microphone or a vibration sensor coupled to a user's ear canal, may receive a signal through or within the user's body that was generated by an input action. As used herein, a signal detected through or within a user's body refers to a signal that is propagating or has propagated through or within the user's body at the time it is detected. Detecting input actions is discussed in more detail below with respect to
As described above, an output or function of the wearable audio device may be changed in response to detecting signals that correspond to input actions. For example, an audio output of the wearable audio device may change from a first mode to a second mode in response to detecting a signal that was generated by an input action.
The wearable audio device 500 includes functionality for detecting input actions. The wearable audio device 500 includes one or more input devices as discussed above with respect to
Turning to
The sealed passage 630 may couple the ear canal 684 with one or more through-body input devices of the wearable audio device 600A. For example,
The through-body input device 690A may receive signals from the ear canal 684 and/or other parts of the user's body. For example, the through-body input device 690A may receive signals that propagate through or within the ear canal 684, one or more other parts of the user's body, or some combination thereof. In various embodiments, the signals are inputs for the through-body input device 690A. In some embodiments, the through-body input device 690A includes a microphone that is configured to detect acoustic signals propagating through or within the ear canal 684, one or more other parts of the user's body, or some combination thereof.
For example, an input action corresponding to a user contacting his or her own body may cause an acoustic signal to propagate through or within the user's body, and a microphone coupled to the user's ear canal 684 or otherwise coupled to the user's body may detect this acoustic signal. The processing unit 612 of the wearable audio device 600A (or another processing unit of an associated electronic device) may analyze or otherwise process this signal to determine that it was generated by an input action. In other embodiments, the input device includes a vibration sensor and is configured to detect vibration signals propagating through or within the user's body, the ear canal 684, or some combination thereof. In still other embodiments, the through-body input device 690A includes another type of input device, such as those discussed above with respect to
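For illustration, one simple heuristic a processing unit could apply to a frame of through-body audio is to look for a short, energetic burst concentrated at low frequencies, which is characteristic of a tap. The thresholds below are assumptions, and a real implementation might instead use fingerprinting or a trained model as noted elsewhere in this description:

```python
# Assumed tap heuristic for a through-body audio frame: enough energy, with
# most of it concentrated below a few hundred hertz.
import numpy as np

def looks_like_tap(frame: np.ndarray, sample_rate: int,
                   energy_thresh: float = 0.01,
                   max_band_hz: float = 400.0) -> bool:
    energy = float(np.mean(frame ** 2))
    if energy < energy_thresh:              # too quiet to be a contact event
        return False
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
    low_band = spectrum[freqs <= max_band_hz].sum()
    return low_band > 0.6 * spectrum.sum()  # energy concentrated in low band
```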
The wearable audio device 600A may further include one or more external input devices for receiving signals outside the user's body. For example, external input device 691A is positioned in the enclosure 310 and is configured to detect signals propagating outside the user's body. In some embodiments, the external input device 691A includes a microphone that is configured to detect acoustic signals propagating outside the user's body. In some embodiments, the external input device 691A includes a camera that is configured to capture images external to a user's body. For example, a camera may be positioned such that it can capture images of the user contacting his or her body, such as contacting his or her head as shown in
In some embodiments, the external input device 691A includes an optical sensor that is configured to detect optical signals. In still other embodiments, the external input device 691A includes another type of input device, such as those discussed above with respect to
Additionally or alternatively, the sealed passage 630 may allow the wearable audio device 600A to transmit audio output into the ear canal 684, while mitigating or preventing the audio output from being released into a surrounding environment, as discussed above with respect to
The sealed passage 632 may couple the user's body (e.g., a structure) with one or more input devices of the wearable audio device 600B. For example,
The input device 690D is positioned such that it is coupled to a portion 686 of the body of the user. In some embodiments, the input device 690B is directly coupled to the body of the user such that the input device may detect signals propagating through the body. In various embodiments, the input device 690D is coupled to the ear canal 684 through or within the user's body. The input device 690D may receive signals from the ear canal 684 and/or other parts of the user's body. The input devices 690C and 690D may be operably coupled to additional components of the wearable audio device 600C by a wired or wireless connector, for example.
In another embodiment, the input device(s) 690C, 690D may include multiple components disposed at multiple locations of the wearable audio device 600. For example, an input device may include a sensing element disposed at a distal end of the sealing component (similar to the input devices 690C and 690D of
In some instances, an input action may cause multiple signals to propagate through or within the user's body, outside the user's body, and/or some combination thereof. In some embodiments, an input device may detect multiple signals generated by an input action. For example, a through-body input device may detect multiple vibration or acoustic signals generated by an input action. As another example, an external input device may detect multiple acoustic, image, or optical signals generated by an input action. In embodiments with multiple input devices, each input device may detect all or a subset of the signals generated by an input action. For example, an external input device may detect one or more acoustic, image, and/or optical signals generated by an input action and a through-body input device may detect one or more acoustic or vibration signals corresponding to the same input action.
In some embodiments, multiple input devices from a wearable audio device detect one or more signals generated by an input action. For example, multiple input devices that include any combination of through-body input devices and/or external input devices may detect a signal caused by an input action.
In some embodiments, multiple input devices disposed in two or more wearable audio devices may detect one or more signals generated by an input action. For example, a user may have a first wearable audio device disposed in a first ear and a second wearable audio device disposed in a second ear (e.g., wearable audio devices 200a and 200b of
In various embodiments in which multiple input devices detect one or more signals generated by an input action and/or a single input device detects multiple signals generated by an input action, a processing unit of the wearable audio device (e.g., processing unit 150 of
Multiple detected signals and/or a signal detected by multiple input devices may be used to determine additional information about an input action. In some embodiments, the processing unit may determine an estimated location of an input action by analyzing differences between a signal received at multiple input devices. For example, the processing unit may determine a time delay between a time a signal is received at a first input device and a time the signal is received at a second input device and determine an estimated location of the input action based on the time delay. In embodiments in which signals detected by multiple wearable audio devices are processed, the signals may be processed at one or more of the multiple wearable audio devices and/or a connected device, as discussed in more detail below with respect to
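A minimal sketch of the time-delay estimate described above, using cross-correlation between the two earbuds' through-body recordings; the function names are placeholders, and the recordings are assumed to be synchronized and sampled at the same rate:

```python
# Hedged sketch: infer which side of the head a tap occurred on from the
# inter-device time delay, estimated by cross-correlating the two recordings.
import numpy as np

def side_of_head(left: np.ndarray, right: np.ndarray) -> str:
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    # Negative lag: the left recording leads, i.e., the signal reached the
    # left earbud first, suggesting the input action was on the left side.
    if lag == 0:
        return "center"
    return "left" if lag < 0 else "right"
```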
At operation 702, the input device detects a signal propagating through or within the structure. At operation 704, the input device generates a detection output in response to detecting the signal. The processing unit receives the detection output. At operation 706, the processing unit determines, based on the detection output, whether the signal was generated by an input action at the structure. Determining whether the signal was generated by an input action may include analyzing the detection output in a variety of ways, including signal recognition (e.g., audio fingerprinting, image recognition, machine learning, and so on).
If the processing unit determines that the signal was generated by an input action at the structure, the process proceeds to operation 708. If the processing unit does not determine that the signal was generated by an input action at the structure, the process may return to operation 702. In some embodiments, if the processing unit cannot determine whether the signal was generated by an input action at the structure or not, the processing unit may analyze one or more additional signals to make a determination. For example, the processing unit may analyze one or more detection outputs from other input devices and/or one or more different signals from the same input device to determine whether the detection outputs correspond to signals generated by an input action.
At operation 708, the processing unit changes the operation of the wearable audio device in accordance with the input action. Changing the operation of the wearable audio device may include modifying, initiating, and/or ceasing one or more outputs of the wearable audio device, executing functions on the wearable audio device, or the like. For example, an audio output of the wearable audio device may change from a first mode to a second mode in response to detecting a signal that was generated by an input action. Modes of the audio output may correspond to different types of audio output (e.g., phone calls, music, and the like), different tracks (e.g., songs), different volume levels, and the like.
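Putting operations 702 through 708 together, a schematic event loop might look like the following; every method name here is a placeholder standing in for the device behavior described above, not an actual interface from the embodiments:

```python
# Schematic loop mirroring operations 702-708. All object interfaces are
# assumed placeholders for the behavior described in the text.

def run(input_device, processing_unit, output_device) -> None:
    while True:
        signal = input_device.detect()                          # operation 702
        detection = input_device.make_detection_output(signal)  # operation 704
        action = processing_unit.classify(detection)            # operation 706
        if action is None:
            continue  # not generated by an input action; keep listening
        output_device.apply(action)                             # operation 708
```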
As discussed above, different input actions may correspond to different changes at the wearable audio device. The mappings of input actions to changes may be stored at the wearable audio device and/or a companion device and may be user-editable.
At operation 810, the processing unit(s) determine, based on the first and/or second detection outputs, that the signal was generated by an input action on an external surface of the structure. At operation 812, the processing unit(s) determine, based on the first and second detection outputs, an estimated location of the input action. For example, if the input action is a tap on a user's face, the processing unit(s) may determine an approximate location of the tap. In some embodiments, the processing unit may determine an estimated location of an input action by analyzing differences between a signal received at multiple input devices. For example, the processing unit may determine a time delay between a time a signal is received at a first input device and a time the signal is received at a second input device and determine an estimated location of the input action based on the time delay. In some embodiments, determining the location includes determining the side of the head on which the input action occurred. Input actions at different locations may correspond to different operations at the wearable audio devices.
In embodiments in which one or more signals detected by multiple wearable audio devices are processed, the signals may be processed at one or more of the multiple wearable audio devices and/or a connected device. As described above, the wearable audio devices may include a communication device configured to communicate with one or more additional devices, including other wearable audio devices. In one embodiment, the signals (or outputs from components of the wearable audio device that correspond to the signals) are transmitted to a processing unit of one of the multiple wearable audio devices or another device. The processing unit may analyze the signals or outputs and determine whether they correspond to an input action and/or whether outputs and/or functions of any of the wearable audio devices should be changed or executed in response to the signal or output. The device where the processing occurs may transmit instructions to the other wearable audio devices to adjust their output or function.
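A sketch, under assumed message names and an assumed transport object, of the coordination just described: each earbud forwards its detection outputs to whichever device hosts the processing, and that device replies with adjustment instructions:

```python
# Hypothetical message flow between paired wearable audio devices. The JSON
# message shapes and the `link` transport object are invented for illustration.
import json

def forward_detection(link, device_id: str, detection_output: dict) -> None:
    link.send(json.dumps({"type": "detection", "from": device_id,
                          "payload": detection_output}))

def on_message(link, message: str, processing_unit) -> None:
    msg = json.loads(message)
    if msg["type"] == "detection":
        action = processing_unit.classify(msg["payload"])
        if action is not None:
            # Instruct the other device(s) to adjust their output or function.
            link.send(json.dumps({"type": "adjust", "action": action}))
```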
As shown in
The memory 912 may include a variety of types of non-transitory computer-readable storage media, including, for example, random-access memory (RAM), read-only memory (ROM), erasable programmable memory (e.g., EPROM and EEPROM), or flash memory. The memory 912 is configured to store computer-readable instructions, sensor values, and other persistent software elements. Computer-readable media 916 may also include a variety of types of non-transitory computer-readable storage media, including, for example, a hard-drive storage device, a solid-state storage device, a portable magnetic storage device, or other similar device. The computer-readable media 916 may also be configured to store computer-readable instructions, sensor values, and other persistent software elements.
In this example, the processing unit 908 is operable to read computer-readable instructions stored on the memory 912 and/or computer-readable media 916. The computer-readable instructions may adapt the processing unit 908 to perform the operations or functions described above with respect to
As shown in
The electronic device 900 may also include a battery 924 that is configured to provide electrical power to the components of the electronic device 900. The battery 924 may include one or more power storage cells that are linked together to provide an internal supply of electrical power. In this regard, the battery 924 may be a component of a power source 928 (e.g., including a charging system or other circuitry that supplies electrical power to components of the electronic device 900). The battery 924 may be operatively coupled to power management circuitry that is configured to provide appropriate voltage and power levels for individual components or groups of components within the electronic device 900. The battery 924, via power management circuitry, may be configured to receive power from an external source, such as an AC power outlet or interconnected computing device. The battery 924 may store received power so that the electronic device 900 may operate without connection to an external power source for an extended period of time, which may range from several hours to several days.
The electronic device 900 may also include one or more sensors 940 that may be used to detect a touch and/or force input, environmental condition, orientation, position, or some other aspect of the electronic device 900. For example, sensors 940 that may be included in the electronic device 900 may include, without limitation, one or more accelerometers, gyrometers, inclinometers, or magnetometers. The sensors 940 may also include one or more proximity sensors, such as a magnetic Hall-effect sensor, inductive sensor, capacitive sensor, continuity sensor, or the like.
The sensors 940 may also be broadly defined to include wireless positioning devices including, without limitation, global positioning system (GPS) circuitry, Wi-Fi circuitry, cellular communication circuitry, and the like. The electronic device 900 may also include one or more optical sensors, including, without limitation, photodetectors, photo sensors, image sensors, infrared sensors, or the like. In one example, the sensor 940 may be an image sensor that detects a degree to which an ambient image matches a stored image. As such, the sensor 940 may be used to identify a user of the electronic device 900. The sensors 940 may also include one or more acoustic elements, such as a microphone used alone or in combination with a speaker element. The sensors 940 may also include a temperature sensor, barometer, pressure sensor, altimeter, moisture sensor or other similar environmental sensor. The sensors 940 may also include a light sensor that detects an ambient light condition of the electronic device 900.
The sensor 940, either alone or in combination, may generally be a motion sensor that is configured to estimate an orientation, position, and/or movement of the electronic device 900. For example, the sensor 940 may include one or more motion sensors, including, for example, one or more accelerometers, gyrometers, magnetometers, optical sensors, or the like to detect motion. The sensors 940 may also be configured to estimate one or more environmental conditions, such as temperature, air pressure, humidity, and so on. The sensors 940, either alone or in combination with other input, may be configured to estimate a property of a supporting surface, including, without limitation, a material property, surface property, friction property, or the like.
The electronic device 900 may also include a camera 932 that is configured to capture a digital image or other optical data. The camera 932 may include a charge-coupled device, complementary metal-oxide-semiconductor (CMOS) device, or other device configured to convert light into electrical signals. The camera 932 may also include one or more light sources, such as a strobe, flash, or other light-emitting device. As discussed above, the camera 932 may be generally categorized as a sensor for detecting optical conditions and/or objects in the proximity of the electronic device 900. However, the camera 932 may also be used to create photorealistic images that may be stored in an electronic format, such as JPG, GIF, TIFF, PNG, raw image file, or other similar file types.
The electronic device 900 may also include a communication port 944 that is configured to transmit and/or receive signals or electrical communication from an external or separate device. The communication port 944 may be configured to couple to an external device via a cable, adaptor, or other type of electrical connector. In some embodiments, the communication port 944 may be used to couple the electronic device 900 with a computing device and/or other appropriate accessories configured to send and/or receive electrical signals. The communication port 944 may be configured to receive identifying information from an external accessory, which may be used to determine a mounting or support configuration. For example, the communication port 944 may be used to determine that the electronic device 900 is coupled to a mounting accessory, such as a particular type of stand or support structure.
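The identification exchange described above could take many forms. As a sketch only, the snippet below parses a hypothetical accessory identification frame and maps its type code to a mounting configuration; the wire format, type codes, and names are all invented for illustration.

```c
#include <stdint.h>

/* Hypothetical accessory identification frame received over the
   communication port; this layout is invented for illustration. */
typedef struct {
    uint16_t vendor_id;
    uint16_t accessory_type;  /* e.g., 0x0001 = stand, 0x0002 = clip mount */
} accessory_id;

typedef enum { MOUNT_NONE, MOUNT_STAND, MOUNT_CLIP } mount_config;

/* Map the reported accessory type to a mounting/support configuration. */
mount_config mount_from_accessory(const accessory_id *id)
{
    switch (id->accessory_type) {
    case 0x0001: return MOUNT_STAND;
    case 0x0002: return MOUNT_CLIP;
    default:     return MOUNT_NONE;
    }
}
```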
Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Further, the term “exemplary” does not mean that the described example is preferred or better than other examples.
The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.
This application is a non-provisional patent application of and claims the benefit of U.S. Provisional Patent Application No. 62/683,571, filed Jun. 11, 2018 and titled “Detecting Through-Body Inputs at a Wearable Audio Device,” the disclosure of which is hereby incorporated herein by reference in its entirety.