This disclosure generally relates to wearable audio devices. More particularly, the disclosure relates to demonstrating capabilities of wearable audio devices.
Modern wearable audio devices include various capabilities that can enhance the user experience. However, many of these capabilities go unrealized or under-utilized by the user or potential user due to inexperience with the device functions and/or lack of knowledge of the device capabilities.
All examples and features mentioned below can be combined in any technically possible way.
Various implementations include approaches for demonstrating wearable audio device capabilities. In certain cases, these approaches include initiating a demonstration to provide a user with an example of the wearable audio device capabilities.
In some particular aspects, a computer-implemented method of demonstrating a feature of a wearable audio device includes: receiving a command to initiate an audio demonstration mode at a demonstration device; initiating binaural playback of a demonstration audio file at a wearable playback device being worn by a user and initiating playback of a corresponding demonstration video file at a video interface coupled with the demonstration device; receiving a user command to adjust a demonstration setting to emulate adjustment of a corresponding setting on the wearable audio device; and adjusting the binaural playback at the wearable playback device based upon the user command.
In additional particular aspects, a demonstration device includes: a command interface for receiving a user command; a video interface for providing a video output; and a control circuit coupled with the command interface and the video interface, the control circuit configured to demonstrate a feature of a wearable audio device by performing actions including: receiving a command, at the command interface, to initiate an audio demonstration mode; initiating binaural playback of a demonstration audio file at a transducer on a connected wearable playback device, and initiating playback of a corresponding demonstration video file at the video interface; receiving a user command, at the command interface, to adjust a demonstration setting to emulate adjustment of a corresponding setting on the wearable audio device; and adjusting the binaural playback at the transducer on the wearable playback device based upon the user command.
Implementations may include one of the following features, or any combination thereof.
In particular aspects, the method further includes: receiving data about an environmental condition proximate the demonstration device; comparing the data about the environmental condition with an environmental condition threshold corresponding with a clarity level for the feature to be demonstrated; and recommending that the user modify the environmental condition in response to the data about the environmental condition deviating from the environmental condition threshold.
In some cases, the method further includes: receiving data about an environmental condition proximate the demonstration device; comparing the data about the environmental condition with an environmental condition threshold corresponding with a clarity level for the feature to be demonstrated; and either recommending that the user adjust an active noise reduction (ANR) feature in response to the data about the environmental condition deviating from the environmental condition threshold, or automatically adjusting the ANR feature in response to the data about the environmental condition deviating from the environmental condition threshold.
In particular implementations, the wearable playback device is the wearable audio device.
In some cases, the wearable playback device is distinct from the wearable audio device.
In certain aspects, the feature of the wearable audio device includes at least one of: an active noise reduction (ANR) feature, a controllable noise cancellation (CNC) feature, a compressive hear-through feature, a level-dependent noise cancellation feature, a fully aware feature, a music on/off feature, a spatialized audio feature, a directionally focused listening feature, an environmental distraction masking feature, an ambient sound attenuation feature, a playback adjustment feature or a directionally adjusted ambient sound feature.
In some implementations, the method further includes initiating binaural playback of an additional demonstration audio file at the wearable playback device, where adjusting the binaural playback includes mixing the demonstration audio file and the additional demonstration audio file based upon the user command.
In particular implementations, the demonstration audio file includes a binaural recording of a sound environment, and adjusting the binaural playback at the wearable playback device includes applying at least one filter to alter the spectrum of the recording.
In certain aspects, the user command to adjust the demonstration setting is received at a user interface on the demonstration device.
In certain cases, adjusting the binaural playback includes simulating a beamforming process in a microphone array focused on an area of visual focus in the demonstration video file.
In some cases, the demonstration device includes a web browser or a solid state media player, and the binaural playback and demonstration settings for the audio demonstration mode are controlled using the web browser or are remotely controlled using the solid state media player.
In particular implementations, the demonstration device includes a sensor system coupled with the control circuit, where the control circuit is further configured to: receive data about an environmental condition proximate the demonstration device from the sensor system; compare the data about the environmental condition with an environmental condition threshold corresponding with a clarity level for the feature to be demonstrated; and recommend that the user modify the environmental condition in response to the data about the environmental condition deviating from the environmental condition threshold.
In certain cases, the demonstration device includes a sensor system coupled with the control circuit, where the control circuit is further configured to: receive data about an environmental condition proximate the demonstration device from the sensor system; compare the data about the environmental condition with an environmental condition threshold corresponding with a clarity level for the feature to be demonstrated; and either recommend that the user adjust an active noise reduction (ANR) feature in response to the data about the environmental condition deviating from the environmental condition threshold, or automatically adjust the ANR feature in response to the data about the environmental condition deviating from the environmental condition threshold.
Two or more features described in this disclosure, including those described in this summary section, may be combined to form implementations not specifically described herein.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects and advantages will be apparent from the description and drawings, and from the claims.
It is noted that the drawings of the various implementations are not necessarily to scale. The drawings are intended to depict only typical aspects of the disclosure, and therefore should not be considered as limiting the scope of the implementations. In the drawings, like numbering represents like elements between the drawings.
This disclosure is based, at least in part, on the realization that features in a wearable audio device can be beneficially demonstrated to a user. For example, a demonstration device in communication with a wearable playback device can be configured to demonstrate various features of a wearable audio device according to an initiated demonstration mode.
Commonly labeled components in the FIGURES are considered to be substantially equivalent components for the purposes of illustration, and redundant discussion of those components is omitted for clarity.
Conventional communication of the features of a wearable audio device involves reading descriptions about those features, or listening to audio (sometimes combined with video) that simulates a feature, e.g., active cancellation of ambient noise. The user in these scenarios does not interact with the demonstration except to trigger presentation of information.
In contrast, various implementations described herein involve a demonstration device and a wearable playback device that play a combination of binaural audio and video, with controls integrated into or overlaying the video portion of demonstration content. In some examples, the demonstration device includes the video portion of the demonstrated content as well as the integrated or overlaid controls. The controls emulate adjustment of the corresponding setting of a wearable audio device being demonstrated as if it were being used in the real-world setting portrayed in the video. The binaural audio portion of the demonstration content can be presented over a wearable playback device, which may be the wearable audio device being demonstrated or any other wearable audio device (e.g., headphones, earphones or wearable audio system(s)) that the user possesses. The combination of video, accompanying realistic binaural audio and feature-emulating controls that respond to user interaction makes the demonstration more engaging and better communicates the capabilities of the wearable audio device being demonstrated.
In some examples, the setting of the wearable audio device being demonstrated may be a sound management feature, e.g., active noise reduction (ANR), controllable noise cancellation (CNC), an auditory compression feature, a level-dependent cancellation feature, a fully aware feature, a music on/off feature, a spatialized audio feature, a voice pick-up feature, a directionally focused listening feature, an audio augmented reality feature, a virtual reality feature or an environmental distraction masking feature.
The implementations described herein enable a user to experience features of a wearable audio device through a set of interactive controls using any type of hardware available to the user, e.g., any wearable playback device. The implementations described herein could be experienced by a user from his or her home, office, or any location where the user is able to establish a connection (e.g., network connection) between his or her wearable playback device and a demonstration device (e.g., a computer, tablet, smart phone, etc.). The implementations described herein could alternatively be experienced by a user in a store environment, or at a kiosk serving as the demonstration device.
It has become commonplace for those who listen to electronically provided audio (e.g., audio from an audio source such as a mobile phone, tablet, or computer), those who simply seek to be acoustically isolated from unwanted or possibly harmful sounds in a given environment, and those engaging in two-way communications to employ wearable audio devices to perform these functions. For those who employ headphones or headset forms of wearable audio devices to listen to electronically provided audio, it has become commonplace for acoustic isolation to be achieved through the use of active noise reduction (ANR) techniques based on the acoustic output of anti-noise sounds, in addition to passive noise reduction (PNR) techniques based on sound absorbing and/or reflecting materials. More advanced ANR features also include controllable noise canceling (CNC), which permits control of the level of the residual noise after cancellation, for example, by a user. CNC enables a user to select the amount of noise that is passed through the headphones to the user's ear: at one extreme, a user could select full ANR and cancel as much residual noise as possible, while at the other extreme, a user could select full pass-through and hear the ambient noise as if he or she were not wearing headphones at all. In some examples, CNC can permit a user to control the volume of audio output regardless of the ambient acoustic volume. CNC can allow the user to adjust different levels of noise cancellation using one or more interface commands, e.g., by increasing or decreasing noise cancellation across a spectrum.
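For illustration only, the following minimal sketch shows how a normalized CNC setting might map to an ambient pass-through gain. The endpoint gains and the linear-in-dB interpolation are assumptions chosen for clarity, not parameters of any actual device:

```python
# Minimal sketch of a CNC control mapping. The endpoint gains and the
# linear-in-dB interpolation are illustrative assumptions, not parameters
# of any actual wearable audio device.
FULL_ANR_GAIN_DB = -30.0    # assumed residual ambient level at full cancellation
FULL_AWARE_GAIN_DB = 0.0    # ambient passed through as if unoccluded

def cnc_passthrough_gain_db(level: float) -> float:
    """Map a CNC setting in [0.0, 1.0] (0 = full ANR, 1 = fully aware)
    to an ambient pass-through gain in dB."""
    level = min(max(level, 0.0), 1.0)
    return FULL_ANR_GAIN_DB + level * (FULL_AWARE_GAIN_DB - FULL_ANR_GAIN_DB)

print(cnc_passthrough_gain_db(0.0))   # -30.0: maximum cancellation
print(cnc_passthrough_gain_db(0.5))   # -15.0: intermediate setting
print(cnc_passthrough_gain_db(1.0))   #   0.0: full pass-through
```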
Aspects and implementations disclosed herein may be applicable to wearable playback devices that either do or do not support two-way communications, and either do or do not support active noise reduction (ANR). For wearable playback devices that do support either two-way communications or ANR, it is intended that what is disclosed and claimed herein is applicable to a wearable playback device incorporating one or more microphones disposed on a portion of the wearable playback device that remains outside an ear when in use (e.g., feedforward microphones), on a portion that is inserted into a portion of an ear when in use (e.g., feedback microphones), or disposed on both of such portions. Still other implementations of wearable playback devices to which what is disclosed and what is claimed herein is applicable will be apparent to those skilled in the art.
According to various implementations, a demonstration device is used in conjunction with a wearable playback device to demonstrate features of one or more wearable audio devices. These particular implementations can allow a user to experience functions of a wearable audio device that may not be available on the particular (wearable) playback device that the user possesses, or that are available on that (wearable) playback device but may otherwise go unnoticed or under-utilized. These implementations can enhance the user experience in comparison to conventional approaches.
As described herein, the term “playback device” can refer to any audio device capable of providing binaural playback of demonstration audio to a user. In many example implementations, a playback device is a wearable audio device such as headphones, earphones, audio glasses, body-worn speakers, open-ear audio devices, etc. In various particular cases, the playback device is a set of headphones. The playback device can be used to demonstrate features available on that device, or, in various particular cases, can be used to demonstrate features available on a distinct wearable audio device. That is, the “playback device” (or “wearable playback device”) is the device worn or otherwise employed by the user to hear demonstration audio during a demonstration process. As described herein, the “wearable audio device” is the subject device, the features of which are being demonstrated on the playback device. In some cases, the wearable audio device and the playback device are the same device. However, in many example implementations, the playback device is a distinct device from the wearable audio device having features that are being demonstrated. The term “demonstration device” can refer to any device with processing capability and connectivity to perform functions of an audio demonstration engine, described further herein. In various implementations, the demonstration device can include a computing device having a processor, memory and a communications system (e.g., a network interface or other similar interface) for communicating with the playback device. In certain implementations, the demonstration device includes a smart phone, tablet, PC, solid state machine, etc.
As described further herein, the demonstration device 20 is configured to run an audio demonstration engine to manage demonstration functions according to various implementations. This audio demonstration engine 210 is illustrated in the accompanying system diagram.
To initiate playback of the demonstration video and audio files, a user may interact with controls 90 on the demonstration device 20 (e.g., by touching the controls or actuating a linked device such as a mouse, stylus, etc.). In particular cases, the controls 90 are presented on the video interface 50 or proximate the video interface 50 at the demonstration device 20, e.g., for ease of manipulation. In other cases, for example, where the playback device 30 is the same device (or same device type) as the wearable audio device 40, a user may initiate playback of the demonstration video and audio files using controls (e.g., similar to controls 90) on the playback device 30, or a distinct connected device, such as a smart phone running an application with a user interface. These cases may be beneficial where the user 10 has downloaded or otherwise activated a feature for use on the wearable audio device 40, and wishes to control demonstration of that feature with controls on the playback device 30.
When a new mode is selected from the user interface controls 120, the binaural audio is adjusted at the demonstration device 20 to simulate for the user 10, at the playback device 30, what that mode would sound like in the environment depicted in the corresponding video. For example, selecting Aware mode simulates what Aware mode would sound like in the depicted environment; selecting Cancel mode simulates what full ANR would sound like; and selecting Smart Aware mode simulates what Smart ANR would sound like.
In response to a user command for Music On/Play (Music control 160), the audio demonstration engine 210 can initiate playback of music content at the playback device 30 (e.g., mixed with the demonstration audio) to emulate the music on/off feature of the wearable audio device 40.
In implementations that include ANR (which may include CNC), the inner microphone 176 may be a feedback microphone and the outer microphone 182 may be a feedforward microphone. In such implementations, each earphone 170 includes an ANR circuit 184 that is in communication with the inner and outer microphones 176 and 182. The ANR circuit 184 receives an inner signal generated by the inner microphone 176 and an outer signal generated by the outer microphone 182, and performs an ANR process for the corresponding earpiece 170. The process includes providing a signal to an electroacoustic transducer (e.g., speaker) 186 disposed in the cavity 174 to generate an anti-noise acoustic signal that reduces or substantially prevents sound from one or more acoustic noise sources that are external to the earphone 170 from being heard by the user. As described herein, in addition to providing an anti-noise acoustic signal, electroacoustic transducer 186 can utilize its sound-radiating surface for providing an audio output for playback.
In implementations of a playback device 30 that include an ANR circuit 184, the corresponding ANR circuit 184A,B is in communication with the inner microphones 176, outer microphones 182, and electroacoustic transducers 186, and receives the inner and/or outer microphone signals. In certain examples, the ANR circuit 184A,B includes a microcontroller or processor having a digital signal processor (DSP) and the inner signals from the two inner microphones 176 and/or the outer signals from the two outer microphones 182 are converted to digital format by analog to digital converters. In response to the received inner and/or outer microphone signals, the ANR circuit 184 can communicate with a control circuit 188 to initiate various actions. For example, audio playback may be initiated, paused or resumed, a notification to a wearer may be provided or altered, and a device in communication with the wearable audio device may be controlled. In implementations of the playback device 30 that do not include an ANR circuit 184, the microcontroller or processor (e.g., including a DSP) can reside within the control circuit 188 and perform associated functions described herein.
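As a rough illustration of the feedforward signal path described above, the sketch below filters the outer-microphone signal through an assumed estimate of the acoustic path into the ear and inverts it to form an anti-noise signal. Real ANR systems use carefully tuned, often adaptive, low-latency filters; the fixed three-tap filters here are placeholders:

```python
import numpy as np
from scipy.signal import lfilter

# Toy feedforward ANR sketch: the outer (feedforward) microphone signal is
# filtered through an estimate of the acoustic path into the ear and inverted
# to form an anti-noise drive for the transducer. The fixed filters are
# placeholders; real systems use tuned, often adaptive, low-latency filters.
path_estimate = np.array([0.6, 0.30, 0.10])  # assumed model of the ear path
true_path = np.array([0.6, 0.32, 0.08])      # "actual" path, slightly different

def anti_noise(outer_mic_block: np.ndarray) -> np.ndarray:
    """Return the anti-noise signal for one block of outer-mic samples."""
    return -lfilter(path_estimate, [1.0], outer_mic_block)

noise = np.random.randn(1024)
at_ear = lfilter(true_path, [1.0], noise)    # noise reaching the eardrum
residual = at_ear + anti_noise(noise)        # what the wearer hears
print(f"attenuation: {10 * np.log10(np.mean(residual**2) / np.mean(at_ear**2)):.1f} dB")
```

Because the assumed path estimate differs slightly from the "true" path in this example, the residual is reduced but not eliminated, mirroring the limits of real-world cancellation.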
The playback device 30 may also include a power source 190. The control circuit 188 and power source 190 may be in one or both of the earpieces 170 or may be in a separate housing in communication with the earpieces 170. The playback device 30 may also include a network interface 192 to provide communication between the playback device 30 and one or more audio sources and other audio devices (including the demonstration device 20, as described herein). The network interface 192 may be wired (e.g., Ethernet) or wireless (e.g., employ a wireless communication protocol such as IEEE 802.11, Bluetooth, Bluetooth Low Energy (BLE), or other local area network (LAN) or personal area network (PAN) protocols).
Network interface 192 is shown in phantom, as portions of the interface 192 may be located remotely from playback device 30. The network interface 192 can provide for communication between the playback device 30, audio sources and/or other networked (e.g., wireless) speaker packages and/or other audio playback devices via one or more communications protocols. The network interface 192 may provide either or both of a wireless interface and a wired interface. The wireless interface can allow the playback device 30 to communicate wirelessly with other devices in accordance with any communication protocol noted herein.
In some cases, the network interface 192 may also include a network media processor for supporting, e.g., Apple AirPlay® (a proprietary protocol stack/suite developed by Apple Inc., with headquarters in Cupertino, Calif., that allows wireless streaming of audio, video, and photos, together with related metadata, between devices), other known wireless streaming services (e.g., an Internet music service such as Pandora®, a radio station provided by Pandora Media, Inc. of Oakland, Calif., USA; Spotify®, provided by Spotify USA, Inc. of New York, N.Y., USA; or vTuner®, provided by vTuner.com of New York, N.Y., USA), and network-attached storage (NAS) devices. As noted herein, in some cases, control circuit 188 can include a processor and/or microcontroller, which can include decoders, DSP hardware/software, etc. for playing back (rendering) audio content at electroacoustic transducers 186. In some cases, network interface 192 can also include Bluetooth circuitry for Bluetooth applications (e.g., for wireless communication with a Bluetooth enabled audio source such as a smartphone or tablet). In operation, streamed data can pass from the network interface 192 to the control circuit 188, including the processor or microcontroller. The control circuit 188 can execute instructions (e.g., for performing, among other things, digital signal processing, decoding, and equalization functions), including instructions stored in a corresponding memory (which may be internal to control circuit 188 or accessible via network interface 192) or other network connection (e.g., cloud-based connection). The control circuit 188 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The control circuit 188 may provide, for example, for coordination of other components of the playback device 30, such as control of user interfaces (not shown) and applications run by the playback device 30.
In implementations of the playback device 30 having an ANR circuit 184, that ANR circuit 184 can also include one or more digital-to-analog (D/A) converters for converting the digital audio signal to an analog audio signal. This audio hardware can also include one or more amplifiers which provide amplified analog audio signals to the electroacoustic transducer(s) 186, which each include a sound-radiating surface for providing an audio output for playback. In addition, the audio hardware may include circuitry for processing analog input signals to provide digital audio signals for sharing with other devices. However, in additional implementations of the playback device 30 that do not include an ANR circuit 184, these D/A converters, amplifiers and associated circuitry can be located in the control circuit 188.
The memory in control circuit 188 can include, for example, flash memory and/or non-volatile random access memory (NVRAM). In some implementations, instructions (e.g., software) are stored in an information carrier. The instructions, when executed by one or more processing devices (e.g., the processor or microcontroller in control circuit 188), perform one or more processes, such as those described elsewhere herein. The instructions can also be stored by one or more storage devices, such as one or more (e.g. non-transitory) computer- or machine-readable mediums (for example, the memory, or memory on the processor/microcontroller). It is understood that portions of the control circuit 188 (e.g., instructions) could also be stored in a remote location or in a distributed location, and could be fetched or otherwise obtained by the control circuit 188 (e.g., via any communications protocol described herein) for execution.
One or more portions of the audio demonstration engine 210 (e.g., software code and/or logic infrastructure) can be stored on or otherwise accessible to one or more demonstration devices 20, which may be connected with the playback device 30 by any communications connection described herein (e.g., via wireless or hard-wired connection). In various additional implementations, the demonstration device(s) 20 can include a separate audio playback device, such as a conventional speaker, and can include a network interface or other communications module for communicating with the playback device 30 or another device in a network or a communications range.
Audio demonstration engine 210 can be coupled (e.g., wirelessly and/or via hardwired connections) with a library 240, which can include demonstration audio files 250 and demonstration video files 265 for playback (e.g., streaming) at the (wearable) playback device 30 and/or another audio playback device, e.g., speakers 110 on the demonstration device 20.
Library 240 can also be associated with digital audio sources accessible via network interface 192.
Additionally, the demonstration audio file(s) 250 can include multiple sets of audio files, with each set representing a different combination of the demonstration settings available in the controls 90.
In some particular implementations, the demonstration video file(s) 265 can include a video file (or be paired with a corresponding audio file) for playback on an interface (e.g., video interface 50 on demonstration device 20).
Audio demonstration engine 210 can also be coupled with a settings library 260 containing demonstration mode settings 270 for one or more audio demonstration modes. The demonstration mode settings 270 are associated with operating modes of a model of the device being used for audio playback during the demonstration (i.e., the wearable audio device 40).
For example, the demonstration mode settings 270 can dictate how the device being demonstrated would play back audio to the user if the user were wearing that device, by simulating how playback would sound on the wearable audio device 40. That is, the demonstration mode settings 270 can indicate values (and adjustments) for one or more control settings (e.g., acoustic settings) on the playback device 30 to provide particular wearable audio device features that affect what the user would hear were they actually using the device being demonstrated in the situation portrayed in the demonstration. These demonstration mode settings 270 can be used to demonstrate wearable audio device features by manipulating and mixing a user's ability to hear ambient sound surrounding the user and their streamed audio content. These wearable audio device features can include at least one of: a compressive hear-through feature (also referred to as automatic, level-dependent dynamic noise cancellation, e.g., as described in U.S. patent application Ser. No. 16/124,056, filed Sep. 6, 2018, titled “Compressive Hear-through in Personal Acoustic Devices”, which is herein incorporated by reference in its entirety), a fully aware feature (e.g., reproduction of the external environment without any noise reduction/cancellation), a music on/off feature, a spatialized audio feature, a directionally focused listening feature, an active noise reduction (ANR) feature, a controllable noise cancellation (CNC) feature, a level-dependent cancellation feature, an environmental distraction masking feature, an ambient sound attenuation feature, a playback adjustment feature (e.g., adjusting an audio stream) or a directionally adjusted ambient sound feature (e.g., varying based upon noise in the environment, presence and/or directionality of voices in the environment, user motion such as head motion, whether the user is speaking, and/or proximity to objects in the environment). Settings controlled to achieve these features can include one or more of: a filter to control the amount of noise reduction or hear-through provided to the user, a directivity of a microphone array in the (wearable) playback device 30, a microphone array filter on the microphone array in the (wearable) playback device 30, a volume of audio provided to the user at the (wearable) playback device 30, a number of microphones used in the microphone array at the device being demonstrated (e.g., wearable audio device 40), ANR or awareness settings, or processing applied to one or more microphone inputs.
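One way to picture the demonstration mode settings 270 is as a per-mode table of acoustic parameters that the audio demonstration engine applies to the playback path. The sketch below is hypothetical; the field names, mode names, and numeric values are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of demonstration mode settings 270. The field names,
# mode names, and numeric values are illustrative assumptions only.
@dataclass(frozen=True)
class DemoModeSettings:
    passthrough_gain_db: float                # ambient hear-through level
    anr_filtering: bool                       # whether cancellation filtering is applied
    compressor_threshold_db: Optional[float]  # compressive hear-through knee, if any
    playback_volume_db: float                 # level of any streamed content

DEMO_MODE_SETTINGS = {
    "aware":       DemoModeSettings(0.0,   False, None, -12.0),
    "cancel":      DemoModeSettings(-30.0, True,  None, -12.0),
    "smart_aware": DemoModeSettings(0.0,   True,  -6.0, -12.0),
}

def settings_for(mode: str) -> DemoModeSettings:
    """Look up the acoustic parameters the engine applies for a demo mode."""
    return DEMO_MODE_SETTINGS[mode]
```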
As noted herein, demonstration device 20 can be used to initiate binaural playback of the demonstration audio file 250 at the playback device 30 and the corresponding demonstration video file 265 at a video interface (e.g., video interface 50 at the demonstration device 20) to demonstrate functions of the playback device or a distinct wearable audio device to the user. As noted herein, it is understood that one or more functions of the audio demonstration engine 210 can be stored, accessed and/or executed at demonstration device 20.
Audio demonstration engine 210 may also be configured to receive sensor data from a sensor system located at the demonstration device 20, at the wearable playback device 30 or connected with the demonstration device 20 via another device connection (e.g., a wireless connection with another smart device having the sensor system). This sensor data can be used, for example, to detect environmental or ambient noise conditions in the vicinity of the demonstration to make adjustments to the demonstration audio and/or video based on the detected ambient noise. That is, the sensor data can be used to help the user experience the acoustic environment depicted on the video interface 50 as though he/she were in the shoes of the depicted user 60.
According to various implementations, the demonstration device 20 runs the audio demonstration engine 210, or otherwise accesses program code for executing processes performed by audio demonstration engine 210 (e.g., via a network interface). Audio demonstration engine 210 can include logic for processing commands from the user about the feature demonstrations. Additionally, audio demonstration engine 210 can include logic for adjusting binaural playback at the playback device 30 and video playback (e.g., at the demonstration device 20 or another connected device) according to user input. The audio demonstration engine 210 can also include logic for processing sensor data, e.g., data about ambient acoustic signals from microphones.
As noted herein, audio demonstration engine 210 can include logic for performing audio demonstration functions according to various implementations.
In various implementations, the audio demonstration engine 210 is configured to receive a command, at the demonstration device 20, to initiate an audio demonstration mode (process 410).
In some implementations, in response to the user command, the audio demonstration engine 210 can initiate binaural playback of a demonstration audio file 250 at the wearable playback device 30 (e.g., a set of headphones, earphones, smart glasses, etc.) and playback of the corresponding demonstration video file 265 at a video interface that is coupled with the demonstration device 20 (e.g., a video interface at the demonstration device 20, or a video interface on another smart device connected with the demonstration device 20) (process 420).
While the demonstration audio file 250 and demonstration video file 265 are being played back, the audio demonstration engine 210 is further configured to receive a user command to adjust a demonstration setting at the demonstration device 20 (e.g., via controls 90 or other enabled controls) to emulate adjustment of a corresponding setting on the wearable audio device 40.
Based upon the user command (process 430), the audio demonstration engine 210 is configured to adjust the binaural playback at the playback device 30 (process 440).
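Structurally, processes 410 through 440 can be pictured as a small event-driven controller. The following outline is a sketch under assumed class and method names, not the engine's actual implementation:

```python
# Structural sketch of the audio demonstration engine flow (processes 410-440).
# The class and method names are assumptions for illustration only; the
# playback, video, and library collaborators are duck-typed stand-ins for
# playback device 30, video interface 50, and library 240.
class AudioDemonstrationEngine:
    def __init__(self, playback_device, video_interface, library):
        self.playback = playback_device
        self.video = video_interface
        self.library = library

    def initiate_demo(self, demo_name: str) -> None:
        """Processes 410/420: on a user command, start synchronized playback
        of the demonstration audio file 250 and demonstration video file 265."""
        self.playback.play_binaural(self.library.audio_file(demo_name))
        self.video.play(self.library.video_file(demo_name))

    def on_setting_adjusted(self, setting: str, value: float) -> None:
        """Processes 430/440: emulate adjustment of the corresponding setting
        on the wearable audio device by adjusting the binaural playback."""
        self.playback.adjust_binaural(setting, value)
```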
In one particular example, binaural playback can be adjustable to simulate manually controlled CNC and auditory limiting (also referred to as compressive hear-through). In this scenario, the demonstration audio file(s) 250 include a mixing of four synchronized binaural audio files. One such file can represent the sound of a scene as recorded/mixed, as it would be heard open-eared (i.e., as if the user were not wearing any headphones). This demonstration audio file 250 can be referred to as the ‘aware’ file, and is generated using filters that emulate the net effect of an Aware mode insertion gain applied in the wearable device to be demonstrated. An example of an Aware mode insertion gain is described in U.S. Pat. No. 8,798,283, the entire content of which is incorporated by reference herein. A second file can represent the sound of a scene with noise cancellation at its highest level. This demonstration audio file 250 is created by filtering the recording through the attenuation response of the wearable audio device 40 being demonstrated, in full noise cancelling, using digital filtering techniques. This filtering can be performed once and stored as an additional demonstration audio file 250, which can be referred to as the ‘cancel’ file. A third demonstration audio file 250 is created by filtering the initial (aware) signal through a compressor starting at a threshold and compression ratio representative of the wearable audio device 40 being demonstrated. The filtering is performed across the frequency spectrum to emulate the processing performed by the ANR circuit and/or control circuit in the wearable audio device 40 to be demonstrated. This demonstration audio file 250 is then stored, and can be referred to as the ‘smart’ file.
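The offline preparation of the ‘cancel’ and ‘smart’ files from the ‘aware’ recording might be sketched as follows; the filter taps and compressor parameters are placeholders rather than measured device responses, and the static per-sample compressor stands in for a real envelope-tracking design:

```python
import numpy as np
from scipy.signal import lfilter

# Sketch of offline preparation of the 'cancel' and 'smart' files from the
# 'aware' recording. The FIR taps and compressor parameters are placeholders,
# not measured device responses.
CANCEL_FIR = np.array([0.02, 0.03, 0.02])  # stand-in for the full-ANR attenuation response

def make_cancel_file(aware: np.ndarray) -> np.ndarray:
    """Filter the aware signal through the (assumed) full-cancellation response."""
    return lfilter(CANCEL_FIR, [1.0], aware)

def make_smart_file(aware: np.ndarray, threshold: float = 0.1,
                    ratio: float = 4.0) -> np.ndarray:
    """Apply a simple static compressor above `threshold` (linear amplitude).
    A real implementation would track a signal envelope over time."""
    mag = np.abs(aware)
    gain = np.ones_like(mag)
    over = mag > threshold
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return aware * gain
```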
In this example, in order to demonstrate CNC to the user, as the adjustment is made (e.g., via a control on a video interface such as interface 50), the relative mix of the ‘cancel’ and ‘aware’ files can be varied to emulate the user selecting more or less noise cancellation on the wearable audio device 40 being demonstrated.
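With the synchronized files in hand, the CNC adjustment can then drive a simple crossfade between them. A minimal sketch, assuming a normalized slider value and an equal-power mixing law chosen for illustration:

```python
import numpy as np

def cnc_mix(aware: np.ndarray, cancel: np.ndarray, slider: float) -> np.ndarray:
    """Crossfade the synchronized 'cancel' and 'aware' files.

    slider = 0.0 emulates full cancellation; slider = 1.0 emulates full
    pass-through. The equal-power law keeps perceived loudness roughly
    constant and is itself an illustrative choice.
    """
    slider = min(max(slider, 0.0), 1.0)
    return (np.sin(slider * np.pi / 2) * aware +
            np.cos(slider * np.pi / 2) * cancel)
```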
In particular cases, the audio demonstration engine 210 is configured to detect that the user is attempting to adjust a control value (e.g., via the interface at the demonstration device 20) while the user is in an environment where that control adjustment will not be detectable (i.e., audible) to the user when played back on the playback device 30. In these cases, the audio demonstration engine 210 can receive data about ambient acoustic characteristics (e.g., ambient noise level (SPL)) from one or more microphones in the demonstration device 20, playback device 30, and/or a connected sensor system. In one example, where the user 10 is trying to adjust the compressive hear-through threshold in an environment having a detected SPL that is quieter than that threshold, the audio demonstration engine 210 is configured to provide the user 10 with a notification (e.g., user interface notification such as a pop-up notification or voice interface notification) that the ambient noise level is insufficient to demonstrate this feature.
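That check might look like the following sketch, where the threshold value and notification text are placeholders:

```python
from typing import Optional

# Sketch of the ambient-level check before a compressive hear-through demo.
# The SPL threshold and notification text are placeholders.
def check_hear_through_demo(ambient_spl_db: float,
                            compressor_threshold_db: float = 70.0) -> Optional[str]:
    """Return a notification if the environment is too quiet to demonstrate
    compressive hear-through, else None."""
    if ambient_spl_db < compressor_threshold_db:
        return ("Your surroundings are too quiet to hear this feature; "
                "try again in a noisier environment.")
    return None

print(check_hear_through_demo(55.0))  # quiet room: notification returned
print(check_hear_through_demo(80.0))  # loud environment: None
```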
While various audio device features are discussed herein by way of example, the audio demonstration engine 210 is configured to demonstrate any number of features of a wearable audio device (e.g., wearable audio device 40).
In some additional cases, the audio demonstration engine 210 is further configured to initiate binaural playback of an additional demonstration audio file 250 at the playback device 30. In some cases, the additional demonstration audio file 250 is played back with the original demonstration audio file 250, such that adjusting the original demonstration audio file 250 includes mixing the two demonstration audio files 250 for playback at the transducers on the playback device 30. In some cases, these demonstration audio files 250 are mixed based upon the user command (e.g., a command to demonstrate a particular feature). In certain cases, the demonstration audio files 250 include WAV files or compressed audio files, and adjusting the binaural playback is based upon mixing the demonstration audio files 250 (e.g., as described with reference to process 440).
According to various implementations, the audio demonstration engine 210 uses feedback from one or more external microphones on the playback device 30 or the demonstration device 20 to adjust the playback (e.g., volume or noise cancellation settings) of demonstration audio from the demonstration device 20 based on the detected level of ambient noise in the environment. In some cases, this aids in providing a consistent demonstration experience to the user, e.g., where variations in the speaker volume and/or speaker loudness settings at the (wearable) playback device 30 as well as speaker orientation at the playback device 30 affect the demonstration. The audio demonstration engine 210 can use the feedback from the microphones on the playback device 30 or the demonstration device 20 (and/or sensors on another connected device such as a smart device) to adjust the playback volume of the demonstration audio at the playback device 30.
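A minimal sketch of such ambient-level compensation is shown below; the reference level, slope, and gain bounds are illustrative assumptions:

```python
import numpy as np

# Sketch of ambient-noise volume compensation for the demonstration audio.
# The reference level, 0.5 dB/dB slope, and gain bounds are assumptions.
REFERENCE_AMBIENT_DB = 50.0
MAX_BOOST_DB, MAX_CUT_DB = 12.0, -6.0

def mic_level_db(mic_block: np.ndarray, eps: float = 1e-12) -> float:
    """Estimate the level of one block of external-microphone samples in dB."""
    return 10.0 * np.log10(np.mean(mic_block ** 2) + eps)

def playback_gain_db(ambient_db: float) -> float:
    """Raise demo playback as ambient noise rises, within bounded limits."""
    gain = 0.5 * (ambient_db - REFERENCE_AMBIENT_DB)
    return float(np.clip(gain, MAX_CUT_DB, MAX_BOOST_DB))
```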
In certain cases, where the playback device 30 includes an ANR circuit 184, the audio demonstration engine 210 is configured to automatically engage the ANR feature on the playback device 30 when initiating the demonstration at the demonstration device 20, for example, to draw the user's attention to the demonstration. In this case, in response to receiving the user prompt to initiate the demonstration, the audio demonstration engine 210 can detect the device capabilities of the playback device 30 as including ANR (e.g., detecting an ANR circuit 184), and can send instructions to the playback device 30 to initiate the ANR circuit 184 during (and in some cases, prior to and/or after) playback of the demonstration audio file 250. In additional cases, the audio demonstration engine 210 is configured to instruct the user 10 to engage the ANR feature in response to the user command to initiate the demonstration, e.g., to focus the user's attention on the demonstration.
In cases where the audio demonstration engine 210 is demonstrating CNC functions of a wearable audio device 40, the audio demonstration engine 210 can apply filters to the binaural audio that emulate the auditory effect of CNC. In certain cases, the audio demonstration engine 210 is configured to apply filters to the binaural audio that emulate the auditory effect of a set of distinct CNC filters, in a sequence, to the demonstration audio file 250 to demonstrate CNC capabilities of the wearable audio device 40. For example, the audio demonstration engine 210 can apply filters to the binaural audio to demonstrate progressive or regressive sequences of noise cancelling, e.g., to emulate the adjustments that the user can make to noise cancelling functions on the wearable audio device 40 being demonstrated. In some particular cases, the audio demonstration engine 210 (in controlling playback at the playback device 30) applies a first set of filters for a period, then adjusts to a second set of filters for a period, then adjusts to a third set of filters for another period (where periods are identical or distinct from one another). This can be done automatically or via a user input. The user (e.g., user 10) can thereby experience the distinct noise cancelling levels as they would sound on the wearable audio device 40 being demonstrated.
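The sequenced filtering described above could be scheduled as simply as stepping through cancellation levels on a timer, as in this sketch (the levels, period, and apply_cnc_level callable are assumptions):

```python
import time

# Sketch of stepping through a progressive noise-cancellation sequence.
# The levels, period, and apply_cnc_level callable are assumptions; the
# callable could, e.g., drive the cnc_mix() crossfade shown earlier.
def run_cnc_sequence(apply_cnc_level,
                     levels=(0.0, 0.33, 0.66, 1.0),
                     period_s: float = 3.0) -> None:
    """Apply each CNC level for period_s seconds, moving from full
    cancellation (0.0) toward full pass-through (1.0)."""
    for level in levels:
        apply_cnc_level(level)
        time.sleep(period_s)
```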
In additional implementations, the audio demonstration engine 210 can be configured to provide A/B comparisons of processing performed with and without demonstrated functions. Such A/B comparisons can be used in CNC mode, ANR mode, etc. In various implementations, the audio demonstration engine 210 is configured to play back recorded feeds of processed audio and unprocessed audio for the user (via playback device 30) to demonstrate acoustic features of a wearable audio device such as wearable audio device 40. In response to a user command to play back a recorded feed of processed audio (e.g., via an interface command), the audio demonstration engine 210 initiates playback of that processed audio at the playback device 30. Similarly, the audio demonstration engine 210 can initiate playback of unprocessed audio at the playback device 30 in response to a user command (e.g., via an interface command, such as via controls 90 on the video interface 50).
The audio demonstration engine 210 is described in some examples as including logic for performing one or more functions. In various implementations, the logic in audio demonstration engine 210 can be continually updated based upon data received from the user (e.g., user selections or commands), sensor data (received from one or more sensors such as microphones, accelerometer/gyroscope/magnetometer(s), etc.), settings updates, as well as updates and/or additions to the library 240.
As described herein, user prompts can include an audio prompt provided at the demonstration device 20, and/or a visual prompt or tactile/haptic prompt provided at the demonstration device 20 or a distinct device (e.g., an additional connected device or playback device 30). In some particular implementations, actuation of the prompt can be detectable by the demonstration device 20 or any other device where the user is initiating a demonstration, and can include a gesture, tactile actuation and/or voice actuation by the user. For example, the user can initiate a head nod or shake to indicate a “yes” or “no” in response to a prompt, which is detected using a head tracker in the playback device 30. In additional implementations, the user can tap a specific surface (e.g., a capacitive touch interface) on the demonstration device 20 and/or playback device 30 to actuate the prompt, or initiate a tactile actuation (e.g., via detectable vibration or movement at one or more sensors such as tactile sensors, accelerometers/gyroscopes/magnetometers, etc.). In still other implementations, the user can speak into a microphone at demonstration device 20 and/or playback device 30 to actuate the prompt and initiate the demonstration functions described herein.
In some particular implementations, actuation of the prompt is detectable by the demonstration device 20, such as by a touch screen, vibration sensor, microphone or other sensor on the demonstration device 20. In certain cases, the prompt can be actuated on the demonstration device 20, regardless of the source of the prompt. In other implementations, the prompt is only actuatable on the device from which it is presented.
The functionality described herein, or portions thereof, and its various modifications (hereinafter “the functions”) can be implemented, at least in part, via a computer program product, e.g., a computer program tangibly embodied in an information carrier, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.
A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
Actions associated with implementing all or part of the functions can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the demonstration process. All or part of the functions can be implemented as special purpose logic circuitry, e.g., an FPGA (field-programmable gate array) and/or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Components of a computer include a processor for executing instructions and one or more memory devices for storing instructions and data.
In various implementations, electronic components described as being “coupled” can be linked via conventional hard-wired and/or wireless means such that these electronic components can communicate data with one another. Additionally, sub-components within a given component can be considered to be linked via conventional pathways, which may not necessarily be illustrated.
A number of implementations have been described. Nevertheless, it will be understood that additional modifications may be made without departing from the scope of the inventive concepts described herein, and, accordingly, other embodiments are within the scope of the following claims.