Adjusting a playback device

Information

  • Patent Grant
  • 11853184
  • Patent Number
    11,853,184
  • Date Filed
    Monday, August 29, 2022
  • Date Issued
    Tuesday, December 26, 2023
Abstract
Certain embodiments provide methods and systems for managing a sound profile. An example playback device includes a network interface and a non-transitory computer readable storage medium having stored therein instructions executable by a processor. When executed by the processor, the instructions are to configure the playback device to receive, via the network interface over a local area network (LAN) from a controller device, an instruction. The example playback device is to obtain, based on the instruction, via the network interface from a location outside of the LAN, data comprising a sound profile. The example playback device is to update one or more parameters at the playback device based on the sound profile. The example playback device is to play back an audio signal according to the sound profile.
Description
BACKGROUND
Field of the Invention

The present invention relates to the area of audio devices and, more specifically, to techniques for adjusting a speaker system or loudspeaker via a network.


Background

Designing and fine-tuning a loudspeaker is often a laborious process. In a typical process, certain electrical components have to be repeatedly changed or adjusted to generate a new equalization or, on some modern products, new firmware has to be upgraded. Typically during development, a loudspeaker is placed inside a large anechoic chamber where acoustic measurements are taken iteratively. After each measurement, the product is removed from the chamber, adjusted, and then set up again to be re-measured. The process often takes days or weeks until the final sound of the loudspeaker is determined.





BRIEF DESCRIPTION OF THE DRAWINGS

Certain embodiments of the present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:



FIG. 1 shows an example configuration in which certain embodiments may be practiced;



FIG. 2A shows an example functional block diagram of a player in accordance with certain embodiments;



FIG. 2B shows an example of controllers that may be used to remotely control one or more players of FIG. 1;



FIG. 2C shows an example internal functional block diagram of a controller in accordance with certain embodiments;



FIG. 3 shows an example interface in an embodiment to allow a user to graphically adjust various settings via a network;



FIG. 4 shows a flowchart or process of adjusting various settings in a speaker system; and



FIG. 5 shows a flowchart or process of sharing a profile between two remotely separated sound systems.





Certain embodiments will be better understood when read in conjunction with the provided drawings, which illustrate examples. It should be understood, however, that the embodiments are not limited to the arrangements and instrumentality shown in the attached drawings.


DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

The detailed description of certain embodiments is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment. The appearances of the phrase “in one embodiment” or “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the order of blocks in process flowcharts or diagrams representing one or more embodiments does not inherently indicate any particular order or imply any limitations.


Certain embodiments provide techniques for adjusting loudspeakers (referred to herein interchangeably as speakers) via the Internet. The adjustment includes at least tuning, configuration, and creation of customized equalizers (EQs). In one embodiment, a graphic interface is provided to tune a loudspeaker and allows a user to iterate quickly on the final “sound” of the loudspeaker. In another embodiment, a set of settings can be remotely adjusted or shared with another speaker system.


In an example application, when loudspeakers are placed in a listening environment, such as a customer's home or a remote location, the loudspeakers' sound can be adjusted remotely by a professional or an experienced user through the Internet. This allows a listener to select his/her favorite sound from a variety of options and, in some cases, share his/her sound with a remotely located listener.


In certain embodiments, the loudspeaker incorporates a method to connect the speaker to the network via a connection, such as Ethernet or wireless 802.11n. For example, the Internet Protocol (IP) address of the loudspeaker is typed into a computer, and the computer screen displays a loudspeaker parameter configuration layout. In certain embodiments, a configuration profile can be created to specify configuration values for one or more loudspeaker parameters including tweeter, midrange, woofer, etc. A type, frequency, gain, quality factor, etc., can be set for each parameter.
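
To make the parameter layout above concrete, a configuration profile can be modeled as a small data structure. The following is a minimal sketch assuming a Python dataclass representation; the class and field names (SoundProfile, DriverSettings, frequency_hz, gain_db, q) are illustrative and are not defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DriverSettings:
    """Settings for one loudspeaker component (tweeter, midrange, woofer, ...)."""
    name: str            # e.g., "tweeter"
    filter_type: str     # e.g., "peaking", "high-pass"
    frequency_hz: float  # center or corner frequency
    gain_db: float       # channel gain
    q: float             # quality factor

@dataclass
class SoundProfile:
    """A named collection of per-driver settings, as entered on the configuration page."""
    preset_name: str
    overall_gain_db: float = 0.0
    drivers: List[DriverSettings] = field(default_factory=list)

# Example: a three-way loudspeaker profile (values are arbitrary)
profile = SoundProfile(
    preset_name="Jazz",
    overall_gain_db=-3.0,
    drivers=[
        DriverSettings("tweeter", "high-pass", 2500.0, 0.0, 0.707),
        DriverSettings("midrange", "peaking", 900.0, 1.5, 1.2),
        DriverSettings("woofer", "low-pass", 250.0, 2.0, 0.707),
    ],
)
```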


Once “logged in” to the loudspeaker, the current settings are loaded into a webpage and/or other presentation interface, for example. A user can then adjust any/all of the items in each area. Once the values are entered into each area, the values are updated in “real-time” (or substantially real-time, accounting for some system processing, storage, and/or transmission delay, for example) on the loudspeaker.
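
The near-real-time update described above could be realized by posting the edited values back to the loudspeaker at its IP address. The sketch below assumes a hypothetical /config HTTP endpoint and JSON payload on the speaker's embedded web server; the patent does not specify a wire format.

```python
import json
import urllib.request
from dataclasses import asdict

def push_settings(speaker_ip: str, profile) -> int:
    """Send the current parameter values to the speaker so they take effect in
    (substantially) real time. `profile` is a dataclass such as the SoundProfile
    sketch above; the /config endpoint is an assumption, not a documented API."""
    payload = json.dumps(asdict(profile)).encode("utf-8")
    req = urllib.request.Request(
        url=f"http://{speaker_ip}/config",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # e.g., 200 when the speaker accepted the update
```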


In certain embodiments, the configuration webpage can be designed specifically for the loudspeaker under development. For example, if a loudspeaker includes five transducers, five sections can be quickly created such that the webpage has the correct number of “tuneable” sections.


Thus, certain embodiments provide for speaker configuration, configuration profile creation, and configuration profile storage. The configuration profile can be referred from one user to another, from one speaker to another, and so on. A referred profile can be accessed and implemented at one or more connected speakers to configure the speakers and/or other playback devices for playback output, for example.


Brief Description

Certain embodiments provide a method for managing a sound profile. The example method includes accessing a playback device on a network. The example method includes displaying a graphic interface to allow a user to adjust the sound profile, wherein the sound profile includes a plurality of parameters for user adjustment. The example method includes saving the sound profile. The example method includes processing an audio signal at the playback device according to the sound profile, wherein user adjustments to the sound profile are used to configure the playback device to process the audio signal upon saving the sound profile.


Certain embodiments provide a computer readable medium including a set of instructions for execution by a computer device, the set of instructions, when executed, implementing a method for managing a sound profile. The example method includes accessing a playback device on a network. The example method includes displaying a graphic interface to allow a user to adjust the sound profile, wherein the sound profile includes a plurality of parameters for user adjustment. The example method includes saving the sound profile. The example method includes processing an audio signal at the playback device according to the sound profile, wherein user adjustments to the sound profile are used to configure the playback device to process the audio signal upon saving the sound profile.


Certain embodiments provide a speaker configuration system. The example system includes a computing device. The example computing device includes an application module to facilitate control functions for a playback device including access to a sound profile to configure the playback device. The example computing device includes an interface to allow a user to adjust the sound profile, wherein the sound profile includes a plurality of playback device parameters for user adjustment. The example computing device is to save the sound profile and facilitate application of the sound profile to the playback device to configure output of multimedia content via the playback device.


Examples

Referring now to the drawings, in which like numerals refer to like parts throughout the several views, FIG. 1 shows an example configuration 100 in which the present invention may be practiced. The configuration may represent, but not be limited to, a part of a residential home, a business building or a complex with multiple zones. There are a number of multimedia players, of which three examples 102, 104 and 106 are shown as audio devices. Each of the audio devices may be installed or provided in one particular area or zone and hence is referred to as a zone player herein.


As used herein, unless explicitly stated otherwise, a track and an audio source are used interchangeably; an audio source or audio sources are in digital format and can be transported or streamed across a data network. To facilitate the understanding of the present invention, it is assumed that the configuration 100 represents a home. Thus, the zone players 102 and 104 may be located in two of the bedrooms, respectively, while the zone player 106 may be installed in a living room. All of the zone players 102, 104 and 106 are coupled directly or indirectly to a data network 108, also referred to as an ad hoc network formed by a plurality of zone players and one or more controllers. In addition, a computing device 110 is shown to be coupled on the network 108. In practice, any other devices, such as a home gateway device, a storage device, or an MP3 player, may be coupled to the network 108 as well.


The network 108 may be a wired network, a wireless network or a combination of both. In one example, all devices including the zone players 102, 104 and 106 are wirelessly coupled to the network 108 (e.g., based on an industry standard such as IEEE 802.11). In yet another example, all devices including the zone players 102, 104 and 106 are part of a local area network that communicates with a wide area network (e.g., the Internet).


All devices on the network 108 may be configured to download and store audio sources or receive streaming audio sources. For example, the computing device 110 can download audio sources from the Internet and store the downloaded sources locally for sharing with other devices on the Internet or the network 108. The zone player 106 can be configured to receive a streaming audio source and share the source with other devices. Shown as a stereo system, the device 112 is configured to receive an analog source (e.g., from broadcasting) or retrieve a digital source (e.g., from a compact disk). The analog sources can be converted to digital sources. In certain embodiments, all audio sources, regardless of where they are located or how they are received, may be shared among the devices on the network 108.


Any device on the network 108 may be configured to control operations of playback devices, such as the zone players 102, 104 and 106. In particular, one or more controlling devices 140 and 142 are used to control zone players 102, 104 and 106 as shown in FIG. 1. The controlling devices 140 and 142 may be portable, for example. The controlling devices 140 and 142 may remotely control the zone players via a wireless data communication interface (e.g., infrared, radio, wireless standard IEEE 802.11b or 802.11g, etc.). In an embodiment, besides controlling an individual zone player, the controlling device 140 or 142 is configured to manage audio sources and other characteristics of all the zone players regardless of where the controlling device 140 or 142 is located in a house or a confined complex.


In certain embodiments, a playback device may communicate with and/or control other playback devices. For example, one zone player may provide data to one or more other zone players. A zone player may serve as a master device in one configuration and a slave device in another configuration, for example.


Also shown is a computing device 144 provided to communicate with one or all of the devices on the network 108. The computing device 144 may be a desktop computer, a laptop computer, a tablet, a smart phone or any computing device with a display screen. According to an embodiment, each of the networked devices on the network 108 has an IP address. The computing device 144 is used by a user to access one or all of the zone players to adjust a sound profile. Depending on implementation, the sound profile includes various filters, frequencies, equalizers, gains or other factors that may affect a listening experience.


Referring now to FIG. 2A, there is shown an example functional block diagram of a playback device, such as a zone player 200. The zone player 200 includes a network interface 202, a processor 204, a memory 206, an audio processing circuit 210, a setting module 212, an audio amplifier 214 and a set of speakers. The network interface 202 facilitates a data flow between a data network (e.g., the data network 108 of FIG. 1) and the zone player 200 and typically executes a special set of rules (e.g., a protocol) to send data back and forth. One example protocol is TCP/IP (Transmission Control Protocol/Internet Protocol) commonly used in the Internet. In general, a network interface manages the assembling of an audio source or file into smaller packets that are transmitted over the data network or reassembles received packets into the original source or file. In addition, the network interface 202 handles the address part of each packet so that it gets to the right destination or intercepts packets destined for the zone player 200.
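
As a rough illustration of the packetizing and reassembly role described above, the sketch below splits an audio source into fixed-size chunks and rebuilds it on the receiving side. It is purely illustrative: in practice the TCP/IP stack performs segmentation and addressing, and the packet size shown is an arbitrary assumption.

```python
from typing import Iterable, Iterator

PACKET_SIZE = 1400  # bytes of audio payload per packet (illustrative value)

def packetize(audio_bytes: bytes, size: int = PACKET_SIZE) -> Iterator[bytes]:
    """Split an audio source or file into smaller packets for transmission."""
    for offset in range(0, len(audio_bytes), size):
        yield audio_bytes[offset:offset + size]

def reassemble(packets: Iterable[bytes]) -> bytes:
    """Rebuild the original source or file from packets received in order."""
    return b"".join(packets)

# Round-trip check
source = bytes(range(256)) * 100
assert reassemble(packetize(source)) == source
```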


In the example of FIG. 2A, the network interface 202 may include either one or both of a wireless interface 216 and a wired interface 217. The wireless interface 216, such as a radio frequency (RF) interface, provides network interface functions wirelessly for the zone player 200 to communicate with other devices in accordance with a communication protocol (such as the wireless standard IEEE 802.11a, 802.11b or 802.11g). The wired interface 217 provides network interface functions by a wired connection (e.g., an Ethernet cable). In an embodiment, a zone player, referred to as an access zone player, includes both of the interfaces 216 and 217, and other zone players include only the RF interface 216. Thus, these other zone players communicate with other devices on a network or retrieve audio sources via the access zone player. The processor 204 is configured to control the operation of other parts in the zone player 200. The memory 206 may be loaded with one or more software modules that can be executed by the processor 204 to achieve desired tasks.


In the example of FIG. 2A, the audio processing circuit 210 resembles most of the circuitry in an audio playback device and includes one or more digital-to-analog converters (DAC), an audio preprocessing part, an audio enhancement part or a digital signal processor, and others. In operation, when an audio source is retrieved via the network interface 202, the audio source is processed in the audio processing circuit 210 to produce analog audio signals. The processed analog audio signals are then provided to the audio amplifier 214 for playback on speakers. In addition, the audio processing circuit 210 may include necessary circuitry to process analog signals as inputs to produce digital signals for sharing with other devices on a network.


Depending on an exact implementation, the setting module 212 may be implemented within the audio processing circuit 210 or as a combination of hardware and software. The setting module 212 is provided to access different sound profiles stored in the memory 206 of the zone player and work with the audio processing circuit 210 to effectuate the sound quality or sound experience.
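
One way a setting module might effectuate a stored profile is to translate each band (frequency, gain, quality factor) into filter coefficients for the audio processing circuit. The sketch below uses the widely known "Audio EQ Cookbook" peaking-filter formula as an assumption; the patent does not prescribe any particular filter design.

```python
import math

def peaking_eq_coeffs(fs_hz: float, f0_hz: float, gain_db: float, q: float):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2) for a peaking
    EQ band, per the Audio EQ Cookbook formulation (an assumption here)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a
    a0 = 1.0 + alpha / a
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a
    # Normalize so the leading feedback coefficient becomes 1
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

# A setting module could map one profile entry to coefficients like this:
coeffs = peaking_eq_coeffs(fs_hz=44100.0, f0_hz=900.0, gain_db=1.5, q=1.2)
```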


In the example of FIG. 2A, the audio amplifier 214 includes an analog circuit that amplifies the provided analog audio signals to drive one or more speakers 216. In an embodiment, the amplifier 214 is automatically powered off when there are no incoming data packets representing an audio source, or powered on when the zone player detects the presence of such data packets.


In the example of FIG. 2A, the speakers 216 may be in different configurations. For example, the speakers may be arranged in one of the following configurations:

    • 1) 2-channel: the stereo audio player is connected to two speakers: left and right speakers to form a stereo sound;
    • 2) 3-channel (or 2.1 sound effects): the stereo audio player is connected to three speakers: left and right speakers and a subwoofer to form a stereo sound; and
    • 3) 6-channel (or 5.1 sound effects): the stereo audio player is connected to five speakers: front left, front right, center, rear left and rear right speakers and a subwoofer to form surround sound.


      Unless specifically stated otherwise herein, a device being adjusted includes one or more speakers. When a profile is determined, a sound may be produced collectively from the speakers, from one of the speakers, and so on.
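
For illustration, the channel configurations listed above can be captured in a simple lookup table that a playback device might consult when routing output; the labels and channel names below are assumptions, not identifiers taken from the patent.

```python
# Channel layouts described above, keyed by an illustrative configuration label.
CHANNEL_LAYOUTS = {
    "2.0": ["front-left", "front-right"],
    "2.1": ["front-left", "front-right", "subwoofer"],
    "5.1": ["front-left", "front-right", "center",
            "rear-left", "rear-right", "subwoofer"],
}

def channels_for(layout: str) -> list:
    """Return the speaker channels to which a profile's settings are routed."""
    return CHANNEL_LAYOUTS[layout]
```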


Referring now to FIG. 2B, there is shown an example of a controller 240, which may correspond to the controlling device 140 or 142 of FIG. 1. The controller 240 may be used to facilitate the control of multi-media applications, automation and others in a living complex. In particular, the controller 240 is configured to facilitate selection of a plurality of audio sources available on the network and to control operations of one or more zone players (e.g., the zone player 200) through an RF interface corresponding to the wireless interface 216 of FIG. 2A. According to one embodiment, the wireless interface is based on an industry standard (e.g., infrared, radio, wireless standard IEEE 802.11a, 802.11b or 802.11g). When a particular audio source is being played in the zone player 200, a picture, if there is any, associated with the audio source may be transmitted from the zone player 200 to the controller 240 for display. In an embodiment, the controller 240 is used to select an audio source for playback. In another embodiment, the controller 240 is used to manage (e.g., add, delete, move, save, or modify) a playlist.


In the example of FIG. 2B, the user interface for the controller 240 includes a screen 242 (e.g., an LCD screen) and a set of functional buttons as follows: a “zones” button 244, a “back” button 246, a “music” button 248, a scroll wheel 250, an “ok” button 252, a set of transport control buttons 254, a mute button 262, a volume up/down button 264, and a set of soft buttons 266 corresponding to the labels 268 displayed on the screen 242.


In the example of FIG. 2B, the screen 242 displays various screen menus in response to a selection by a user. In an embodiment, the “zones” button 244 activates a zone management screen or “Zone Menu” to allow a user to group players in a number of desired zones so that the players are synchronized to play an identical playlist or tracks. The “back” button 246 may lead to different actions depending on the current screen. In an embodiment, the “back” button triggers the current screen display to go back to a previous one. In another embodiment, the “back” button negates the user's erroneous selection. The “music” button 248 activates a music menu, which allows the selection of an audio source (e.g., a song track) to be added to a playlist (e.g., a music queue) for playback.


In the example of FIG. 2B, the scroll wheel 250 is used for selecting an item within a list, whenever a list is presented on the screen 242. When the items in the list are too many to be accommodated in one screen display, a scroll indicator such as a scroll bar or a scroll arrow is displayed beside the list. When the scroll indicator is displayed, a user may rotate the scroll wheel 250 to either choose a displayed item or display a hidden item in the list. The “ok” button 252 is used to confirm the user selection on the screen 242 or activate a playback of an item.


In the example of FIG. 2B, there are three transport buttons 254, which are used to control the effect of the currently playing track. For example, the functions of the transport buttons may include play/pause and forward/rewind a track, move forward to the next track, or move backward to the previous track. According to an embodiment, pressing one of the volume control buttons such as the mute button 262 or the volume up/down button 264 activates a volume panel. In addition, there are three soft buttons 266 that can be activated in accordance with the labels 268 on the screen 242. It can be understood that, in a multi-zone system, there may be multiple audio sources being played respectively in more than one zone player. The music transport functions described herein shall apply selectively to one of the sources when a corresponding zone player is selected.



FIG. 2C illustrates an internal functional block diagram of an example controller 270, which may correspond to the controller 240 of FIG. 2B. The screen 272 on the controller 270 may be an LCD screen. The screen 272 communicates with and is commanded by a screen driver 274 that is controlled by a microcontroller (e.g., a processor) 276. The memory 282 may be loaded with one or more application modules 284 that can be executed by the microcontroller 276 with or without a user input via the user interface 278 to achieve desired tasks.


In an embodiment, an application module is configured to facilitate other control functions for the zone players, for example, to initiate a downloading command to receive a sound profile from another user or a speaker system. For example, suppose a first user wants to share with a second user his sound profile created specifically for a type of jazz music. The second user can use the controller 270 to access the system (e.g., the system in FIG. 1) of the first user to receive the profile, provided the first user allows it. The received profile can be saved and put into effect in the system being used by the second user. As a result, both systems of the first and second users produce substantially similar sound effects when jazz music is played back.
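
A sketch of how the controller's application module might download a profile that another user has agreed to share is shown below. The endpoint path, token-based authorization, and JSON layout are assumptions for illustration only; the patent does not define a sharing protocol.

```python
import json
import urllib.request

def download_shared_profile(remote_host: str, profile_id: str, access_token: str) -> dict:
    """Fetch a sound profile that another user has agreed to share.
    The /profiles/<id> endpoint and bearer-token authorization are hypothetical."""
    req = urllib.request.Request(
        url=f"http://{remote_host}/profiles/{profile_id}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# The received profile can then be saved locally and applied, so both systems
# produce substantially similar sound effects when the same jazz track is played.
```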


In operation, when the microcontroller 276 executes one of the application modules 284, the screen driver 274 generates control signals to drive the screen 272 to display an application-specific user interface accordingly, more of which will be described below.


In the example of FIG. 2C, the controller 270 includes a network interface 280, referred to as an RF interface 280, that facilitates wireless communication with a zone player via a corresponding wireless interface or RF interface thereof. The controller 270 may control one or more zone players, such as 102, 104 and 106 of FIG. 1. Nevertheless, there may be more than one controller, each preferably in a zone (e.g., a room) and configured to control any one or all of the zone players.


It should be pointed out that the controller 240 in FIG. 2B is not the only controlling device that may practice certain embodiments. Other devices that provide the equivalent control functions (e.g., a computing device, a PDA, a hand-held device, a laptop computer, etc.) may also be configured to practice certain embodiments. In the above description, unless otherwise specifically described, keys or buttons are generally referred to as either the physical buttons or soft buttons, enabling a user to enter a command or data.



FIG. 3 shows an example interface 300 for a user to create, adjust or update a sound profile. When the profile 300 is saved, various parameters in the profile 300 are updated. When the profile 300 is selected, the parameters are put into use and cause an audio signal to be processed accordingly (e.g., to band-pass certain frequencies). In certain embodiments, a profile 300 may be selected from a plurality of profiles via a controller. In certain embodiments, a profile 300 may be sent from one user or system to another user or system to configure one or more speakers at the receiving system. In certain embodiments, a profile 300 may be requested by a user or system.


As illustrated in the example of FIG. 3, the profile 300 may include a preset name or reference 301, an overall gain 303 (e.g., in decibels (dB)), and one or more speaker component settings 305, 307, 309 (e.g., tweeter, midrange, woofer, and so on). For each component, one or more parameters may be specified, e.g., type, frequency (e.g., in Hertz), quality factor, channel gain (e.g., in dB), delay (e.g., in samples), phase, limiter (e.g., threshold, attack (e.g., in microseconds), release (e.g., in milliseconds), etc.), softclip (e.g., threshold, attack (e.g., in microseconds), release (e.g., in milliseconds), etc.), and so on. Thus, using the example interface of FIG. 3, one or more parameters for one or more settings of a speaker profile 300 may be set. In certain embodiments, the profile 300 may be initialized with factory or default values and modified by a user, software program, and so on via the interface.
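
A saved profile 300 could be serialized as a small document carrying exactly the fields shown in FIG. 3. The sketch below uses JSON with illustrative key names and values; the actual storage format is not specified by the patent.

```python
import json

# Illustrative serialization of the FIG. 3 fields; key names and values are assumptions.
profile_300 = {
    "preset_name": "Jazz",
    "overall_gain_db": -2.0,
    "components": {
        "tweeter":  {"type": "high-pass", "frequency_hz": 2500, "q": 0.707,
                     "gain_db": 0.0, "delay_samples": 0, "phase": 0},
        "midrange": {"type": "peaking", "frequency_hz": 900, "q": 1.2,
                     "gain_db": 1.5, "delay_samples": 8, "phase": 0,
                     "limiter": {"threshold_db": -3.0, "attack_us": 500,
                                 "release_ms": 120}},
        "woofer":   {"type": "low-pass", "frequency_hz": 250, "q": 0.707,
                     "gain_db": 2.0, "delay_samples": 16, "phase": 180,
                     "softclip": {"threshold_db": -1.0, "attack_us": 200,
                                  "release_ms": 80}},
    },
}

print(json.dumps(profile_300, indent=2))  # what gets stored, selected, or shared
```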



FIG. 4 shows a flowchart or process 400 of adjusting a profile to be used in a networked audio device. At block 405, the process 400 begins. At block 410, it is determined whether a playback device is to be logged in. If yes, at block 415, a default page is displayed. If no, then the process 400 continues to check for a playback device (e.g., zone player) to be logged in to the configuration system.


At block 420, one or more settings are adjusted (e.g., frequencies in different bands, and so on). At block 425, it is determined whether the setting(s) are to be saved at the device. If not, the process 400 continues to adjust settings until a desired configuration of settings is reached. If so, at block 430, the setting(s) are saved in a memory in the playback device. Settings may be associated with a name or other identifier (e.g., “Jazz”, “Rock”, “Radio”, and so on). Saved settings may form a speaker profile, for example. Settings may be shared with another, remotely located speaker system via the profile, for example.


At block 435, the playback device is configured based on the setting(s). For example, a profile and/or other stored settings may be selected to configure the playback device accordingly. At block 440, after the playback device is configured, the process 400 ends. The playback device may then be used to playback multimedia content, for example. In certain embodiments, the playback device may be configured or re-configured based on profile settings while multimedia content is being played back.
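
The flow of FIG. 4 can be summarized as a short routine: start from the default page, accept adjustments until the user chooses to save, then persist the settings on the device and apply them. In the minimal sketch below, the callables (get_adjustment, save_to_device, apply_to_device) are placeholders for device and interface interactions that the patent leaves unspecified.

```python
def run_configuration_session(get_adjustment, save_to_device, apply_to_device,
                              defaults: dict) -> dict:
    """Illustrative outline of process 400 (FIG. 4).
    get_adjustment() returns a (name, value) pair, or None when the user saves."""
    settings = dict(defaults)          # block 415: start from the default page
    while True:
        adjustment = get_adjustment()  # block 420: user adjusts a setting
        if adjustment is None:         # block 425: user chose to save
            break
        name, value = adjustment
        settings[name] = value
    save_to_device(settings)           # block 430: persist in device memory
    apply_to_device(settings)          # block 435: configure the playback device
    return settings

# Example run with stand-in callables:
queue = [("woofer.gain_db", 2.0), ("overall_gain_db", -3.0), None]
result = run_configuration_session(
    get_adjustment=lambda: queue.pop(0),
    save_to_device=lambda s: None,
    apply_to_device=lambda s: None,
    defaults={"overall_gain_db": 0.0},
)
```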



FIG. 5 shows a flowchart or process of sharing a profile between two remotely separated sound systems. At block 505, the process 500 begins. At block 510, a speaker profile is saved. For example, a “Joe's Jazz Profile” sound profile for a playback device or other speaker is configured and saved by a user via an interface, such as the example profile interface 300 of FIG. 3. At block 515, the profile is shared. The profile may be shared with another user, another device, and so on. For example, a copy of the profile may be sent to a user, device, and so on. Alternatively, a link to the profile may be sent to a user, device, and so on.


At block 520, the profile is read. For example, the profile is accessed by a playback device at a location remote from a location at which the profile was created. The playback device, a controller associated with the playback device, or both the controller and the playback device read the profile. At block 525, the playback device is configured based on setting(s) in the profile. For example, a profile and/or other stored settings may be selected to configure the playback device accordingly.


At block 530, it is determined whether the profile is to be saved at the playback device. If so, at block 535, the profile is saved at the playback device. At block 540, after the playback device is configured, the process 500 ends. The playback device may then be used to playback multimedia content, for example. In certain embodiments, the playback device may be configured or re-configured based on profile settings while multimedia content is being played back.
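
The FIG. 5 sharing flow can likewise be outlined end to end: save a profile, deliver a copy (or a link) to the remote system, let the remote playback device read and apply it, and optionally store it there. In the sketch below, the callables are stand-ins for transport and for the remote device; the patent does not define these interfaces.

```python
def share_profile(profile: dict, send, remote_apply, remote_store, keep: bool) -> None:
    """Illustrative outline of process 500 (FIG. 5). The callables are placeholders."""
    shared_copy = dict(profile)   # blocks 510/515: share a copy of the saved profile
    send(shared_copy)             # deliver it (or a link to it) to the remote system
    remote_apply(shared_copy)     # blocks 520/525: remote device reads and applies it
    if keep:                      # block 530: save at the remote playback device?
        remote_store(shared_copy) # block 535

# Example run with stand-in callables:
received = {}
share_profile(
    profile={"preset_name": "Joe's Jazz Profile", "overall_gain_db": -2.0},
    send=lambda p: None,
    remote_apply=lambda p: received.update(p),
    remote_store=lambda p: None,
    keep=True,
)
```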


The processes, sequences or steps and features discussed above and in the appendix are related to each other and each is believed independently novel in the art. The disclosed processes and sequences may be performed alone or in any combination to provide a novel and unobvious system or a portion of a system. It should be understood that the processes and sequences in combination yield an equally independently novel combination as well, even if combined in their broadest sense (e.g., with less than the specific manner in which each of the processes or sequences has been reduced to practice in the disclosure herein).


The foregoing and attached are illustrative of various aspects/embodiments of the present invention; the disclosure of specific sequences/steps and the inclusion of specifics with regard to broader methods and systems are not intended to limit the scope of the invention, which finds itself in the various permutations of the features disclosed and described herein as conveyed to one of skill in the art.

Claims
  • 1. A computing device comprising: at least one processor; and at least one tangible, non-transitory computer-readable medium comprising program instructions that are executable by the at least one processor such that the computing device is configured to: receive, from a first playback device that is located outside of a local area network (LAN), a first sound profile, wherein the first sound profile corresponds to a sound profile for a particular type of audio content and comprises audio playback equalization parameters for activation by a second playback device that is located inside of the LAN when the second playback device plays the particular type of audio content; and transmit the first sound profile to the second playback device, wherein after the second playback device has stored the first sound profile and received a command to play the audio content, the second playback device is configured to (i) play the audio content according to the first sound profile when the audio content comprises the particular type of audio content and (ii) play the audio content according to a sound profile other than the first sound profile when the audio content does not comprise the particular type of audio content.
  • 2. The computing device of claim 1, wherein the computing device is located inside the LAN.
  • 3. The computing device of claim 1, wherein the at least one tangible, non-transitory computer-readable medium further comprises program instructions that are executable by the at least one processor such that the computing device is configured to transmit the command to play the audio content to the second playback device.
  • 4. The computing device of claim 1, wherein the audio playback equalization parameters defined in the first sound profile comprise at least one of a band, frequency, equalizer, or gain.
  • 5. The computing device of claim 1, wherein the first sound profile further comprises at least one of a quality factor, delay, phase, limiter, softclip, or release.
  • 6. The computing device of claim 1, wherein the at least one tangible, non-transitory computer-readable medium further comprises program instructions that are executable by the at least one processor such that the computing device is configured to store the first sound profile.
  • 7. The computing device of claim 1, wherein the at least one tangible, non-transitory computer-readable medium further comprises program instructions that are executable by the at least one processor such that the computing device is configured to edit the first sound profile.
  • 8. The computing device of claim 1, wherein the computing device comprises one of a desktop computer, a laptop computer, a tablet, or a smart phone.
  • 9. The computing device of claim 1, wherein the program instructions that are executable by the at least one processor such that the computing device is configured to receive, from the first playback device that is located outside of the LAN, the first sound profile comprise program instructions that are executable by the at least one processor such that the computing device is configured to receive, from the first playback device via a wide area network (WAN), the first sound profile.
  • 10. The computing device of claim 1, wherein the program instructions that are executable by the at least one processor such that the computing device is configured to transmit the first sound profile to the second playback device comprise program instructions that are executable by the at least one processor such that the computing device is configured to transmit the first sound profile to the second playback device via a wide area network (WAN).
  • 11. A method performed by a computing system, the method comprising: receiving, from a first playback device that is located outside of a local area network (LAN), a first sound profile, wherein the first sound profile corresponds to a sound profile for a particular type of audio content and comprises audio playback equalization parameters for activation by a second playback device that is located inside of the LAN when the second playback device plays the particular type of audio content; and transmitting the first sound profile to the second playback device, wherein after the second playback device has stored the first sound profile and received a command to play the audio content, the second playback device is configured to (i) play the audio content according to the first sound profile when the audio content comprises the particular type of audio content and (ii) play the audio content according to a sound profile other than the first sound profile when the audio content does not comprise the particular type of audio content.
  • 12. The method of claim 11, wherein the computing system is located inside the LAN.
  • 13. The method of claim 11, further comprising: transmitting the command to play the audio content to the second playback device.
  • 14. The method of claim 11, wherein the audio playback equalization parameters defined in the first sound profile comprise at least one of a band, frequency, equalizer, or gain.
  • 15. The method of claim 11, wherein the first sound profile further comprises at least one of a quality factor, delay, phase, limiter, softclip, or release.
  • 16. The method of claim 11, further comprising the computing system storing the first sound profile.
  • 17. The method of claim 11, further comprising the computing system editing the first sound profile.
  • 18. The method of claim 11, wherein the computing system comprises one of a desktop computer, a laptop computer, a tablet, or a smart phone.
  • 19. The method of claim 11, wherein receiving, from the first playback device that is located outside of the LAN, the first sound profile comprises receiving, from the first playback device via a wide area network (WAN), the first sound profile, and wherein transmitting the first sound profile to the second playback device comprises transmitting the first sound profile to the second playback device via a wide area network (WAN).
  • 20. Tangible, non-transitory computer-readable media having program instructions stored therein, wherein the program instructions, when executed by one or more processors, cause a computing system to perform functions comprising: receiving, from a first playback device that is located outside of a local area network (LAN), a first sound profile, wherein the first sound profile corresponds to a sound profile for a particular type of audio content and comprises audio playback equalization parameters for activation by a second playback device that is located inside of the LAN when the second playback device plays the particular type of audio content; and transmitting the first sound profile to the second playback device, wherein after the second playback device has stored the first sound profile and received a command to play the audio content, the second playback device is configured to (i) play the audio content according to the first sound profile when the audio content comprises the particular type of audio content and (ii) play the audio content according to a sound profile other than the first sound profile when the audio content does not comprise the particular type of audio content.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/676,655, titled "Adjusting a Playback Device," filed on Aug. 14, 2017, and currently pending; U.S. application Ser. No. 15/676,655 is a continuation of U.S. application Ser. No. 14/552,049, titled "Adjusting a Playback Device," filed Nov. 24, 2014, and issued on Aug. 15, 2017, as U.S. Pat. No. 9,734,243; U.S. application Ser. No. 14/552,049 is a continuation of U.S. application Ser. No. 13/272,833, titled "Method and Apparatus for Adjusting a Speaker System," filed Oct. 13, 2011, and issued Dec. 30, 2014, as U.S. Pat. No. 8,923,997; U.S. application Ser. No. 13/272,833 claims priority to U.S. Prov. App. 61/392,918, titled "Method and Apparatus for Adjusting a Loudspeaker," filed Oct. 13, 2010, and currently expired. The entire contents of the Ser. Nos. 15/676,655; 14/552,049; 13/272,833; and 61/392,918 applications are incorporated herein by reference.

Related Publications (1)
Number Date Country
20230130787 A1 Apr 2023 US
Provisional Applications (1)
Number Date Country
61392918 Oct 2010 US
Continuations (3)
Number Date Country
Parent 15676655 Aug 2017 US
Child 17822940 US
Parent 14552049 Nov 2014 US
Child 15676655 US
Parent 13272833 Oct 2011 US
Child 14552049 US