The embodiments discussed in the present disclosure are related to multi-zone audio amplifiers.
A building or home may include multiple zones or rooms that include separate audio output devices such as speakers. An audio system may be used to output different audio inputs at the different audio output devices. The audio system may include devices that receive the audio inputs and provide amplification to drive the audio output devices. As the number of zones increases, multiple amplifiers or other amplification devices may be used to provide amplification to the audio output devices.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one example technology area where some embodiments described herein may be practiced.
According to an aspect of an embodiment, a system may include an embedded computing device and amplifiers coupled to the embedded computing device. The embedded computing device may cause a user interface to be displayed on a mobile device via a mobile application. The user interface may show a list of zones and a list of audio inputs. The embedded computing device may obtain a user input via the user interface. The user input may indicate relationships between the audio inputs and the zones. The zones may represent physical locations. Each of the physical locations may include an audio output device (e.g., a speaker) configured to play sound. The embedded computing device may map the relationships between the audio inputs and the zones based on the user input. The amplifiers may obtain a portion of the audio inputs based on the relationships. The amplifiers may also provide the portion of the audio inputs to the audio output device.
The object and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
Example embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings, all in accordance with one or more embodiments of the present disclosure.
A multi-zone audio system may be used to simultaneously provide various audio outputs to different zones (e.g., areas) of a building or home. The multi-zone audio system may permit a single instance of audio content to be provided to multiple zones and/or permit different audio outputs to be provided separately to multiple zones at the same time.
Systems exist today that provide audio outputs to numerous zones in buildings or homes. These systems consist of devices that receive audio inputs from a variety of audio sources and provide amplification to drive audio output devices. For systems in which the audio output devices are wired to a central location, the systems are complex to install and require the connection and/or configuration of a variety of components. Additionally, these wired systems may consume significant amounts of power and generate significant amounts of heat. These systems typically use analog signal processing between the audio sources and the audio output devices to select audio sources and relative volume levels for each of the zones. Other systems use wireless speakers or distributed amplifiers, each of which requires its own power supply, enclosure, signal processing, and network interface.
According to one or more embodiments of the present disclosure, a multi-zone audio amplifier may receive multiple audio inputs and route the audio inputs to multiple zones, each of which includes an audio output device, such as a speaker. The multi-zone audio amplifier may receive and process the multiple audio inputs and provide the audio inputs to an amplifier such that the audio inputs may be provided to the speakers at an amplified level. As described in detail in the present disclosure, the multi-zone audio amplifier may provide the audio inputs to multiple amplifiers using a single data bus. Such implementations allow the multi-zone audio amplifier to be cost-effective and to perform central management of the audio outputs.
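By way of illustration only, the following sketch shows one plausible arrangement of such a single shared data bus, time-division multiplexing (TDM), in which each zone's samples occupy fixed slots within each bus frame. The TDM framing, the zone names, and the helper function are assumptions made for this example; the present disclosure does not limit the data bus to any particular format.

```python
# Hypothetical sketch: interleaving per-zone stereo samples into TDM frames
# for a single shared data bus feeding multiple amplifiers. The TDM format
# is an assumption for illustration; the disclosure does not specify the bus.

from typing import Dict, List, Tuple

Sample = Tuple[int, int]  # (left, right) PCM samples for one zone


def build_tdm_frame(zone_samples: Dict[str, Sample], slot_order: List[str]) -> List[int]:
    """Pack one sample per zone into a single TDM frame (flat list of slots).

    Each amplifier on the bus is assumed to pick off only the slots that
    correspond to its zone.
    """
    frame: List[int] = []
    for zone in slot_order:
        left, right = zone_samples.get(zone, (0, 0))  # silence if zone inactive
        frame.extend([left, right])
    return frame


if __name__ == "__main__":
    slots = ["zone_1", "zone_2", "zone_3"]
    samples = {"zone_1": (1200, 1180), "zone_3": (-300, -290)}  # zone_2 silent
    print(build_tdm_frame(samples, slots))
    # -> [1200, 1180, 0, 0, -300, -290]
```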
These and other embodiments of the present disclosure will be explained with reference to the accompanying figures. It is to be understood that the figures are diagrammatic and schematic representations of such example embodiments, and are not limiting, nor are they necessarily drawn to scale. In the figures, features with like numbers indicate like structure and function unless described otherwise.
The audio sources 102 may include devices that are connected to the audio amplifier 106 via a network 104. For example, the audio sources 102 may include tablets, computers, smart phones, or any other appropriate wired device or wireless device. Although four audio sources 102 are illustrated, any number of audio sources or devices may be used in association with the audio amplifier 106. For example, more or fewer audio sources 102 may be used in association with the audio amplifier 106.
The network 104 may include any suitable type of network such as a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), a campus area network (CAN), a storage area network (SAN), a wireless local area network (WLAN), a cellular network, a satellite network, or any other network which may receive the audio inputs from the audio sources 102 and provide the audio inputs to the audio amplifier 106. In some embodiments, the network 104 may include a Bluetooth network, a Wi-Fi network, and/or an Ethernet network, although other, less common connection methods may also be used.
In some embodiments, the audio inputs may correspond to audio data files representative of songs stored in a digital format, audio recordings created or stored by the audio sources 102, among others. Additionally or alternatively, the audio inputs may represent any audio streaming at the audio sources 102, such as YouTube videos and other multimedia content from the Internet. The audio sources 102 may receive the audio inputs via online sources or platforms such as Apple Music, Spotify, or any other audio streaming services or applications that may be operating on the audio sources 102. The audio sources 102 may provide the audio inputs to the audio amplifier 106 via the network 104.
The audio amplifier 106 may route the audio inputs to the different zones 111a-c. For example, the audio amplifier 106 may include an embedded computing device 107 configured to route the audio inputs to different zones 111. For example, the embedded computing device 107 may route the audio inputs to a first zone 111a, a second zone 111b, or a third zone 111c (collectively referred to as “zones 111”). Although three zones 111 are illustrated, the audio amplifier 106 may be associated with any suitable number of zones. In the present disclosure, the zones 111 may include physical locations, such as different rooms in a building or a home, different buildings, or any other appropriate physical location. Each of the zones 111 may include one of the audio output devices 114a-c. For example, the first zone 111a may include a first audio output device 114a, the second zone 111b may include a second audio output device 114b, and the third zone 111c may include a third audio output device 114c. Examples of the audio output devices 114a-c include speakers or any other appropriate devices configured to play sounds based on the audio inputs. In some embodiments, each audio output device of the audio output devices 114 may include multiple audio output devices. For example, the first audio output device 114a may include a left audio output device corresponding to a left channel of the first zone 111a and a right audio output device corresponding to a right channel of the first zone 111a.
The embedded computing device 107 may include a computing device or a processor configured to process the audio inputs. For example, the embedded computing device 107 may include a microprocessor, a microcontroller, a field-programmable gate array (FPGA), a digital signal processor (DSP), a Raspberry Pi module, among others. In some embodiments, the embedded computing device 107 may include multiple processors linked together to facilitate distribution and processing of the audio data.
The embedded computing device 107 may route the audio inputs from the audio sources 102 to the zones 111 in various manners. For example, the embedded computing device 107 may route the audio inputs so that each of the zones 111 receives a different audio input. In another example, the embedded computing device 107 may route the audio inputs so that at least two of the zones 111 receive the same audio input.
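As a simplified, non-limiting illustration of these routing options, the sketch below models a routing table in which each zone is assigned an audio input identifier, so that two zones may share one input or receive different inputs. The zone names, input identifiers, and helper function are hypothetical.

```python
# Hypothetical sketch of a per-zone routing table. Several zones may point to
# the same audio input (shared playback) or to different inputs.

from typing import Dict, Optional

# zone name -> audio input identifier (None means the zone is idle)
routing: Dict[str, Optional[str]] = {
    "living_room": "spotify_stream",
    "kitchen": "spotify_stream",   # same input as living_room
    "office": "bluetooth_phone",   # a different input
}


def input_for_zone(zone: str) -> Optional[str]:
    """Return the audio input currently routed to a zone, if any."""
    return routing.get(zone)


print(input_for_zone("kitchen"))  # -> spotify_stream
```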
In some embodiments, operations of the audio amplifier 106 may be controlled based on user input received via a user interface 110. Some example operations may include setting names of the zones 111, creating mappings between the audio inputs and the zones 111, or communicating Wi-Fi credentials for connecting via the network 104, among others.
In some embodiments, the user interface 110 may be implemented via a mobile application 109 running on the mobile device 108. In some embodiments, the mobile device 108 may include any suitable device such as a smartphone, a tablet, a laptop, a personal desktop, among others. The mobile application 109 may permit a user to connect the mobile device 108 to the embedded computing device 107 via the network 104 such that the mobile device may control the embedded computing device 107 and/or operations of the audio amplifier 106.
In some embodiments, the mobile device 108 may be connected to the embedded computing device 107 via a wireless network. The mobile device 108 may cause the embedded computing device 107 to connect to the network 104 by providing network credentials (e.g., Wi-Fi credentials) to the embedded computing device 107. In these and other embodiments, the mobile device 108 and the embedded computing device 107 may establish an initial connection. The mobile device 108 may provide information regarding the connection and the network 104 to the embedded computing device 107 via the initial connection. In some embodiments, the embedded computing device 107 may provide various approaches of establishing the initial connection with the mobile device 108.
The embedded computing device 107 may operate as a Wi-Fi access point for the mobile device 108. In some embodiments, the mobile application 109 may detect and display, via the user interface 110, the audio amplifier 106 and/or the embedded computing device 107 available to be associated with the mobile application 109 and/or the mobile device 108. The user may provide a first user input indicating that the mobile device 108 is to connect to the audio amplifier 106.
The embedded computing device 107 may prompt the mobile application 109 for a password to establish the initial connection. In response to establishing the initial connection, the mobile device 108 may send a first message to the embedded computing device 107. The first message may include an identifier of the network 104. The first message may be sent via any suitable IP-based protocol, such as HTTP, TCP, or UDP.
Additionally or alternatively, the initial connection between the mobile device 108 and the embedded computing device 107 may be established over Bluetooth. For instance, the embedded computing device 107 may broadcast a Bluetooth endpoint signal via which the mobile device 108 may establish the initial connection. In response to establishing the initial connection via Bluetooth, the mobile device 108 may send the first message.
The mobile device 108 may provide a second message to the embedded computing device 107. The second message may include network credentials (e.g., the Wi-Fi credentials or a security key) for the network 104. The embedded computing device 107 may connect to the mobile device 108 via the network 104 using the security key or the network credentials included in the second message. The second message may be sent via any suitable IP-based protocol, such as HTTP, TCP, or UDP.
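A minimal sketch of this two-message exchange, as it might appear from the mobile-device side over HTTP, is shown below. The device address, endpoint paths, and payload fields are assumptions made for illustration and are not specified by the present disclosure.

```python
# Hypothetical sketch of the first and second messages sent over the initial
# connection (here HTTP), while the embedded computing device acts as a
# Wi-Fi access point. The address, endpoint paths, and field names are
# assumptions.

import requests

DEVICE = "http://192.168.4.1"  # typical soft-AP address; illustrative only

# First message: identify the local network the device should join.
requests.post(f"{DEVICE}/setup/network", json={"ssid": "HomeNetwork"}, timeout=5)

# Second message: provide the network credentials (e.g., a security key).
requests.post(
    f"{DEVICE}/setup/credentials",
    json={"ssid": "HomeNetwork", "psk": "example-security-key"},
    timeout=5,
)
```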
In some embodiments, the initial connection may be established using both the embedded computing device 107 operating as a Wi-Fi access point connection and via a Bluetooth connection. In these and other embodiments, HTTP may be used as the communication protocol for the Wi-Fi access point connection and GATT may be used as the communication protocol for the Bluetooth connection. In some embodiments, the GATT protocol may include a set of characteristics that can be written to and read. The characteristics of the GATT protocol may permit users of the multi-zone audio amplifier system to get a version of the system, get available SSIDs, specify the Wi-Fi access point to connect to, specify the security credentials for the Wi-Fi access point, initiate a connection, and/or get the current status (e.g., disconnected, connecting, connected, etc.) of the Wi-Fi connection. In some embodiments, the embedded computing device 107 may indicate that the mobile device 108 may initially connect via either approach, and the mobile device 108 may select one of the approaches. The mobile device 108 may select the approach based on one or more of the following factors: a signal strength, a type of the mobile device 108, the environment 100, or any other appropriate factor.
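The GATT characteristic set described above might be modeled as in the following sketch. The characteristic names, UUID values, and property assignments are hypothetical placeholders, as the present disclosure does not define concrete values.

```python
# Hypothetical sketch of the GATT characteristic set described above. The
# UUIDs and property assignments are placeholders; the disclosure does not
# define concrete values.

GATT_CHARACTERISTICS = {
    "system_version":   {"uuid": "0000aa01-0000-1000-8000-00805f9b34fb", "properties": ["read"]},
    "available_ssids":  {"uuid": "0000aa02-0000-1000-8000-00805f9b34fb", "properties": ["read"]},
    "target_ssid":      {"uuid": "0000aa03-0000-1000-8000-00805f9b34fb", "properties": ["write"]},
    "wifi_credentials": {"uuid": "0000aa04-0000-1000-8000-00805f9b34fb", "properties": ["write"]},
    "connect_command":  {"uuid": "0000aa05-0000-1000-8000-00805f9b34fb", "properties": ["write"]},
    "wifi_status":      {"uuid": "0000aa06-0000-1000-8000-00805f9b34fb", "properties": ["read", "notify"]},
}

# A mobile application would write target_ssid and wifi_credentials, write
# connect_command to initiate the connection, and read or subscribe to
# wifi_status (e.g., disconnected, connecting, connected).
```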
In some embodiments, the communication between the mobile device 108 and the embedded computing device 107 may use RESTful API requests over HTTP to a server 103 of the embedded computing device 107. In some embodiments, the server 103 may be hosted on the embedded computing device 107. In some embodiments, the server 103 may include an HTTP server, an HTTP Secure (HTTPS) server, a WebSocket server, among others.
The mobile device 108 may connect to the server 103, and the server 103 may act as a proxy to facilitate requests from the mobile application 109 to the audio amplifier 106. The server 103 may thereby permit connection issues with respect to the network 104 to be overcome.
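As one non-limiting sketch of how the server 103 might expose RESTful endpoints to the mobile application 109, the example below uses the Flask framework purely for illustration; the present disclosure does not name a framework, and the routes, payload fields, and zone names are assumptions.

```python
# Hypothetical sketch of a small HTTP server hosted on the embedded computing
# device, exposing RESTful endpoints to the mobile application. Flask is used
# only for illustration; routes and payload fields are assumptions.

from flask import Flask, jsonify, request

app = Flask(__name__)

# zone name -> currently mapped audio input (None means unmapped)
zone_map = {"living_room": None, "kitchen": None, "office": None}


@app.route("/zones", methods=["GET"])
def list_zones():
    """Return the zones and their current audio-input mappings."""
    return jsonify(zone_map)


@app.route("/zones/<zone>/input", methods=["PUT"])
def set_zone_input(zone):
    """Map an audio input to a zone based on a user request."""
    if zone not in zone_map:
        return jsonify({"error": "unknown zone"}), 404
    data = request.get_json(silent=True) or {}
    zone_map[zone] = data.get("input")
    return jsonify({zone: zone_map[zone]})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Under these assumptions, the mobile application could read or update the zone-to-input mapping with ordinary HTTP requests, as illustrated further below in connection with the method blocks.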
In some embodiments, the embedded computing device 107 may implement an authentication and device registration scheme to verify and/or authorize the mobile application 109 to access the audio amplifier 106. In some embodiments, the authentication process may include associating the user and/or the mobile device 108 with the audio amplifier 106 and storing such associations in a database or on the server 103. In some embodiments, the association may be created during an initial set up process of the audio amplifier 106. In some embodiments, additional users or mobile devices may be associated with the audio amplifier 106 after the initial set up process. In some embodiments, the mobile device 108 may scan a unique identifier (e.g., a barcode, a QR code, etc.) associated with the audio amplifier 106 to start the authentication and device registration process. In some embodiments, the audio amplifier 106 may be reset, in which case previous associations of the audio amplifier 106 are removed.
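A minimal sketch of such an association store is given below, assuming the scanned unique identifier resolves to an amplifier identifier; the data layout and function names are hypothetical and shown only to illustrate the registration and reset operations described above.

```python
# Hypothetical sketch of a registration store associating users/mobile devices
# with an audio amplifier. The amplifier ID is assumed to come from a scanned
# unique identifier (e.g., a QR code); all names here are illustrative.

from typing import Dict, Set

# amplifier ID -> set of registered mobile-device IDs
registrations: Dict[str, Set[str]] = {}


def register_device(amplifier_id: str, device_id: str) -> None:
    """Associate a mobile device with an amplifier (e.g., during initial setup)."""
    registrations.setdefault(amplifier_id, set()).add(device_id)


def is_authorized(amplifier_id: str, device_id: str) -> bool:
    """Check whether a mobile device is registered with an amplifier."""
    return device_id in registrations.get(amplifier_id, set())


def reset_amplifier(amplifier_id: str) -> None:
    """Reset the amplifier, removing its previous associations."""
    registrations.pop(amplifier_id, None)
```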
The mobile application 109 may control the operations of the audio amplifier 106, and particularly the embedded computing device 107, via the user input received via the user interface 110 displayed on the mobile device 108. For example, the embedded computing device 107 may map the audio inputs to the zones 111 based on the user input. In this example, the user input may indicate the mapping relationships between the audio inputs and the zones 111. Examples of the user interface 110 are shown and described in more detail below.
Modifications, additions, or omissions may be made to the environment 100 without departing from the scope of the present disclosure. For example, in some embodiments, the environment 100 may include any number of other components that may not be explicitly illustrated or described.
At block 802, a user interface of a mobile application may be displayed via a mobile device. The user interface may show an available multi-zone audio amplifier to be associated with the mobile device. In some embodiments, the multi-zone audio amplifier may be available in instances in which the mobile device identifies a signal transmitted by the multi-zone audio amplifier. For example, the multi-zone audio amplifier may be configured to broadcast a Bluetooth signal and/or be available as a Wi-Fi access point, such that the multi-zone audio amplifier may be discoverable to mobile devices within a certain geographic region. In some embodiments, the mobile device may automatically perform the discovery using multicast Domain Name System (mDNS) discovery.
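As an illustration of automatic discovery at block 802, the sketch below uses the python-zeroconf library to browse for a hypothetical mDNS service type advertised by the multi-zone audio amplifier; the service type string and listener behavior are assumptions.

```python
# Hypothetical sketch of automatic discovery of a multi-zone audio amplifier
# over mDNS using the python-zeroconf library. The service type string is an
# assumption; the disclosure does not specify one.

import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf


class AmplifierListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        print(f"Discovered amplifier: {name} -> {info}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"Amplifier left the network: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # not needed for this sketch


zc = Zeroconf()
browser = ServiceBrowser(zc, "_multizone-amp._tcp.local.", AmplifierListener())
try:
    time.sleep(5)  # browse briefly, then clean up
finally:
    zc.close()
```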
At block 804, a first user input may be obtained via the user interface. The first user input may indicate that a connection between the mobile device and the available multi-zone audio amplifier is to be established. For example, the first user input may be effective to select the discovered multi-zone audio amplifier, such that the mobile device may initiate the connection process. At block 806, a request to connect may be sent to the available multi-zone audio amplifier. In some embodiments, the request may include information about the mobile device trying to connect to the multi-zone audio amplifier.
At block 808, the connection may be established between the mobile device and an embedded computing device of the multi-zone audio amplifier based on the first user input. In some embodiments, establishing the connection may include establishing an initial connection between the mobile device and the embedded computing device with the embedded computing device acting as a Wi-Fi access point; sending, to the embedded computing device via the initial connection, a first message including an identifier of the local network for the connection; and providing, by the mobile device, a second message to the embedded computing device including the network credentials for the local network.
At block 810, zones associated with the embedded computing device may be displayed on the user interface. The zones may represent physical areas each having at least one speaker to play audio output. In some embodiments, the embedded computing device may already be associated with a zone. In other embodiments, the embedded computing device may not yet have any associated zones. For example, the user interface may correspond to the user interface 110 described above.
At block 812, a list of available audio inputs may be displayed on the user interface. The available audio inputs may include audio inputs generated by various audio sources. In some embodiments, an audio input may be provided using the mobile device. At block 814, a second user input may be obtained. The second user input may indicate relationships between the available audio inputs and the zones. At block 816, an audio input of the available audio inputs may be mapped to a corresponding zone of the zones based on the second user input. In these and other embodiments, the mapped audio input may be available to be played by the audio output device of the zone. For instance, the audio input may be audibly played using the speaker in the zone.
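Continuing the hypothetical REST interface sketched above, the mapping created at blocks 814 and 816 might be expressed by the mobile application as a single request, as in the following sketch; the amplifier address, endpoint path, and field names remain assumptions.

```python
# Hypothetical sketch of the mobile application mapping an available audio
# input to a zone based on the second user input, using the illustrative REST
# endpoint from the earlier server sketch. All names are assumptions.

import requests

AMPLIFIER = "http://192.168.1.50:8080"  # address of the embedded computing device

# The second user input selected this pairing in the user interface.
selected_zone = "kitchen"
selected_input = "spotify_stream"

resp = requests.put(
    f"{AMPLIFIER}/zones/{selected_zone}/input",
    json={"input": selected_input},
    timeout=5,
)
print(resp.json())  # e.g., {"kitchen": "spotify_stream"}
```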
Modifications, additions, or omissions may be made to the method 800 without departing from the scope of the present disclosure. For example, one skilled in the art will appreciate that, for this and other processes, operations, and methods disclosed herein, the functions and/or operations performed may be implemented in differing order. Furthermore, the outlined functions and operations are only provided as examples, and some of the functions and operations may be optional, combined into fewer functions and operations, or expanded into additional functions and operations without detracting from the essence of the disclosed embodiments.
The computing system 900 may include a processor 910, a memory 912, and a data storage 914. The processor 910, the memory 912, and the data storage 914 may be communicatively coupled.
In general, the processor 910 may include any suitable special-purpose or general-purpose computer, computing entity, or processing device including various computer hardware or software modules and may be configured to execute instructions stored on any applicable computer-readable storage media. For example, the processor 910 may include a microprocessor, a microcontroller, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or any other digital or analog circuitry configured to interpret and/or to execute program instructions and/or to process data. Although illustrated as a single processor, the processor 910 may include any number of processors configured to perform, individually or collectively, any number of operations described in the present disclosure.
In some embodiments, the processor 910 may be configured to interpret and/or execute program instructions and/or process data stored in the memory 912, the data storage 914, or the memory 912 and the data storage 914. In some embodiments, the processor 910 may fetch program instructions from the data storage 914 and load the program instructions in the memory 912. After the program instructions are loaded into memory 912, the processor 910 may execute the program instructions.
The memory 912 and the data storage 914 may include computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may include any available media that may be accessed by a general-purpose or special-purpose computer, such as the processor 910. By way of example, and not limitation, such computer-readable storage media may include tangible or non-transitory computer-readable storage media including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices), or any other storage medium which may be used to store particular program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media. Computer-executable instructions may include, for example, instructions and data configured to cause the processor 910 to perform a certain operation or group of operations.
The user interface unit 920 may include any device to allow a user to interface with the computing system 900. For example, the user interface unit 920 may include a mouse, a track pad, a keyboard, buttons, camera, and/or a touchscreen, among other devices. The user interface unit 920 may receive input from a user and provide the input to the processor 910.
Modifications, additions, or omissions may be made to the computing system 900 without departing from the scope of the present disclosure. For example, in some embodiments, the computing system 900 may include any number of other components that may not be explicitly illustrated or described.
Terms used in the present disclosure and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including, but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes, but is not limited to,” etc.).
Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc. Additionally, the use of the term “and/or” is intended to be construed in this manner.
Further, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” should be understood to include the possibilities of “A” or “B” or “A and B” even if the term “and/or” is used elsewhere.
All examples and conditional language recited in the present disclosure are intended for pedagogical objects to aid the reader in understanding the present disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the present disclosure.
This patent application claims priority to U.S. Provisional Application No. 63/537,159 filed Sep. 7, 2023, which is incorporated herein by reference in its entirety.