The embodiments described herein relate generally to audio systems, and more particularly to systems, methods, and modes for an audio system to be able to monitor and control the volume of an audio source.
It is often the case that the volume of audio can change over time, and listeners need to adjust the volume up or down. Many people have experienced this problem with commercials when watching television programs or movies at home.
There are many different audio normalizers currently available in the market. The majority of presently available volume adjustment applications implement a change on the incoming audio signal; i.e., they analyze the incoming audio signal and compress it, thereby limiting performance and compromising signal integrity. There are also ambient sound sensors that monitor changes in the ambient noise levels, but they are intended for set-and-forget operation and are generally hidden from plain sight.
Other audio normalizers adjust based on the input (signal compression); Dolby Volume is a good example of this implementation. Other, simpler systems have only a microphone that listens to the noise floor in the room and trigger volume up/down adjustments based on the average noise floor.
Accordingly, a need has arisen for systems, methods, and modes for an audio system to be able to monitor and control the volume of an audio source.
It is an object of the embodiments to substantially solve at least the problems and/or disadvantages discussed above, and to provide at least one or more of the advantages described below.
It is therefore a general aspect of the embodiments to provide systems, methods, and modes for an audio system to be able to monitor and control the volume of an audio source that will obviate or minimize problems of the type previously described.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Further features and advantages of the aspects of the embodiments, as well as the structure and operation of the various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that the aspects of the embodiments are not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
According to a first aspect of the embodiments, a method is provided for automatically adjusting an output level of audio in an audio playback network, the method comprising: selecting an audio source to be received into, and output from, an audio playback network; setting a broadcast output volume level of the selected audio source; receiving new audio from any one of a plurality of audio sources, and broadcasting the new audio from one or more loudspeakers; substantially continuously measuring a volume level of the broadcast new audio; and correcting the broadcast new audio to match the set broadcast output volume level.
According to a second aspect of the embodiments, a system is provided for automatically adjusting an output level of audio in an audio playback network, comprising: at least one processor; and a memory operatively connected with the at least one processor, wherein the memory stores computer-executable instructions that, when executed by the at least one processor, cause the at least one processor to execute a method that comprises: selecting an audio source to be received into, and output from, an audio playback network; setting a broadcast output volume level of the selected audio source; receiving new audio from any one of a plurality of audio sources, and broadcasting the new audio from one or more loudspeakers; substantially continuously measuring a volume level of the broadcast new audio; and correcting the broadcast new audio to match the set broadcast output volume level.
The above and other objects and features of the embodiments will become apparent and more readily appreciated from the following description of the embodiments with reference to the following figures. Different aspects of the embodiments are illustrated in reference figures of the drawings. It is intended that the embodiments and figures disclosed herein are to be considered to be illustrative rather than limiting. The components in the drawings are not necessarily drawn to scale, emphasis instead being placed upon clearly illustrating the principles of the aspects of the embodiments. In the drawings, like reference numerals designate corresponding parts throughout the several views.
The embodiments are described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the inventive concept are shown. In the drawings, the size and relative sizes of layers and regions may be exaggerated for clarity. Like numbers refer to like elements throughout. The embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. The scope of the embodiments is therefore defined by the appended claims. The detailed description that follows is written from the point of view of a control systems company, so it is to be understood that generally the concepts discussed herein are applicable to various subsystems and not limited to only a particular controlled device or class of devices, such as audio networks, but can be used in virtually any type of audio playback system.
Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the embodiments. Thus, the appearance of the phrases “in one embodiment” or “in an embodiment” in various places throughout the specification does not necessarily refer to the same embodiment. Further, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The different aspects of the embodiments described herein pertain to the context of systems, methods, and modes for an audio system to be able to monitor and control the volume of an audio source, but are not limited thereto, except as may be set forth expressly in the appended claims.
For 40 years, Crestron Electronics Inc. has been the world's leading manufacturer of advanced control and automation systems, innovating technology to simplify and enhance modern lifestyles and businesses. Crestron designs, manufactures, and offers for sale integrated solutions to control audio, video, computer, and environmental systems. In addition, the devices and systems offered by Crestron streamline technology, improving the quality of life in commercial buildings, universities, hotels, hospitals, and homes, among other locations. Accordingly, the systems, methods, and modes described herein can improve audio systems as discussed below.
The systems, methods, and modes described herein substantially automatically adjust an audio output level from an audio playback network according to aspects of the embodiments. The system described herein receives audio to broadcast, or the user selects a type of noise with which to adjust the audio output level. The user sets the desired level, and the system then receives broadcast audio from one or more digital and/or analog audio sources. An audio level adjustment device receives the broadcast audio, compares it to the desired level, and transmits corrections, if any, to the audio playback network, to make the corrections in regard to the broadcast output level.
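As a rough illustration of that comparison-and-correction step, the following minimal sketch (hypothetical names and message format; the disclosure does not specify a software interface) shows a correction being computed from a user-set level and a measured broadcast level:

```python
from dataclasses import dataclass

# Hypothetical message the audio level adjustment device could send to the
# audio playback network; the disclosure describes a "correction" but not its
# format, so this structure is an assumption made only for illustration.
@dataclass
class LevelCorrection:
    gain_change_db: float  # positive = raise output level, negative = lower it

def compute_correction(set_level_db: float, measured_level_db: float) -> LevelCorrection:
    """Compare the measured broadcast level to the user-set level."""
    return LevelCorrection(gain_change_db=set_level_db - measured_level_db)

# Example: the user set 0.5 dB and the measured level drifted up to 1.5 dB,
# so a -1 dB correction would be requested.
print(compute_correction(0.5, 1.5))  # LevelCorrection(gain_change_db=-1.0)
```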
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the spirit or scope of the present disclosure. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
While some embodiments will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer, those skilled in the art will recognize that aspects may also be implemented in combination with other program modules.
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those of skill in the art can appreciate that different aspects of the embodiments can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and comparable computing devices. Aspects of the embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
Aspects of the embodiments can be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product can be a computer storage medium readable by a computer system and encoding a computer program that comprises instructions for causing a computer or computing system to perform example process(es). The computer-readable storage medium is a computer-readable memory device. The computer-readable storage medium can for example be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media.
Throughout this specification, the term “platform” can be a combination of software and hardware components for providing share permissions and organization of content in an application with multiple levels of organizational hierarchy. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single computing device, and comparable systems. The term “server” generally refers to a computing device executing one or more software programs typically in a networked environment. More detail on these technologies and example operations is provided below.
A computing device, as used herein, refers to a device comprising at least a memory and one or more processors that includes a server, a desktop computer, a laptop computer, a tablet computer, a smart phone, a vehicle mount computer, or a wearable computer. A memory can be a removable or non-removable component of a computing device configured to store one or more instructions to be executed by one or more processors. A processor can be a component of a computing device coupled to a memory and configured to execute programs in conjunction with instructions stored by the memory. Actions or operations described herein may be executed on a single processor, on multiple processors (in a single machine or distributed over multiple machines), or on one or more cores of a multi-core processor. An operating system is a system configured to manage hardware and software components of a computing device that provides common services and applications. An integrated module is a component of an application or service that is integrated within the application or service such that the application or service is configured to execute the component. A computer-readable memory device is a physical computer-readable storage medium implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media that includes instructions thereon to automatically save content to a location. A user experience can be embodied as a visual display associated with an application or service through which a user interacts with the application or service. A user action refers to an interaction between a user and a user experience of an application or a user experience provided by a service that includes one of touch input, gesture input, voice command, eye tracking, gyroscopic input, pen input, mouse input, and keyboard input. An application programming interface (API) can be a set of routines, protocols, and tools for an application or service that allow the application or service to interact or communicate with one or more other applications and services managed by separate entities.
While example implementations are described using audio networks herein, embodiments are not limited to such applications. For example, aspects of the embodiments can be employed in stand-alone audio systems, such as a room in a building that can play audio through a dedicated system not connected to any network. As discussed previously, different types of audio can be played back in an audio system, and if the audio signal is not advantageously equalized based on the type of loudspeaker, then the quality of the playback can suffer.
Technical advantages exist for automatically monitoring and controlling the volume of an audio source utilizing the aspects of the embodiments, including optimizing the listening experience and increasing audio fidelity. Such technical advantages can include, but are not limited to, the ability to avoid detrimentally loud audio playback.
Aspects of the embodiments address a need that arises from the very large scale of operations created by networked computing and cloud-based services that cannot be managed by humans. The actions/operations described herein are not a mere use of a computer, but address results of a system that is a direct consequence of software used as a service, such as audio network communication services offered in conjunction with communications.
Other inputs to DSP 110 can include analog sources 106 (turntables, output of conventional radio sets, and the like), and other digital audio sources 108 (e.g., compact disk (CD) and digital video disk (DVD) players, and the like). If the received audio data is analog, it will first be converted to a digital audio signal so that it can be processed by DSP 110.
DSP 110 itself comprises one or more processors 112, memory 114, and Audio Volume Level Control application (App) 116, as well as a significant amount of other software and applications that provide for audio data processing and data manipulation and user interfaces, all of which are known to those of skill in the art, and therefore in fulfillment of the dual purposes of clarity and brevity have not been discussed herein.
Audio data is then sent to equalizer 118, converted to an analog signal through use of one or more digital to analog converters (DACs; not shown), and then amplified by at least one amplifier 120, prior to being broadcast by one or more loudspeakers 122. APB NW 100 further comprises microphone (mic) 124.
In APB NW 100, network connected DSP 110 receives audio data through network 102 or via a separate analog input or digital input. Those of skill in the art can appreciate that the audio data source can be from a legacy audio input like an RCA connector (analog input). As described above, audio can be received through network 102 from cloud-based streaming audio sources 104, such as a podcast from an online podcast service. According to aspects of the embodiments, DSP 110 receives the audio data, buffers the audio data, and then applies equalization settings to the audio data.
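A minimal sketch of that receive-buffer-equalize chain follows; the function and parameter names are assumptions (the disclosure does not define a software interface), and a single broadband gain stands in for what would in practice be a multi-band equalizer:

```python
import numpy as np

def dsp_receive_and_equalize(samples, broadband_eq_gain_db=0.0):
    """Illustrative stand-in for the DSP-side chain described above: buffer the
    incoming (already digitized) audio, then apply an equalization gain before
    the samples are handed to the DAC/amplifier stage, which happens in
    hardware and is not modeled here."""
    buffered = np.asarray(samples, dtype=np.float64)   # "buffers the audio data"
    gain = 10.0 ** (broadband_eq_gain_db / 20.0)       # dB -> linear scale factor
    return buffered * gain                             # "applies equalization settings"

# Example: a short test block boosted by 3 dB.
print(dsp_receive_and_equalize([0.1, -0.1, 0.2], broadband_eq_gain_db=3.0))
```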
APB NW 100 shown in
ALA device 126 can be a wall mounted keypad with one or more microphones 202, amplifier (and filter, not shown) 204, analog-to-digital converter(s) (ADC) 206, processor(s) 208, each with memory 210, and App 116. ALA device 126 further comprises wireless transceiver 214, and at least three controls: volume up button 216, volume down button 218, and volume set button 220.
DSP 110 comprises DSP circuitry 302 (which can include many different sub-components and processing functionality (e.g., ADCs and digital-to-analog converters (DACs)), all of which do not need to be discussed to appreciate the aspects of the embodiments, and therefore, in fulfillment of the dual purposes of clarity and brevity, such detailed discussion has been omitted herefrom), DSP variable output amplifier (variable Amp) 304, DSP wireless transceiver 306, DSP processor 308, DSP memory 310, within which is stored App 116, and noise generator 314 according to aspects of the embodiments. In DSP 110, variable Amp 304 is controlled by an output from DSP processor 308 to control the level of gain or attenuation, and DSP processor 308 also controls noise generator 314 to control the type of noise that can be generated therefrom.
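A simple sketch of that control relationship is shown below; the class, method, and attribute names are hypothetical, and only the roles of variable Amp 304 and noise generator 314 are taken from the description above:

```python
class DspControllerSketch:
    """Illustrative stand-in for DSP processor 308's control role: it sets the
    gain/attenuation of the variable amplifier and selects the type of test
    noise produced by the noise generator."""

    NOISE_TYPES = {"white", "pink", "brownian", "blue", "violet", "grey"}

    def __init__(self):
        self.amp_gain_db = 0.0   # gain (positive) or attenuation (negative)
        self.noise_type = None   # None means no test noise is being generated

    def set_amp_gain(self, gain_db):
        self.amp_gain_db = gain_db

    def start_noise(self, noise_type):
        if noise_type not in self.NOISE_TYPES:
            raise ValueError(f"unsupported noise type: {noise_type}")
        self.noise_type = noise_type

    def stop_noise(self):
        self.noise_type = None
```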
According to aspects of the embodiments, there are multiple ways for a user to set the volume. In an advanced mode, a test tone such as pink noise or other types of audio noise can be used (e.g., white noise, pink noise, Brownian noise, blue noise, violet noise, and grey noise, the definitions of which are known to those of skill in the art, and will not be repeated here, in fulfillment of the dual purposes of clarity and brevity). According to aspects of the embodiments, in the advanced mode, the user can select and then trigger the desired test tone (if there is more than one noise type available; otherwise, the user would just trigger the test tone), adjust the volume to an acceptable level (by using the volume up/down buttons 216/218, respectively, which causes App 116 to transmit a signal to DSP 110, which receives and processes the same, and then sends an appropriate signal to variable Amp 304), and then hit the volume set button 220 when the appropriate volume level is reached. Once volume set button 220 is pushed, ALA device 126 and App 116 transmit a signal to DSP 110 to terminate the selected noise signal transmission.
In a simple mode, any audio can be fed into the system (e.g., a podcast or music), and the end user would adjust the volume to an acceptable level and then hit volume set button 220.
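Both modes can be summarized in the following sketch, in which the hardware interactions are abstracted behind caller-supplied callables because the disclosure defines buttons and signals rather than a software API (all names here are assumptions):

```python
def volume_set_flow(mode, start_noise, stop_noise, measure_level_db, noise_type="pink"):
    """Illustrative volume-set flow for the advanced and simple modes described
    above. In advanced mode a test noise is played while the user adjusts the
    level; in simple mode the currently playing audio is used instead."""
    if mode == "advanced":
        start_noise(noise_type)        # user selects and triggers the test tone
    # ... user adjusts the level with volume up/down buttons 216/218 until it sounds right ...
    set_level_db = measure_level_db()  # captured when volume set button 220 is pressed
    if mode == "advanced":
        stop_noise()                   # ALA device 126 / App 116 signal DSP 110 to end the tone
    return set_level_db

# Example run with stubbed hardware:
level = volume_set_flow("advanced",
                        start_noise=lambda n: print(f"playing {n} noise"),
                        stop_noise=lambda: print("noise stopped"),
                        measure_level_db=lambda: 0.5)
print(f"set level: {level} dB")
```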
Once the volume is set, ALA device 126 and App 116 work together in tandem with DSP 110, as shown in
In time period 1 (t1−t2), the user is listening to the audio, presumably music, in this non-limiting example. The input audio to DSP 110 is about 0 dB; there is 1 dB of gain in DSP 110, so the output level of the audio from DSP 110 is about 1 dB. There is a loss of about ½ dB between the output of DSP 110 through loudspeaker 122 and the air and through mic 202, plus conversion losses, so that ALA device 126 measures the audio level at about ½ dB (1 − ½ = ½). The following is an approximate equation for the system shown in
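A plausible form of that relationship, consistent with the worked numbers in the paragraphs that follow and assuming the roughly ½ dB loudspeaker-to-microphone and conversion loss noted above, is:

```latex
L_{\text{ALA}} \;\approx\; L_{\text{in}} + G_{\text{DSP}} - L_{\text{loss}},
\qquad L_{\text{loss}} \approx \tfrac{1}{2}\ \text{dB}
```

where L_ALA is the level measured at ALA device 126, L_in is the input level to DSP 110, and G_DSP is the gain applied by DSP 110; in time period 1 this gives 0 + 1 − ½ = ½ dB.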
At the end of time period 1, beginning at t2, the user presses volume set button 220 and the preferred audio level is set, in this non-limiting example, to about ½ dB. The user continues to enjoy the audio for the remainder of time period 2 (t2−t3), until the beginning of time period 3, t3, when the audio level input to DSP 110 inexplicably rises to about 1 dB. Because DSP 110 is still providing 1 dB of gain, the output of DSP 110 is now 2 dB, as graph C illustrates. The audio level heard by the user and input to ALA device 126 is now 1.5 dB, due to the ½ dB loss, as described above.
During time period 3, ALA device 126 is calculating the level of audio and soon determines that the average value has risen, and proceeds to calculate a correction value that will be transmitted to DSP 110. At the end of time period 3, at t4, the correction value is implemented by DSP 110: in this case, to about 0 dB of gain. The audio level drops to ½ dB as was originally set (1 + 0 − ½ = ½). During time period 4 (t4−t5), the audio level is again at its set value.
At time t5, the audio level input to DSP 110 suddenly drops to −1 dB. With 0 dB of gain through DSP 110, the level at the input of ALA device 126, as heard by the user for time period 5, is now about −1.5 dB (−1 + 0 − ½ = −1.5 dB).
At time t6, ALA device 126 provides a correction value in the manner described above to DSP 110, which is now a gain of +2 dB, to yield the set value of ½ dB heard by the user and input to ALA device 126 (−1 + 2 − ½ = ½ dB).
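One plausible reading of that correction rule — shift the current DSP gain by the error between the set level and the measured level — reproduces the worked numbers above. The function and constant names below are illustrative, not taken from the disclosure:

```python
PATH_LOSS_DB = 0.5  # approximate loudspeaker-to-microphone plus conversion loss

def measured_level(input_db, gain_db):
    """Level at ALA device 126 per the approximate relation above."""
    return input_db + gain_db - PATH_LOSS_DB

def corrected_gain(current_gain_db, set_level_db, measured_level_db):
    """Gain DSP 110 would apply so the measured level returns to the set level."""
    return current_gain_db + (set_level_db - measured_level_db)

# Set level is 1/2 dB throughout the example.
# t3: input rose to 1 dB with 1 dB of gain -> measured 1.5 dB; correction -> 0 dB gain.
assert measured_level(1.0, 1.0) == 1.5
assert corrected_gain(1.0, 0.5, measured_level(1.0, 1.0)) == 0.0
# t5: input fell to -1 dB with 0 dB of gain -> measured -1.5 dB; correction -> +2 dB gain.
assert measured_level(-1.0, 0.0) == -1.5
assert corrected_gain(0.0, 0.5, measured_level(-1.0, 0.0)) == 2.0
```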
Method 500 begins with method step 502 in which audio is received by DSP 110 and broadcast to a listener/user (user) by one or more loudspeakers 122.
In method step 504, the user sets a desired volume level, either based on the volume of the currently broadcast audio (after adjustment, if needed) or by broadcasting one of a plurality of noise signals and then selecting a volume level.
In method step 506, audio is again received by DSP 110 and broadcast to the user by one or more loudspeakers 122.
In method step 508, ALA device 126 monitors the volume level of the broadcast audio, and if corrections are needed, transmits a correction message to DSP 110, which then applies an appropriate amount of gain or attenuation, as needed. Method 500 then continuously repeats steps 506-510 until the user selects a new volume level and/or ends the listening session of audio.
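A minimal sketch of the monitoring step follows; the RMS-based level measurement, the tolerance threshold, and the return value are all assumptions, since the disclosure states only that the level is measured and a correction is transmitted:

```python
import math

def block_level_db(mic_samples):
    """RMS level of a block of microphone samples, expressed in dB (relative to
    full scale in this sketch)."""
    rms = math.sqrt(sum(s * s for s in mic_samples) / len(mic_samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def monitor_step(mic_samples, set_level_db, tolerance_db=0.25):
    """One pass of method step 508: measure the broadcast level, compare it to
    the set level, and return the gain change (in dB) that would be sent to
    DSP 110, or 0.0 if the level is already within tolerance."""
    error_db = set_level_db - block_level_db(mic_samples)
    return error_db if abs(error_db) > tolerance_db else 0.0

# Example: a quiet block measured against a louder set point asks for more gain.
print(round(monitor_step([0.05, -0.05, 0.05, -0.05], set_level_db=-20.0), 1))
```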
Thus, according to aspects of the embodiments, ALA device 126 and DSP 110 will provide dynamic real-time corrections to the input audio to maintain a set level experienced by the user, based on determining the spectrum/levels captured during the volume set mode as described above.
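One way to interpret "spectrum/levels captured during the volume set mode" is to store both an overall reference level and a coarse per-band profile at the moment the set button is pressed; the band count, layout, and names below are assumptions made only for illustration:

```python
import numpy as np

def capture_reference(mic_samples, n_bands=8):
    """Store an overall level and a coarse spectral profile of the audio heard
    at the ALA device when the volume is set, for later comparison."""
    x = np.asarray(mic_samples, dtype=np.float64)
    overall_db = 20.0 * np.log10(max(float(np.sqrt(np.mean(x ** 2))), 1e-12))
    power_spectrum = np.abs(np.fft.rfft(x)) ** 2
    bands = np.array_split(power_spectrum, n_bands)
    band_db = [10.0 * np.log10(max(float(np.mean(b)), 1e-12)) for b in bands]
    return {"overall_db": overall_db, "band_db": band_db}

# Example with a short synthetic block:
ref = capture_reference(np.sin(np.linspace(0.0, 20.0 * np.pi, 1024)))
print(round(ref["overall_db"], 1), len(ref["band_db"]))
```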
Processing device 600 comprises, among other items, a shell/box 666, integrated display/touchscreen 630 (though not used in every application of the computer), internal data/command bus (bus) 608, printed circuit board (PCB) 616, and one or more processors 602, with processor internal memory (internal memory) 604 (which can typically be read only memory (ROM) and/or random access memory (RAM)). Those of ordinary skill in the art can appreciate that in modern computer systems, parallel processing is becoming increasingly prevalent, and whereas a single processor would have been used in the past to implement many or at least several functions, it is more common currently to have a single dedicated processor for certain functions (e.g., digital signal processors), and therefore there could be several processors, acting in serial and/or parallel, as required by the specific application. Processing device 600 further comprises multiple input/output ports, such as universal serial bus (USB) ports 620, Ethernet ports 622, and video graphics array (VGA) ports/high definition multimedia interface (HDMI) ports 624, among other types. Further, processing device 600 includes externally accessible drives such as compact disk (CD)/digital versatile disk (DVD) read/write (RW) (CD/DVD/RW) drive 626, and floppy diskette drive 628 (though less used currently, some computers still include this type of interface). Processing device 600 still further includes wireless communication apparatus, such as one or more of the following: Wi-Fi transceiver 632, BlueTooth (BT) transceiver 634, near field communications (NFC) transceiver 636, third generation (3G)/fourth generation (4G)/long term evolution (LTE)/fifth generation (5G) transceiver (cellular transceiver) 638, communications satellite/global positioning system (satellite) transceiver 640, and antenna 664.
Memory that is located on PCB 616 itself can comprise hard disk drive (HDD) 618 (these can include conventional magnetic storage media, but, as is becoming increasingly more prevalent, can include flash drive memory 654, among other types), ROM 612 (these can include electrically erasable programmable ROMs (EEPROMs) and ultra-violet erasable PROMs (UVPROMs), among other types), and RAM 614. Usable with USB port 620 is flash drive memory 654, and usable with CD/DVD/RW drive 626 are CD/DVD diskettes (CD/DVD) 656 (which can be both readable and writeable). Usable with floppy diskette drive 628 are floppy diskettes 658. External memory storage device 652 can be used to store data and programs external to processing device 600, and can itself comprise another HDD 618, flash drive memory 654, among other types of memory storage. External memory storage device 652 is connectable to processing device 600 via USB cable 646. Each of the memory storage devices, or the memory storage media (604, 618, 612, 614, 652, 654, 656, and 658, among others), can contain parts or components, or in its entirety, executable software programming code or an application that has been termed App 116 according to aspects of the embodiments, which can implement part or all of the portions of method 500, among other methods not shown, described herein.
In addition to the above-described components, processing device 600 also comprises keyboard 660, external display 662, printer/scanner/fax machine 644, and mouse 642 (although not technically part of processing device 600, the peripheral components as shown in
External display 662 can be any type of currently available display or presentation screen, such as liquid crystal displays (LCDs), light emitting diode (LED) displays, plasma displays, cathode ray tubes (CRTs), among others (including touch screen displays). In addition to the user interface mechanism such as mouse 642, processing device 600 can further include a microphone, touch pad, joystick, touch screen, voice-recognition system, among other inter-active inter-communicative devices/programs, which can be used to enter data and voice, all of which are currently available, and thus a detailed discussion thereof has been omitted in fulfillment of the dual purposes of clarity and brevity.
As mentioned above, processing device 600 further comprises a plurality of wireless transceiver devices, such as Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, satellite transceiver 640, and antenna 664. While each of Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, and satellite transceiver 640 has its own specialized functions, each can also be used for other types of communications, such as accessing a cellular service provider (not shown), accessing network 610 (which can include the Internet), texting, emailing, among other types of communications and data/voice transfers/exchanges, as known to those of skill in the art. Each of Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, and satellite transceiver 640 includes a transmitting and receiving device, and a specialized antenna, although in some instances, one antenna can be shared by one or more of Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, and satellite transceiver 640. Alternatively, one or more of Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, and satellite transceiver 640 will have a specialized antenna, such as satellite transceiver 640, to which is electrically connected at least one antenna 664.
In addition, processing device 600 can access network 610 (of which the Internet can be a part, as shown and described in
According to further aspects of the embodiments, integrated touch screen display 630, keyboard 660, mouse 642, and external display 662 (if in the form of a touch screen), can provide a means for a user to enter commands, data, and digital and analog information into processing device 600. Integrated and external displays 630, 662 can be used to show visual representations of acquired data, and the status of applications that can be running, among other things.
Bus 608 provides a data/command pathway for items such as: the transfer and storage of data/commands between processor 602, Wi-Fi transceiver 632, BT transceiver 634, NFC transceiver 636, cellular transceiver 638, satellite transceiver 640, integrated display 630, USB port 620, Ethernet port 622, VGA/HDMI port 624, CD/DVD/RW drive 626, floppy diskette drive 628, and internal memory 604. Through bus 608, data can be accessed that is stored in internal memory 604. Processor 602 can send information for visual display to either or both of integrated and external displays 630, 662, and the user can send commands to the computer operating system (operating system (OS)) 606 that can reside in internal memory 604 of processor 602, or any of the other memory devices (656, 658, 618, 612, and 614).
Processing device 600, and either internal memories 604, 612, 614, and 618, or external memories 652, 654, 656 and 658, can be used to store computer code that when executed, implements method 500, as well as other methods not shown and discussed, for substantially automatically compensating and controlling a volume level by a user, according to aspects of the embodiments. Hardware, firmware, software, or a combination thereof can be used to perform the various steps and operations described herein. According to aspects of the embodiments, App 116 for carrying out the above discussed steps can be stored and distributed on multi-media storage devices such as devices 618, 612, 614, 654, 656 and/or 658 (described above) or other form of media capable of portably storing information. Storage media 654, 656 and/or 658 can be inserted into, and read by devices such as USB port 620, CD/DVD/RW drive 626, and floppy diskette drive 628, respectively.
As also will be appreciated by one skilled in the art, the various functional aspects of the embodiments can be embodied in a wireless communication device, a telecommunication network, or as a method or in a computer program product. Accordingly, aspects of the embodiments can take the form of an entirely hardware embodiment or an embodiment combining hardware and software aspects. Further, the aspects of the embodiments can take the form of a computer program product stored on a computer-readable storage medium having computer-readable instructions embodied in the medium. Any suitable computer-readable medium can be utilized, including hard disks, CD-ROMs, DVDs, optical storage devices, or magnetic storage devices such as a floppy disk or magnetic tape. Other non-limiting examples of computer-readable media include flash-type memories or other known types of memories.
Further, those of ordinary skill in the art in the field of the aspects of the embodiments can appreciate that such functionality can be designed into various types of circuitry, including, but not limited to, field programmable gate array (FPGA) structures, application specific integrated circuits (ASICs), and microprocessor-based systems, among other types. A detailed discussion of the various types of physical circuit implementations does not substantively aid in an understanding of the aspects of the embodiments, and as such has been omitted for the dual purposes of brevity and clarity. However, the systems and methods discussed herein can be implemented as discussed and can further include programmable devices.
Such programmable devices and/or other types of circuitry as previously discussed can include a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system bus can be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Furthermore, various types of computer readable media can be used to store programmable instructions. Computer readable media can be any available media that can be accessed by the processing unit. By way of example, and not limitation, computer readable media can comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile as well as removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROMs, DVDs or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by the processing unit. Communication media can embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and can include any suitable information delivery media.
The system memory can include computer storage media in the form of volatile and/or nonvolatile memory such as ROM and/or RAM. A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements connected to and between the processor, such as during start-up, can be stored in memory. The memory can also contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit. By way of non-limiting example, the memory can also include an operating system, application programs, other program modules, and program data.
The processor can also include other removable/non-removable and volatile/nonvolatile computer storage media. For example, the processor can access a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and/or an optical disk drive that reads from or writes to a removable, nonvolatile optical disk, such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM and the like. A hard disk drive can be connected to the system bus through a non-removable memory interface such as an interface, and a magnetic disk drive or optical disk drive can be connected to the system bus by a removable memory interface, such as an interface.
Aspects of the embodiments discussed herein can also be embodied as computer-readable codes on a computer-readable medium. The computer-readable medium can include a computer-readable recording medium and a computer-readable transmission medium. The computer-readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer-readable recording medium include ROM, RAM, CD-ROMs and generally optical data storage devices, magnetic tapes, flash drives, and floppy disks. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. The computer-readable transmission medium can transmit carrier waves or signals (e.g., wired or wireless data transmission through the Internet). Also, functional programs, codes, and code segments to, when implemented in suitable electronic hardware, accomplish or support exercising certain elements of the appended claims can be readily construed by programmers skilled in the art to which the aspects of the embodiments pertain.
The disclosed aspects of the embodiments provide a system and method for substantially automatically compensating and controlling a volume level by a user, according to aspects of the embodiments, on one or more processing devices 600 and/or in an audio playback network 100, as shown in
According to aspects of the embodiments, a user of the above-described system and method can store App 116 on their processing device 600 as well as on a mobile electronic device (MED)/personal electronic device (PED) 722 (hereinafter referred to as “PEDs 722”). PEDs 722 can include, but are not limited to, so-called smart phones, tablets, personal digital assistants (PDAs), notebook and laptop computers, and essentially any device that can access the internet and/or cellular phone service or can facilitate transfer of the same type of data in either a wired or wireless manner.
PED 722 can access cellular service provider 712, either through a wireless connection (cell tower 714) or via a wireless/wired interconnection (a “Wi-Fi” system that comprises, e.g., modulator/demodulator (modem) 702, wireless router 704, internet service provider (ISP) 706, and internet 710 (although not shown, those of skill in the art can appreciate that internet 710 comprises various different types of communications cables, servers/routers/switches 708, and the like, wherein data/software/applications of all types are stored in memory within or attached to servers or other processor-based electronic devices, including, for example, App 116 within a computer/server that can be accessed by a user of App 116 on their PED 722 and/or processing device 600)). As those of skill in the art can further appreciate, internet 710 can include access to “cloud” computing service(s) and devices, wherein the cloud refers to the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each location being a data center.
Further, PED 722 can include NFC, “Wi-Fi,” and Bluetooth (BT) communications capabilities as well, all of which are known to those of skill in the art. To that end, network 102 further includes, as many homes (and businesses) do, one or more processing devices 600 that can be connected to wireless router 704 via a wired connection (e.g., modem 702) or via a wireless connection (e.g., Bluetooth). Modem 702 can be connected to ISP 706 to provide internet-based communications in the appropriate format to end users (e.g., processing device 600), and which takes signals from the end users and forwards them to ISP 706.
PEDs 722 can also access global positioning system (GPS) satellite 720, which is controlled by GPS station 718, to obtain positioning information (which can be useful for different aspects of the embodiments), or PEDs 722 can obtain positioning information via cellular service provider 712 using cellular tower(s) (cell tower) 714 according to one or more methods of position determination. Some PEDs 722 can also access communication satellites 720 and their respective satellite communication systems control stations 716 (the satellite in
According to further aspects of the embodiments, and as described above, network 102 also contains other types of servers/devices that can include processing device 600, wherein one or more processors, using currently available technology, such as memory, data and instruction buses, and other electronic devices, can store and implement code that can implement the system and method for substantially automatically compensating and controlling a volume level by a user, according to aspects of the embodiments.
According to further aspects of the embodiments, additional features and functions of inventive embodiments are described herein below, wherein such descriptions are to be viewed in light of the above noted detailed embodiments as understood by those skilled in the art.
As described above, an encoding process is discussed specifically in reference to
This application may contain material that is subject to copyright, mask work, and/or other intellectual property protection. The respective owners of such intellectual property have no objection to the facsimile reproduction of the disclosure by anyone as it appears in published Patent Office file/records, but otherwise reserve all rights.
It should be understood that this description is not intended to limit the embodiments. On the contrary, the embodiments are intended to cover alternatives, modifications, and equivalents, which are included in the spirit and scope of the embodiments as defined by the appended claims. Further, in the detailed description of the embodiments, numerous specific details are set forth to provide a comprehensive understanding of the claimed embodiments. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.
Although the features and elements of aspects of the embodiments are described being in particular combinations, each feature or element can be used alone, without the other features and elements of the embodiments, or in various combinations with or without other features and elements disclosed herein.
This written description uses examples of the subject matter disclosed to enable any person skilled in the art to practice the same, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims.
The above-described embodiments are intended to be illustrative in all respects, rather than restrictive, of the embodiments. Thus, the embodiments are capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the embodiments unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items.
All United States patents and applications, foreign patents, and publications discussed above are hereby incorporated herein by reference in their entireties.
To solve the aforementioned problems, the aspects of the embodiments are directed towards systems, methods, and modes for an audio system to be able to monitor and control the volume of an audio source.
Alternate embodiments may be devised without departing from the spirit or the scope of the different aspects of the embodiments.
The present application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional patent application Ser. No. 63/282,320 filed Nov. 23, 2021, the entire contents of which are expressly incorporated herein by reference.