The present invention relates to an apparatus for acoustically improving an environment, and particularly to an electronic sound screening system.
In order to understand the present invention, it is necessary to appreciate some relevant characteristics of the human auditory system. The following description is based on known research conclusions and data available in handbooks on the experimental psychology of hearing as presented in the discussion in U.S. patent application Ser. No. 10/145,113, incorporated by reference above.
The human auditory system is overwhelmingly complex, both in design and in function. It comprises thousands of receptors connected by complex neural networks to the auditory cortex in the brain. Different components of incident sound excite different receptors, which in turn channel information towards the auditory cortex through different neural network routes.
The response of an individual receptor to a sound component is not always the same; it depends on various factors, such as the spectral make-up of the sound signal and the preceding sounds, since these receptors can be tuned to respond to different frequencies and intensities.
Masking Principles
Masking is an important and well-researched phenomenon in auditory perception. It is defined as the amount (or the process) by which the threshold of audibility for one sound is raised by the presence of another (masking) sound. The principles of masking are based upon the way the ear performs spectral analysis. A frequency-to-place transformation takes place in the inner ear, along the basilar membrane. Distinct regions in the cochlea, each with a set of neural receptors, are tuned to different frequency bands, which are called critical bands. The spectrum of human audition can be divided into several critical bands, which are not equal.
In simultaneous masking the masker and the target sounds coexist. The target sound specifies the critical band. The auditory system “suspects” there is a sound in that region and tries to detect it. If the masker is sufficiently wide and loud the target sound cannot be heard. This phenomenon can be explained in simple terms, on the basis that the presence of a strong noise or tone masker creates an excitation of sufficient strength on the basilar membrane at the critical band location of the inner ear effectively to block the transmission of the weaker signal.
For an average listener, the critical bandwidth can be approximated by:

BWc = 25 + 75[1 + 1.4(f/1000)^2]^0.69

where BWc is the critical bandwidth in Hz and f the frequency in Hz.
Also, Bark is associated with frequency f via the following equations:
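These relations can be sketched in code. The functions below use the standard textbook (Zwicker) approximations for the critical bandwidth and the Hz-to-Bark conversion; these are assumptions standing in for the exact equations omitted above, which may differ in detail.

```python
import math

def critical_bandwidth_hz(f_hz: float) -> float:
    """Approximate critical bandwidth (Hz) at frequency f_hz (Hz).

    Zwicker's textbook formula, matching the symbol definitions in the
    text (BWc in Hz, f in Hz); assumed, not taken from the original.
    """
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def hz_to_bark(f_hz: float) -> float:
    """Convert a frequency in Hz to the Bark scale (Zwicker & Terhardt
    approximation; assumed stand-in for the equations referenced above)."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)
```

For example, around 1 kHz the critical bandwidth evaluates to roughly 160 Hz, and 1 kHz falls near 8.5 on the Bark scale, consistent with the unequal critical bands described above.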
A masker sound within a critical band has some predictable effect on the perceived detection of sounds in other critical bands. This effect, also known as the spread of masking, can be approximated by a triangular function, which has slopes of +25 and −10 dB per bark (distance of 1 critical band), as shown in accompanying
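The triangular spread-of-masking function can be sketched as follows. The slopes (+25 and -10 dB per Bark) are those quoted above; the sign convention for the Bark distance (negative meaning the target band lies below the masker) is an illustrative assumption.

```python
def spread_of_masking_db(distance_bark: float) -> float:
    """Attenuation (dB, <= 0) of a masker's influence at a given Bark
    distance from the masker's critical band.

    Triangular approximation: the masking skirt falls off at 25 dB per
    Bark below the masker and 10 dB per Bark above it.
    """
    if distance_bark < 0:              # target band below the masker
        return 25.0 * distance_bark    # i.e. -25 dB per Bark of separation
    return -10.0 * distance_bark       # -10 dB per Bark above the masker
```

The asymmetry means a masker is more effective at masking sounds above its own frequency than below it.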
Principles of the Perceptual Organisation of Sound
The auditory system performs a complex task; sound pressure waves originating from a multiplicity of sources around the listener fuse into a single pressure variation before they enter the ear; in order to form a realistic picture of the surrounding events the listener's auditory system must break down this signal to its constituent parts so that each sound-producing event is identified. This process is based on cues, pieces of information which help the auditory system assign different parts of the signal to different sources, in a process called grouping or auditory object formation. In a complex sound environment there are a number of different cues, which aid listeners to make sense of what they hear.
These cues can be auditory and/or visual or they can be based on knowledge or previous experience. Auditory cues relate to the spectral and temporal characteristics of the blending signals. Different simultaneous sound sources can be distinguished, for example, if their spectral qualities and intensity characteristics, or if their periodicities are different. Visual cues, depending on visual evidence from the sound sources, can also affect the perception of sound.
Auditory scene analysis is a process in which the auditory system takes the mixture of sound that it derives from a complex natural environment and sorts it into packages of acoustic evidence, each probably arising from a single source of sound. It appears that our auditory system works in two ways, by the use of primitive processes of auditory grouping and by governing the listening process by schemas that incorporate our knowledge of familiar sounds.
The primitive process of grouping seems to employ a strategy of first breaking down the incoming array of energy to perform a large number of separate analyses. These are local to particular moments of time and particular frequency regions in the acoustic spectrum. Each region is described in terms of its intensity, its fluctuation pattern, the direction of frequency transitions in it, an estimate of where the sound is coming from in space and perhaps other features. After these numerous separate analyses have been done, the auditory system has the problem of deciding how to group the results so that each group is derived from the same environmental event or sound source.
The grouping has to be done in two dimensions at the least: across the spectrum (simultaneous integration or organization) and across time (temporal grouping or sequential integration). The former, which can also be referred to as spectral integration or fusion, is concerned with the organization of simultaneous components of the complex spectrum into groups, each arising from a single source. The latter (temporal grouping or sequential organization) follows those components in time and groups them into perceptual streams, each arising from a single source again. Only by putting together the right set of frequency components over time can the identity of the different simultaneous signals be recognized.
The primitive process of grouping works in tandem with schema-based organization, which takes into account past learning and experiences as well as attention, and which is therefore linked to higher order processes. Primitive segregation employs neither past learning nor voluntary attention. The relations it creates tend to be valid clues over wide classes of acoustic events. By contrast, schemas relate to particular classes of sounds. They supplement the general knowledge that is packaged in the innate heuristics by using specific learned knowledge.
A number of auditory phenomena have been related to the grouping of sounds into auditory streams, including in particular those related to speech perception, the perception of the order and other temporal properties of sound sequences, the combining of evidence from the two ears, the detection of patterns embedded in other sounds, the perception of simultaneous “layers” of sounds (e.g., in music), the perceived continuity of sounds through interrupting noise, perceived timbre and rhythm, and the perception of tonal sequences.
Spectral integration is pertinent to the grouping of simultaneous components in a sound mixture, so that they are treated as arising from the same source. The auditory system looks for correlations or correspondences among parts of the spectrum, which would be unlikely to have occurred by chance. Certain types of relations between simultaneous components can be used as clues for grouping them together. The effect of this grouping is to allow global analyses of factors such as pitch, timbre, loudness, and even spatial origin to be performed on a set of sensory evidence coming from the same environmental event.
Many of the factors that favor the grouping of a sequence of auditory inputs are features that define the similarity and continuity of successive sounds. These include fundamental frequency, temporal proximity, shape of spectrum, intensity, and apparent spatial origin. These characteristics affect the sequential aspect of scene analysis, in other words the use of the temporal structure of sound.
Generally, it appears that the stream forming process follows principles analogous to the principle of grouping by proximity. High tones tend to group with other high tones if they are adequately close in time. In the case of continuous sounds it appears that there is a unit forming process that is sensitive to the discontinuities in sound, particularly to sudden rises in intensity, and that creates unit boundaries when such discontinuities occur. Units can occur in different time scales and smaller units can be embedded in larger ones.
In complex tones, where there are many frequency components, the situation is more complicated, as the auditory system estimates the fundamental frequency of the set of harmonics present in the sound in order to determine the pitch. The perceptual grouping is affected by the difference in fundamental frequency (pitch) and/or by the difference in the average frequency of the partials (brightness) in a sound, and these two effects are additive.
A pure tone has a different spectral content than a complex tone; so, even if the pitches of the two sounds are the same, the tones will tend to segregate into different groups from one another. However another type of grouping may take effect: a pure tone may, instead of grouping with the entire complex tone following it, group with one of the frequency components of the latter.
Location in space may be another effective similarity, which influences temporal grouping of tones. Primitive scene analysis tends to group sounds that come from the same point in space and segregate those that come from different places. Frequency separation, rate, and the spatial separation combine to influence segregation. Spatial differences seem to have their strongest effect on segregation when they are combined with other differences between the sounds.
In a complex auditory environment where distracting sounds may come from any direction on the horizontal plane, localization seems to be very important, as disrupting the localization of distracting sound sources can weaken the identity of particular streams.
Timbre is another factor that affects the similarity of tones and hence their grouping into streams. The difficulty is that timbre is not a simple one-dimensional property of sounds. One distinct dimension however is brightness. Bright tones have more of their energy concentrated towards high frequencies than dull tones do, since brightness is measured by the mean frequency obtained when all the frequency components are weighted according to their loudness. Sounds with similar brightness will tend to be assigned to the same stream. Timbre is a quality of sound that can be changed in two ways: first by offering synthetic sound components to the mixture, which will fuse with the existing components; and second by capturing components out of a mixture by offering them better components with which to group.
Generally speaking, the pattern of peaks and valleys in the spectra of sounds affects their grouping. There are, however, two types of spectral similarity: two tones may have their harmonics peaking at exactly the same frequencies, or their corresponding harmonics may be of proportional intensity (if the fundamental frequency of the second tone is double that of the first, then all the peaks in its spectrum will be at double the frequency). Available evidence has shown that both forms of spectral similarity are used in auditory scene analysis to group successive tones.
Continuous sounds seem to hold better as a single stream than discontinuous sounds do. This occurs because the auditory system tends to assume that any sequence that exhibits acoustic continuity has probably arisen from one environmental event.
Competition between different factors results in different organizations; it appears that frequency proximities are competitive and that the system tries to form streams by grouping the elements that bear the greatest resemblance to one another. Because of the competition, an element can be captured out of a sequential grouping by giving it a better sound to group with.
The competition also occurs between different factors that favor grouping. For example in a four tone sequence ABXY if similarity in fundamental frequencies favors the groupings AB and XY, while similarity in spectral peaks favors the grouping AX and BY, then the actual grouping will depend on the relative sizes of the differences.
There is also collaboration as well as competition. If a number of factors all favor the grouping of sounds in the same way, the grouping will be very strong, and the sounds will always be heard as parts of the same stream. The process of collaboration and competition is easy to conceptualize. It is as if each acoustic dimension could vote for a grouping, with the number of votes cast being determined by the degree of similarity with that dimension and by the importance of that dimension. Then streams would be formed, whose elements were grouped by the most votes. Such a voting system is valuable in evaluating a natural environment, in which it is not guaranteed that sounds resembling one another in only one or two ways will always have arisen from the same acoustic source.
Primitive processes of scene analysis are assumed to establish basic groupings amongst the sensory evidence, so that the number and the qualities of the sounds that are ultimately perceived are based on these groupings. These groupings are based on rules which take advantage of fairly constant properties of the acoustic world, such as the fact that most sounds tend to be continuous, to change location slowly and to have components that start and end together. However, auditory organization would not be complete if it ended there. The experiences of the listener are also structured by more refined knowledge of particular classes of signals, such as speech, music, animal sounds, machine noises and other familiar sounds of our environment.
This knowledge is captured in units of mental control called schemas. Each schema incorporates information about a particular regularity in our environment. Regularity can occur at different levels of size and spans of time. So, in our knowledge of language we would have one schema for the sound “a”, another for the word “apple”, one for the grammatical structure of a passive sentence, one for the give and take pattern in a conversation and so on.
It is believed that schemas become active when they detect, in the incoming sense data, the particular data that they deal with. Because many of the patterns that schemas look for extend over time, when part of the evidence is present and the schema is activated, it can prepare the perceptual process for the remainder of the pattern. This process is very important for auditory perception, especially for complex or repeated signals like speech. It can be argued that schemas, in the process of making sense of grouped sounds, occupy significant processing power in the brain. This could be one explanation for the distracting strength of intruding speech, a case where schemas are involuntarily activated to process the incoming signal. Limiting the activation of these schemas, either by affecting the primitive groupings which activate them or by activating other competing schemas that are less "computationally expensive" for the brain, reduces distractions.
There are cases in which primitive grouping processes seem not to be responsible for the perceptual groupings. In these cases schemas select evidence that has not been subdivided by primitive analysis. There are also examples that show another capacity: the ability to regroup evidence that has already been grouped by primitive processes.
Our voluntary attention employs schemas as well. For example, when we are listening carefully for our name being called out among many others in a list we are employing the schema for our name. Anything that is being listened for is part of a schema, and thus whenever attention is accomplishing a task, schemas are participating.
It will be appreciated from the above that the human auditory system is closely attuned to its environment, and unwanted sound or noise has been recognized as a major problem in industrial, office and domestic environments for many years now. Advances in materials technology have provided some solutions. However, the solutions have all addressed the problem in the same way, namely: the sound environment has been improved either by decreasing or by masking noise levels in a controlled space.
Conventional masking systems generally rely on decreasing the signal to noise ratio of distracting sound signals in the environment, by raising the level of the prevailing background sound. A constant component, both in frequency content and amplitude, is introduced into the environment so that peaks in a signal, such as speech, produce a low signal to noise ratio. There is a limitation on the amplitude level of such a steady contribution, defined by the user acceptance: a level of noise that would mask even the higher intruding speech signals would probably be unbearable for prolonged periods. Furthermore this component needs to be wide enough spectrally to cover most possible distracting sounds.
In addition, known masking systems are either systems installed centrally in a space permitting the users of the space very limited or no control over their output, or are self-contained systems with limited inputs, if any, that permit only one user situated adjacent to the masking system control of a small number of system parameters.
Accordingly, it is desirable to provide a more flexible system for, and method of, acoustically improving an environment. Such a system, based on the principles of human auditory perception described above, provides a reactive system capable of inhibiting and/or prohibiting the effective communication of sound that is perceived as noise, by means of an output which is variably dependent on the noise. One feature of such a system includes the ability to provide manual adjustment by one or more users using a simple graphical user interface. These users may be local to such a system or remote from it. Another feature of such a flexible system may include automatic adjustment of parameters once the user initially conditions the system parameters. Adjustment of a large number of parameters of such a system, while perhaps increasing the number of inputs, would correspondingly allow the user to tailor the sound environment of the occupied space to his or her specific preferences.
By way of introduction only, in one embodiment an electronic sound screening system contains a receiver, a converter, an analyser, a processor and a sound generator. Acoustic energy impinges on the receiver and is converted to an electrical signal by the converter. The analyser receives the electrical signal from the receiver, analyzes the electrical signal, and generates data analysis signals in response to the analyzed electrical signal. The processor produces sound signals based on the data analysis signals from the analyser in each of a plurality of frequency bands which correspond to the critical bands of the human auditory system (also known as Bark Scale ranges). The sound generator provides sound based on the sound signals.
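The receiver-converter-analyser-processor-generator chain described above can be sketched in simplified form. The class and function names below, and the toy per-band analysis, are illustrative assumptions, not the patented implementation; the band edges are approximate lower edges of the first Bark-scale critical bands.

```python
from dataclasses import dataclass

# Approximate lower edges (Hz) of the first Bark-scale critical bands.
BARK_EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270]

@dataclass
class AnalysisFrame:
    band_energy: list  # one energy value per critical band

def band_energies(spectrum, bin_hz):
    """Toy analyser: sum FFT magnitude-squared values into critical bands.

    spectrum: one magnitude per FFT bin; bin_hz: bin width in Hz.
    """
    energies = [0.0] * (len(BARK_EDGES) - 1)
    for i, mag in enumerate(spectrum):
        f = i * bin_hz
        for b in range(len(BARK_EDGES) - 1):
            if BARK_EDGES[b] <= f < BARK_EDGES[b + 1]:
                energies[b] += mag * mag
                break
    return energies

def masking_levels(energies, gain=0.5, ceiling=1.0):
    """Toy processor: derive one masking-sound level per critical band,
    so the generated output varies with the analysed input."""
    return [min(ceiling, gain * e) for e in energies]
```

A flat 32-bin spectrum with 31.25 Hz bins, for instance, deposits its lowest four bins into the first (0 to 100 Hz) band; the processor then scales each band's energy into a per-band masking level.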
In another embodiment, the electronic sound screening system contains, in addition to the receiver, the converter, the analyser, the processor and the sound generator, a manually settable controller that provides user signals based on user-selected inputs. In this case, the processor produces sound signals and contains a harmonic brain that forms a harmonic base and system beat. The sound signals are selectable from dependent signals that are set to be dependent upon the received acoustic energy (produced by certain modules within the processor) and independent signals that are set to be independent of the received acoustic energy (produced by other modules within the processor). These modules may, for example, mask the sound functionally and/or harmonically, filter the signals, produce chords, motives and/or arpeggios, generate control signals and/or use prerecorded sounds.
In another embodiment, the sound signals produced by the processor are selectable from processing signals that are generated by direct processing of the data analysis signals, generative signals that are generated algorithmically and are adjusted by data analysis signals or scripted signals that are predetermined by a user and are adjusted by the data analysis signals.
In another embodiment, in addition to the receiver, the converter, the analyser, a processor and the sound generator, the sound screening system contains a local user interface through which a local user enters local user inputs to change a state of the sound screening system and a remote user interface through which a non-local user enters remote user inputs to change the state of the sound screening system. The interface, such as a web browser, allows one or more users to affect characteristics of the sound screening system. For example, users vote on a particular characteristic or parameter of the sound screening system, the votes are given different weights (in accordance with the distance of the user from the sound screening system for instance) and then averaged to produce the final result that determines how the sound screening system behaves. Local users may be, for example, in the immediate vicinity of the sound screening system while remote users may be farther away. Alternatively, local users can be, say, within a few feet while remote users can be, say, more than about ten feet from the sound screening system. Obviously, these distances are merely exemplary.
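The weighted-vote scheme described above can be sketched as a simple weighted average. The function name and the choice of weighting source (for example, deriving weights from each user's distance to the system) are illustrative assumptions.

```python
def weighted_parameter(votes, weights):
    """Combine user votes on a system parameter as a weighted average.

    votes and weights are parallel lists; a weight might, as suggested
    in the text, be larger for users nearer the sound screening system.
    """
    total_weight = sum(weights)
    if total_weight == 0:
        raise ValueError("at least one vote must carry non-zero weight")
    return sum(v * w for v, w in zip(votes, weights)) / total_weight
```

For example, a local user voting 10 with weight 3 and a remote user voting 20 with weight 1 yield a combined parameter of 12.5, biased toward the nearer user.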
In another embodiment, in addition to the receiver, the converter, the analyser, a processor and the sound generator, the sound screening system contains a communication interface through which multiple systems can establish bi-directional communication and exchange signals for synchronizing their sound analysis and response processes and/or for sharing analysis and generative data, thus effectively establishing a sound screening system of larger physical scale.
In another embodiment, the sound screening system employs a physical sound attenuating screen or boundary on which sound sensing and sound emitting components are placed in such a way that they effectively operate primarily on the side of the screen or boundary on which they are positioned and a control system through which a user can select the side of the screen or boundary on which input sound will be sensed and the side of the screen or boundary on which sound will be emitted.
In different embodiments, the sound screening system is operated through computer-executable instructions in any computer readable medium that controls the receiver, the converter, the analyser, a processor, the sound generator and/or the controller.
The foregoing summary has been provided only by way of introduction. Nothing in this section should be taken as a limitation on the following claims, which define the scope of the invention.
The present sound screening system is a highly flexible system using specially designed software architecture containing a number of modules that receive and analyze environmental sound on the one hand and produce sound in real time or near real time on the other. The software architecture and modules provide a platform in which all sound generation subroutines (for easier referencing, all sound producing subroutines—tonal, noise based or otherwise—are referenced as soundsprites) are connected with the rest of the system and to each other. This ensures forward compatibility with soundsprites that might be developed in the future or even soundsprites from independent developers.
Multiple system inputs are also provided. These inputs include user inputs and input analysis data adjusted through mapping. The mapping uses an intercom system that broadcasts specific changing parameters along a particular channel. The channels are received by the various modules within the sound screening system and information is transported along the channels used to control various aspects of the sound screening system. This allows the software architecture and modules to provide a flexible architecture for the sharing of parameters within various parts of the system, to enable, for example, any soundsprite to be responsive to any input analysis data if required, or to any parameter generated from other soundsprites.
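The intercom's broadcast channels behave like a publish/subscribe bus: any module can broadcast a changing parameter on a named channel, and any soundsprite can subscribe to it. The sketch below is a minimal illustration under that assumption, not the actual implementation.

```python
from collections import defaultdict

class Intercom:
    """Minimal broadcast-channel bus, loosely modelled on the Intercom
    described above (illustrative only)."""

    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, channel, callback):
        """Register a module's callback on a named channel."""
        self._listeners[channel].append(callback)

    def broadcast(self, channel, value):
        """Send a changing parameter to every subscriber of the channel."""
        for callback in self._listeners[channel]:
            callback(value)

# Example: a soundsprite reacting to an analysis parameter.
intercom = Intercom()
received = []
intercom.subscribe("input_level", received.append)
intercom.broadcast("input_level", 0.8)
```

This decoupling is what lets any soundsprite respond to any input analysis datum, or to parameters generated by other soundsprites, without direct wiring between modules.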
The system permits both local and remote control. Local control is control effected in the local environs of the sound screening system, for example in a workstation within which the sound screening system is disposed, or within a few feet of the sound screening system. If one or more remote users desire to control the sound screening system, they are permitted weighted voting as to the user settings, commensurate with their distance from the sound screening system and/or other variables.
The sound screening system encompasses a specific communication interface enabling multiple systems to communicate with each other and establish a sound screening system of a larger scale, for example covering floor plans of several hundred square feet.
Furthermore, the sound screening system described in the invention uses multiple sound receiving units, for example microphones, and multiple sound emitting units, for example speakers, which may be distributed in space, or positioned on either side of a sound attenuating screen and permits user control as to which combination of sound receiving and sound emitting sources will be active at any one time.
The sound screening system may contain a physical sound screen which may be a wall or screen that is self-contained or housed within another receptacle, for example, as shown and described in the applications incorporated by reference above.
The microphones 12 receive ambient noise from the surrounding environment and convert such noise into electrical signals for supply to the DSP 14. A spectrogram 17 representing such noise is illustrated in
The DSP 14 serves to analyse the electrical signals supplied from the microphones 12 and in response to such analysed signals to generate sound signals for driving the loudspeakers 16. For this purpose, the DSP 14 employs an algorithm, described below with reference to FIGS. 2 to 32.
The Soundscape Base 108 additionally outputs MIDI signals to a MIDI Synthesizer 110 and audio left/right signals to a Mixer 112. The Mixer 112 receives signals from the MIDI Synthesizer 110, a Preset Manager 114, a Local Area Network (LAN) controller 116, and a LAN communicator 118. The Preset Manager 114 also supplies signals to the Soundscape Base 108, the Analyser 104 and the System Input 102. The Preset Manager 114 receives information from the LAN controller 116, LAN communicator 118, and a Preset Calendar 120. The output of the Mixer 112 is fed to speakers 16 as well as used as feedback to the System Input 102 on the one hand and to the Acoustic Echo Canceller 124 on the other.
The signals between the various modules, including those transmitted using channels on the Intercom 122 as well as between local and remote systems, may be transmitted through wired or wireless communication. For example, the embodiment shown permits synchronized operation of multiple reactive sound systems, which may be in physical proximity to each other or not. The LAN communicator 118 handles the interfacing between the local system and remote systems. Additionally, the present system provides the capability for user tuning over a local area network. The LAN Control 116 handles the data exchange between the local system and a specially built control interface accessible via an Internet browser by any user with access privileges. As above, other communication systems can be used, such as wireless systems using Bluetooth protocols.
Internally, as shown, only some of the modules can transmit or receive over the Intercom 122. More specifically, the System Input 102, the MIDI Synthesizer 110 and the Mixer 112 are not adjusted by the changing parameters and thus do not make use of the Intercom 122. Meanwhile, the Analyser 104 and Analyser History 106 broadcast various parameters through the Intercom 122 but do not receive parameters to generate the analyzed or stored signals.
The Preset Manager 114, the Preset Calendar 120, the LAN controller 116 and LAN communicator 118, as well as some of the soundsprites in the Soundscape Base 108, as shown in
As
The Soundscape Base 108 is similar to the Tonal Engine and Masker of the applications incorporated by reference, but has a number of different types of soundsprites. The Soundscape Base 108 contains soundsprites that are broken up into three categories: electroacoustic soundsprites 130 that are generated by direct processing of the sensed input, scripted soundsprites 140 that are predetermined note sequences or audio files conditioned by the sensed input, and generative soundsprites 150 that are generated algorithmically or conditioned by the sensed input. The electroacoustic soundsprites 130 produce sound based on the direct processing of the analyzed signals from the Analyser 104 and/or the audio signal from the System Input 102; the remaining soundsprites produce sound generatively by employing user input, but can have their output adjusted or conditioned by the analysed signals from the Analyser 104. Each of the soundsprites is able to communicate using the Intercom 122, with all of the soundsprites being able to broadcast and receive parameters to and from the intercom. Similarly, each of the soundsprites is able to be affected by the Preset Manager.
Each of the generative soundsprites 150 produces MIDI signals that are transmitted to the Mixer 112 through the MIDI Synthesizer 110. Each of the electroacoustic soundsprites 130 produces audio signals that are transmitted to the Mixer 112 directly, without going through the MIDI Synthesizer 110, or produces such audio signals in addition to MIDI signals that are transmitted to the Mixer 112 through the MIDI Synthesizer 110. The scripted soundsprites 140 produce audio signals, but can also be programmed to produce pre-described MIDI sequences transmitted to the Mixer 112 through the MIDI Synthesizer 110.
In addition to the various soundsprites, the Soundscape Base 108 also contains a Harmonic Brain 170, an Envelope 172 and Synth Effects 174. The Harmonic Brain 170 provides the beat, the harmonic base, and the harmonic settings to those soundsprites that use such information in generating an output signal. The Envelope 172 provides streams of numerical values that change in a pre-described manner, as input by the user, over a length of time, also input by the user. The Synth FX 174 soundsprite sets the preset of the MIDI Synthesizer 110 effects channel, which is used as the global effects settings for all the outputs of the MIDI Synth 110.
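The Envelope's stream of values can be sketched as follows. A linear shape is assumed here purely for illustration; Envelope 172 supports whatever shapes the user describes.

```python
def envelope(start, end, steps):
    """Generate a stream of values moving from start to end over the
    given number of steps (linear shape assumed for this sketch)."""
    if steps < 2:
        return [end]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]
```

A soundsprite could, for instance, consume such a stream to fade a parameter from 0 to 1 over a user-set duration.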
The electroacoustic soundsprites 130 include a functional masker 132, a harmonic masker 134, and a solid filter 136. The scripted soundsprites 140 include a soundfile 144. The generative soundsprites 150 include Chordal 152, Arpeggiation 154, Motive 156, Control 158, and Clouds 160.
The System Input 400 will now be described in more detail, with reference to
The multiplied audio signal is then supplied to an input of Noise Gate 404. The Noise Gate 404 acts as a noise filter, supplying the input signal to an output thereof only if it receives a signal higher than a user-defined noise threshold (again referred to as a user input, or UI). This threshold is supplied to the Noise Gate 404 from the Preset Manager 114. The signal from the Noise Gate 404 then is provided to an input of a Duck Control sub-module 406. The Duck Control sub-module 406, essentially acts as an amplitude feedback mechanism that reduces the level of the signal through it when the system output level rises and the sub-module is activated. As shown, the Duck Control sub-module 406 receives the system output signal from the Mixer 112 and is activated by a user input from the Preset Manager 114. The Duck Control sub-module 406 has settings for the amount by which the input signal level is reduced, how quickly the input signal level is reduced (a lower gradient results in lower output), and the time period over which the output level of the Duck Control sub-module 406 is smoothed.
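The Noise Gate and Duck Control behaviours can be sketched as simple level operations. The parameter names (amount, knee) and the exact reduction curve are illustrative assumptions; the sub-modules' actual settings are described above only qualitatively.

```python
def noise_gate(level, threshold):
    """Pass the signal level through only when it exceeds the
    user-defined noise threshold; otherwise output silence."""
    return level if level > threshold else 0.0

def duck(input_level, system_output_level, amount=0.5, knee=0.2):
    """Reduce the input level as the system's own output rises, so the
    system is less prone to reacting to the sound it generates itself.

    amount: maximum fraction by which the input is reduced.
    knee: output level at which the full reduction is reached
    (both names are illustrative stand-ins for the settings in the text).
    """
    reduction = min(1.0, system_output_level / knee) * amount
    return input_level * (1.0 - reduction)
```

With no system output the input passes unchanged; as the output rises toward the knee, the input is progressively attenuated up to the configured amount.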
The signal from the Duck Control sub-module 406 is then passed on to an FFT sub-module 408. The FFT sub-module 408 takes the analog signal input thereto and produces a digital output signal of 256 floating-point values representing an FFT frame for a frequency range of 0 to 11,025 Hz. The FFT vectors represent signal strength in evenly distributed bands 31.25 Hz wide when the FFT analysis is performed at a sampling rate of 32 kHz with full FFT vectors of 1024 values in length. Of course, other settings can also be used. No user input is supplied to the FFT sub-module 408. The digital signal from the FFT sub-module 408 is then supplied to a Compressor sub-module 410. The Compressor sub-module 410 acts as an automatic gain control that supplies the input digital signal as the output signal from the Compressor sub-module 410 when the input signal is lower than a compressor threshold level and multiplies the input digital signal by a factor smaller than 1 (i.e. reduces the input signal) when the input signal is higher than the threshold level to provide the output signal. The compressor threshold level of the Compressor sub-module 410 is supplied as a user input from the Preset Manager 114. If the multiplication factor is set to zero, the level of the output signal is effectively limited to the compressor threshold level. The output signal from the Compressor sub-module 410 is the output signal from the System Input 400. Thus, an analog signal is supplied to an input of the System Input 400 and a digital signal is supplied from an output of the System Input 400.
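The compressor's behaviour can be sketched as follows. Note one interpretive assumption: since the text states that a multiplication factor of zero effectively limits the output to the compressor threshold level, the sketch scales only the portion of the signal above the threshold, which reproduces that limiting behaviour.

```python
# Illustrative sketch of the automatic-gain-control compressor described
# above. Reading assumed from the text: the excess above the threshold is
# scaled by a factor < 1, so a factor of 0 hard-limits at the threshold.

def compress(x, threshold, factor):
    """Pass signals at or below the threshold unchanged; scale the
    excess above the threshold by the given factor."""
    if x <= threshold:
        return x
    return threshold + (x - threshold) * factor
```

With a factor of 0.5 an input of 2.0 against a threshold of 1.0 yields 1.5; with a factor of 0 the same input is limited to exactly the threshold, matching the limiting case described in the text.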
The digital FFT output signal from the System Input 400 is supplied to the Analyser 500, along with configuration parameters from the Preset Manager 114 and chords from the Harmonic Masker 134, as shown in
The output from the A-weighting sub-module 502 is then supplied to a Preset Level Input Treatment sub-module 504, which contains sub-sub-modules that are similar to some of the modules in the System Input 400. The Preset Level Input Treatment sub-module 504 contains a Gain Control sub-sub-module 504a, a Noise Gate sub-sub-module 504b, and a Compressor sub-sub-module 504c. Each of these sub-sub-modules has similar user input parameters supplied from the Preset Manager 114 as those supplied to the corresponding sub-modules in the System Input 400; a gain multiplier is supplied to the Gain Control sub-sub-module 504a, a noise threshold is supplied to the Noise Gate sub-sub-module 504b, and a compressor threshold and compressor multiplier are supplied to the Compressor sub-sub-module 504c. The user inputs supplied to the sub-sub-modules are saved as Sound/Response Parameters in the Preset Manager 114.
The FFT data from the A-weighting sub-module 502 is then supplied to a Critical/Preset Band Analyser sub-module 506 and a Harmonic Band Analyser sub-module 508. The Critical/Preset Band Analyser sub-module 506 accepts the incoming FFT vectors representing A-weighted signal strength in 256 evenly distributed bands and aggregates the spectrum values into 25 critical bands on the one hand and into 4 preset selected frequency bands on the other hand, using a Root Mean Square function. The frequency boundaries of the 25 critical bands are fixed and dictated by auditory theory. Table 1 shows the frequency boundaries used in this embodiment, but different definitions of the critical bands, following different auditory modeling principles, can also be used. The frequency boundaries of the 4 preset selected frequency bands are variable upon user control and are advantageously selected such that they provide useful analysis data for the particular sound environment in which the system might be installed. The preset selected bands are set to contain a combination of entire critical bands, from a single critical band to any combination of all 25 critical bands. Although only four preset selected bands are indicated in
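The RMS aggregation of evenly spaced FFT bins into a smaller number of bands can be sketched as below. The band edges used here are illustrative placeholders; the actual system uses the 25 critical-band boundaries of Table 1 and 4 user-configurable preset bands.

```python
# Illustrative sketch of aggregating FFT magnitude bins into bands with
# a Root Mean Square function, as the Critical/Preset Band Analyser
# sub-module 506 does. Band edges here are arbitrary examples.
import math

def band_rms(fft_bins, band_edges):
    """band_edges[i] is the first bin index of band i; the last edge
    closes the final band. Returns one RMS value per band."""
    bands = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        chunk = fft_bins[lo:hi]
        bands.append(math.sqrt(sum(v * v for v in chunk) / len(chunk)))
    return bands

# 256 equal-magnitude bins folded into 4 example bands.
bins = [1.0] * 256
rms = band_rms(bins, [0, 32, 64, 128, 256])
# rms → [1.0, 1.0, 1.0, 1.0], since every bin has magnitude 1
```

Because each preset band is defined as a contiguous run of entire critical bands, the same function serves both aggregations with different edge lists.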
The Critical/Preset Band Analyser sub-module 506 receives detection parameters from the Preset Manager 114. These detection parameters include definitions of the four frequency ranges for the preset selected frequency bands.
The 25 critical band RMS values produced by the Critical/Preset Band Analyser 506 are passed into the Functional Masker 132 and the Peak Detector 510. This is to say that the Critical/Preset Band Analyser sub-module 506 supplies the RMS values of all of the critical bands (lists of 25 members) to the Functional Masker 132. The 4 preset band RMS values are passed to the Peak Detector 510 and are also broadcast over the Intercom 122. In addition, the RMS values for one of the preset bands are supplied to the Analyzer History 106 (relabeled 600 in
The Peak Detector sub-module 510 performs windowed peak detection on each of the critical bands and the preset selected bands independently. For each band, a history of signal level is maintained, and this history is analysed by a windowing function. The start of a peak is categorised by a signal contour having a high gradient and then leveling off; the end of a peak is categorised by the signal level dropping to a proportion of its value at the start of the peak.
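The per-band peak categorisation described above — a peak starting on a steep rise and ending when the level drops to a proportion of its starting value — can be sketched as follows. The trigger and release values are illustrative assumptions; in the system they are detection parameters from the Preset Manager 114.

```python
# Illustrative sketch of the windowed peak detection performed per band
# by the Peak Detector sub-module 510. Parameter values are assumed.

def detect_peaks(history, trigger_gradient, release_ratio):
    """A peak starts when the level rises more steeply than
    trigger_gradient, and ends when the level falls below
    release_ratio times the level at the peak start.
    Returns (start_index, end_index) pairs."""
    events = []
    start = None
    start_level = 0.0
    for i in range(1, len(history)):
        gradient = history[i] - history[i - 1]
        if start is None and gradient > trigger_gradient:
            start, start_level = i, history[i]
        elif start is not None and history[i] < release_ratio * start_level:
            events.append((start, i))
            start = None
    return events

levels = [0.1, 0.1, 0.9, 1.0, 0.95, 0.3, 0.1]
peaks = detect_peaks(levels, trigger_gradient=0.5, release_ratio=0.5)
# peaks → [(2, 5)]: onset at the steep rise, release when the level
# drops below half of the start level
```

In the actual sub-module this runs independently for each of the 25 critical bands and the 4 preset bands over a maintained signal-level history.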
The Peak Detector sub-module 510 receives detection parameters from the Preset Manager 114. These detection parameters include definitions and parameters for the peak detection, in addition to a parameter defining the duration of a peak event after it has been detected.
The Peak Detector 510 produces Critical Band Peaks and Preset Band Peaks which are broadcast over the Intercom 122. Also Peaks for one of the Preset Bands are passed to the Analyser History Module 106.
The Harmonic Band Analyser sub-module 508, which also receives the FFT data from the Preset Level Input Treatment sub-module 504, is supplied with information from the Harmonic Masker 134. The Harmonic Masker 134 provides the band center frequencies that correspond to a chord generated by the Harmonic Masker 134. The Harmonic Band Analyser sub-module 508 supplies the RMS values of the harmonic bands determined by the center frequencies to the Harmonic Masker 134. Again, although only six such bands are indicated in
The Analyser History 600 of
The values calculated in the Analyser History 600 are characteristic of the acoustic environment in which an electronic sound screening system is installed. For an appropriately selected preset band, the combination of these values provides a reasonably good signature of the acoustic environment over a period of 24 hrs. This can be a very useful tool for the installation engineer, the acoustic consultant or the sound designer when designing the response of the electronic sound screening system for any particular space; they can recognise the energy and peak patterns characteristic of the space and can design the system output to work with these patterns throughout the day.
The outputs of the Analyser History 600 (each of the RMS averages and peak counts) are broadcast over assigned intercom channels of the Intercom 122.
The outputs from the Analyser 500 are supplied to the Soundscape Base 108. The Soundscape Base 108 generates audio and MIDI outputs using the outputs from the Analyser 500, information received from the Intercom 122 and the Preset Manager 114, and internally generated information. The Soundscape Base 108 contains a Harmonic Brain 700, which, as shown in
The Critical Band RMS from the Critical/Preset Band Analyser sub-module 506 of the Analyser 500 is supplied to the Functional Masker 800, as shown in
The Harmonic Masker 900, shown in
The Voice Group Selector sub-module 904 routes the received frequencies together with the Harmonic Bands RMS values received from the Analyser 500 to either of two VoiceGroups A and B contained in Voice Group sub-modules 906a and 906b. The Voice Group Selector sub-module 904 contains switches 904a and 904b that alternate every time a new list of frequencies is received. Each VoiceGroup contains 6 Voicesets, of which a number (usually between 4 and 6) are activated. Each Voiceset corresponds to a note (frequency) produced in the Create Chord sub-module 902.
An enhanced view of one of the Voicesets 1000 is shown in
The resonant filter voice sub-module 1002 produces a filtered noise output. As in the Functional Masker 800, each voice generates two noise outputs: one with a smoothing envelope and one without. In the resonant filter voice sub-module 1002, a noise generator supplies noise to a resonant filter centered on the band. One of the outputs of the resonant filter is provided to a voice envelope while the other is provided directly, without being subjected to the voice envelope, to an amplifier for adjusting the signal levels. The filter gain, steepness, minimum and maximum band level outputs, enveloped and non-enveloped signal levels, and enveloped signal time are controlled by the user.
The sample player voice sub-module 1004 provides a voice that is based on one or more recorded samples. In the sample player voice sub-module 1004, the center frequency and harmonic RMS are supplied to a buffer player that produces output sound by transposing the recorded sample to the supplied center frequency and regulating its output level according to the received harmonic RMS. The transposition of the recorded sample is effected by adjusting the duration of the recorded sample based on the ratio of the center frequency of the harmonic band to the nominal frequency of the recorded sample. As in the resonant filter voice sub-module 1002, one of the outputs from the buffer player is provided to a voice envelope while the other is provided directly, without being subjected to the voice envelope, to an amplifier for adjusting the signal levels. The sample file, minimum and maximum band level outputs, enveloped and non-enveloped signal levels, and enveloped signal time are controlled by the user.
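The duration-based transposition described above amounts to rescaling the playback time by the frequency ratio, which can be made concrete in a one-line sketch (illustrative only; function and variable names are assumed):

```python
# Illustrative sketch of sample transposition by duration adjustment, as
# performed in the buffer player of the sample player voice sub-module.

def transposed_duration(sample_duration, nominal_freq, target_freq):
    """Transpose a recorded sample to target_freq by rescaling its
    playback duration: playing it faster raises the pitch, so the
    duration shrinks by the ratio of nominal to target frequency."""
    return sample_duration * nominal_freq / target_freq

# A 2-second sample recorded at a nominal 220 Hz, transposed up an
# octave to a 440 Hz band center, plays back in 1 second.
d = transposed_duration(2.0, 220.0, 440.0)
# d → 1.0
```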
The MIDI masker voice sub-module 1006 produces control signals for instructing the operation of the MIDI Synthesizer 110. The center frequency and harmonic RMS are supplied to a MIDI note generator, as are a user-supplied MIDI voice threshold, an enveloped signal level and an enveloped signal time. The MIDI masker voice sub-module 1006 sends a MIDI instruction to activate a note in any of the harmonic bands when the harmonic RMS exceeds the MIDI voice threshold in that particular band. The MIDI masker voice sub-module 1006 also sends MIDI instructions to regulate the output level of the MIDI voice using the corresponding harmonic RMS. The MIDI instructions for the regulation of the MIDI voice output level are limited to several (for example, 10) instructions per second, in order to limit the number of MIDI instructions per second received by the MIDI Synthesizer 110.
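The rate limiting of level-regulation instructions can be sketched with a minimum-interval gate. This is an assumed mechanism consistent with the stated goal of capping instructions per second; the actual implementation is not specified in the text.

```python
# Illustrative sketch of throttling MIDI level-update instructions to at
# most max_per_second, as described for the MIDI masker voice.

class RateLimiter:
    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last_sent = None

    def allow(self, now):
        """Return True (and record the send time) if enough time has
        passed since the last accepted instruction."""
        if self.last_sent is None or now - self.last_sent >= self.min_interval:
            self.last_sent = now
            return True
        return False

limiter = RateLimiter(10)  # at most 10 instructions per second
accepted = [limiter.allow(t) for t in (0.0, 0.05, 0.15, 0.18, 0.30)]
# accepted → [True, False, True, False, True]
```

Instructions arriving sooner than 100 ms after the last accepted one are simply dropped; the note-activation instructions themselves are not subject to this limit.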
The outputs of the resonant filter voice sub-module 1002 and the sample player voice sub-module 1004, as shown in
Turning now to
A view of one of the Arpeggiation and Chordal soundsprites 1100 is shown in
Meanwhile the global beat (gbeat) of the system is supplied to a Rhythmic Pattern Generator sub-module 1106. The Rhythmic Pattern Generator sub-module 1106 is supplied with user inputs so that a rhythmic pattern list is formed comprising 1 and 0 values, with one value generated for every beat. The onset for a note is produced whenever a non-zero value is encountered and the duration of the note is calculated by measuring the time between the current and the next non-zero values, or is used as supplied by the user settings. The onset of the note is transmitted to the Pitch Class filter sub-module 1108 and the duration of the note is passed to the Note Event Generator sub-module 1114.
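The note-duration rule of the Rhythmic Pattern Generator sub-module 1106 — a note lasts from one non-zero pattern value to the next — can be sketched as below. The handling of the final note (lasting to the end of the pattern) is an assumption, since the text also allows user-supplied durations.

```python
# Illustrative sketch of turning a rhythmic pattern of 1/0 values (one
# per beat) into note onsets and durations, as in the Rhythmic Pattern
# Generator sub-module 1106. The final-note rule is assumed.

def pattern_to_notes(pattern, beat_duration):
    """Return (onset_beat, duration) pairs: an onset occurs at every
    non-zero value, and each note lasts until the next non-zero value
    (the last note lasts to the end of the pattern)."""
    onsets = [i for i, v in enumerate(pattern) if v]
    notes = []
    for j, i in enumerate(onsets):
        nxt = onsets[j + 1] if j + 1 < len(onsets) else len(pattern)
        notes.append((i, (nxt - i) * beat_duration))
    return notes

notes = pattern_to_notes([1, 0, 0, 1, 1, 0], beat_duration=0.5)
# notes → [(0, 1.5), (3, 0.5), (4, 1.0)]
```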
The Pitch Class filter sub-module 1108 receives the Harmonic Base from the Harmonic Brain 170 and user input to determine on which pitchclasses the current soundsprite is activated. If the Harmonic Base pitchclass corresponds to one of the selected pitchclasses, the Pitch Class filter sub-module 1108 allows the Onset received from the Rhythmic Pattern Generator sub-module 1106 to pass through to the Pitch Generator 1104.
The Pitch Generator sub-module 1104 receives the chord list from the Chord Generator sub-module 1102 and the onset of the chord from the Pitch Class filter sub-module 1108 and provides the pitch and the onset as outputs. The Pitch Generator sub-module 1104 is specific to each type of soundsprite employed.
The Pitch Generator sub-module 1104 of the Arpeggiation Soundsprite 154 stretches the Chord received from the Chord Generator 1102 to the whole midi-pitch spectrum and then outputs the pitches selected and the corresponding note onsets. The pitches and note onsets are output so that, at every onset received from the Pitch Class Filter sub-module 1108, a new note of the same Arpeggiation chord is onset.
The Pitch Generator sub-module 1104 of the Chordal SoundSprite 152 transposes the Chord received from the Chord Generator 1102 to the octave band selected by the user and then outputs the pitches selected and the corresponding note onsets. The pitches and note onsets are output so that, at every onset received from the Pitch Class Filter sub-module 1108, all the notes belonging to one chord are onset at the same time.
The Pitch Generator sub-module 1104 outputs the pitch to a Pitch Range Filter sub-module 1110, which filters the received pitches so that any pitch that is output is within the range set by the minimum and maximum pitch settings set by the user. The pitches that pass through the Pitch Range Filter sub-module 1110 are then supplied to the Velocity Generator sub-module 1112.
The Velocity Generator sub-module 1112 derives the velocity of the note from the onset received from the Pitch Generator sub-module 1104, the pitch received from the Pitch Range Filter sub-module 1110 and the settings set by the user, and supplies the pitch and the velocity to the Note Event Generator 1114.
The Note Event Generator sub-module 1114 receives the pitch, the velocity, the duration of the note and the supplied user settings and creates note event instructions, which are sent to the MIDI Synthesizer 110.
The Intercom sub-module 1120 operates within the soundsprite 1100 to route any of the available parameters on the Intercom receive channels to any of the generative parameters of the soundsprite that are otherwise set by user settings. The generated parameters within the soundsprite 1100 can then in turn be transmitted over any of the Intercom broadcast channels dedicated to this particular soundsprite.
The Motive soundsprite 156 is similar to the motive voice in the applications incorporated by reference above. Thus, the Motive soundsprite 156 is triggered by prominent sound events in the acoustical environment. An embodiment of the Motive soundsprite 1200 will now be described with reference to
The Pitch Class filter sub-module 1208 performs the same function as the Pitch Class filter sub-module 1108 described above and outputs the onset to the Pitch Generator 1204.
The Pitch Generator sub-module 1204 receives the onset of a note from the Pitch Class filter sub-module 1208 and provides the pitch and the onset as outputs, following user-set parameters that regulate the selection of pitches. The user settings are applied as interval probability weightings that describe the probability of a certain pitch being selected in relation to its tonal distance from the last pitch selected. The user settings applied also include settings of centre pitch and spread, maximum number of small intervals, maximum number of big intervals, maximum number of intervals in one direction and maximum sum of a row in one direction. Within the Pitch Generator sub-module 1204, intervals bigger than or equal to a fifth are considered big intervals and intervals smaller than a fifth are considered small intervals.
The Pitch Generator sub-module 1204 outputs the note pitch to a Harmonic Treatment sub-module 1216 which also receives the Harmonic Base and Harmonic Settings and user settings. The user settings define any of three states of harmonic correction, namely ‘no correction’, ‘harmonic correction’ and ‘snap to chord’. In the case of ‘harmonic correction’ or ‘snap to chord’ user settings also define the harmonic settings to be used and in the case of ‘snap to chord’ they additionally define the minimum and maximum number of notes to snap to in a chord.
When the Harmonic Treatment sub-module 1216 is set to ‘snap to chord’, a chord is created on each new Harmonic Base received from the Harmonic Brain 170, which is used as a grid for adjusting the pitchclasses. For example, in case a ‘major triad’ is selected as the current chord, each pitchclass running through the Harmonic Treatment sub-module 1216 will snap to this chord by being aligned to its closest pitchclass contained in the chord.
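The snap-to-chord alignment can be sketched as a nearest-pitchclass lookup on the 12-semitone circle. The circular distance measure is an assumption consistent with "closest pitchclass"; ties here resolve to the earlier chord member.

```python
# Illustrative sketch of the 'snap to chord' grid of the Harmonic
# Treatment sub-module 1216: align a pitchclass to the nearest
# pitchclass in the current chord.

def snap_to_chord(pitchclass, chord):
    """Return the chord pitchclass (0-11) closest to the input,
    measuring distance around the 12-semitone circle."""
    def circular_distance(a, b):
        d = abs(a - b) % 12
        return min(d, 12 - d)
    return min(chord, key=lambda c: circular_distance(pitchclass, c))

major_triad = [0, 4, 7]  # e.g. C major: C, E, G
snapped = snap_to_chord(2, major_triad)   # D is equidistant; snaps to C
# snapped → 0
```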
When the Harmonic Treatment sub-module 1216 is set to ‘harmonic correction’, it is determined how pitchclasses should be altered according to the current harmonic settings. For this setting, the interval probability weightings settings are treated as likelihood percentage values for a specific pitch to pass through. For example, in case the value at table address ‘0’ is ‘100’, pitchclass ‘0’ (midi-pitches 12, 24 etc.) will always pass unaltered. In case the value is ‘0’, pitchclass ‘0’ will never pass. In case it is ‘50’, pitchclass ‘0’ will pass half of the time on average. In case the currently suggested pitch is higher than the last note and did not pass through the first time, its pitch is increased by 1 and the new pitch is tried recursively for a maximum of 12 times, after which it is abandoned.
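The probabilistic pass-through with upward retry can be sketched as below. The sketch covers only the upward case described in the text (pitch higher than the last note); `None` standing for an abandoned pitch is an assumption.

```python
# Illustrative sketch of 'harmonic correction': per-pitchclass weights
# (0-100) act as pass-through likelihoods; a rejected pitch is raised
# by one semitone and retried, up to 12 attempts. Upward case only.
import random

def harmonic_correction(pitch, weights, rng, max_tries=12):
    """Return the first pitch whose pitchclass passes the weighted
    draw, raising the pitch by 1 on each rejection; None if abandoned."""
    for _ in range(max_tries):
        if rng.uniform(0, 100) < weights[pitch % 12]:
            return pitch
        pitch += 1
    return None

weights = [100] + [0] * 11   # only pitchclass 0 ever passes
rng = random.Random(1)
corrected = harmonic_correction(50, weights, rng)
# corrected → 60: pitchclasses 2..11 are all rejected, and the search
# climbs to the next midi-pitch with pitchclass 0
```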
The Velocity Generator sub-module 1212 receives the Pitch from the Harmonic Treatment sub-module 1216, the Onset from the Pitch Generator 1204 and the settings supplied by the user, and derives the velocity of the note, which is output to the Note Event Generator 1214 together with the Pitch of the note.
The Note Event Generator sub-module 1214 receives the pitch, the velocity, the duration of the note and the supplied user settings and creates note event instructions, which are sent to the MIDI Synthesizer 110.
The Intercom sub-module 1220 operates within the soundsprite 1200 in a similar fashion to that described above for the soundsprite 1100.
Turning now to
The Clouds soundsprite 160 creates note events independent of the global beat of the system (gbeat) and the number of beats per minute (bpm) settings from the Harmonic Brain 170.
The Cloud Voice Generator sub-module 1304 accepts user settings and uses an internal mechanism to generate Pitch, Onset and Duration. The user input interface (also called Graphical User Interface or GUI) for the Cloud Voice Generator sub-module 1304 includes a multi-slider object on which different shapes may be drawn which are then interpreted as the density of events between the minimum and maximum time between note events (also called attacks). User settings also define the minimum and maximum times between note events and pitch related information, including center pitch, deviation and minimum and maximum pitch. The generated pitches are passed to a Harmonic Treatment sub-module 1316, which functions as described above for the Harmonic Treatment sub-module 1216 and outputs pitch values to a Velocity Generator sub-module 1312. The Velocity Generator sub-module 1312, the Note Event Generator sub-module 1314 and the Intercom sub-module 1320 also have the same functionality as described earlier.
Turning now to
The Control soundsprite 158 is used to create textures rather than pitches. Data is transmitted to the Control soundsprite 1400 on the Intercom 1420 and from the Harmonic Brain 170.
The Control Voice Generator 1404 creates data for notes of random duration within the range specified by the user with minimum and maximum durations of note events. Between the created notes are pauses whose durations lie between the minimum and maximum values set by the user settings. The Control Voice Generator 1404 outputs a pitch to the Harmonic Displacement sub-module 1416, which uses the Harmonic Base provided by the Harmonic Brain 170 and offsets/transposes this by the amount set by the user settings. The Note Event Generator sub-module 1414 and the Intercom sub-module 1420 operate in the same fashion as described above.
The Soundfile soundsprite 144 plays sound files in AIF, WAV or MP3 format, for example, in controlled loops and thus can be directly applied to the Mixer 112 for application to the speakers or other device that transforms the signals into acoustic energy. The sound files may also be stored and/or transmitted in some other comparable format set by the user or adjusted as desired for the particular module or device into which the signals from the Soundfile soundsprite 144 are input. The output of the Soundfile soundsprite 144 can be conditioned using the Analyser 104 and other data received over the Intercom 122.
The solid filter 136 sends audio signals routed to it through an 8-band resonant filter bank. Of course, the number of filters may be altered as desired. The frequencies of the filter bands can be set by either choosing one or more particular pitches from a list of available pitches via user selection on the display or by receiving one or more external pitches through the Intercom 122.
The Intercom 122 will now be described in more detail with reference to
All user parameters, which are set to define the overall response of the algorithm, are stored in presets. These presets can be recalled as required. The loading/saving of parameters from/to preset files is handled by the Preset Manager 114.
In a specific example of parameter setup and sharing, shown in
As shown in
In one example, shown in
The procedure starts by defining a particular frequency band in the Analyser 104. As shown in the uppermost window on the right hand side of the Analyser window in
Next, RMS_A is received and connected to General Velocity. To accomplish this, the user goes to the Arpeggio Generation screen in
To connect the parameter received on the receive Channel of the Intercom Receiver (RMS_A) to the General Velocity parameter of the Arpeggio Soundsprite 154, the user next chooses ‘generalvel’ in the ‘connect to parameter’ drop down menu in the same top section, below the intercom receive selector. The various parameters available for linking are shown in
The linkage between RMS_A and Volume is more clearly shown in
The connections established through the Intercom between the available parameters of the sound screening system 100 are shown in
The GUI is shown in
The main routine section permits selection of the system input, the Analyser, the Analyser History, the Soundscape Base, and the Mixer. The soundsprites section permits selection of the functional and harmonic maskers, various filters, one or more soundfile soundsprites, Chordal, Arpeggiation, Motive, Control, and Clouds. The controls section permits selection of the envelopes and synthesis effects (named ‘Synth FX’), while the utilities section permits selection of a preset calendar that permits automatic activation of one or more presets, and a recorder that records information as it is entered into the GUI to create a new preset.
As above,
The portion of the Analyser that concerns the main Analysis parameters regarding critical bands and peaks will now be described. In the peak section there are shown peak detection trim and peak event sub-sections. These sub-sections contain numerical and bar formats of the window width 1910 employed in the peak detection process, the trigger height 1912, the release amount 1914, the decay/sample time 1916, and the minimum peak duration 1918 used to generate an event, respectively. These parameters affect the critical band peak analysis described above. The detected Peaks are shown in the bar graph on the right of the peak portion. This graph contains 25 vertical sliders, each one corresponding to a critical band. When a peak is detected the slider of the corresponding critical band rises in the graph to a height that corresponds to the energy of the detected peak.
In the portion of the Analyser on the right, user parameters that affect the preset-defined bands are input. A bar graph of the instantaneous output of all of the critical bands is formed above the bars showing the ranges of the four selected RMS bands. The x-axis of the bar graph is frequency and the y-axis is amplitude of the instantaneous signal within each critical band. It should be noted that the x-axis has a resolution of 25, matching the number of the critical bands employed in the analysis. The definition of the preset bands for the calculation of the preset band RMS values is set by inputs 1920, 1922, 1924 and 1926, which are applied to the bars marked ‘A’, ‘B’, ‘C’ and ‘D’ for the four available preset bands. The user can set the range for each band by adjusting the slider or indicating the low band (starting band) and number of bands in each RMS selection. The corresponding frequencies in Hz are also shown. To the right of the numerical information regarding the RMS band ranges, a history of the values of each of the RMS bands is graphically shown for a desired time period, as is a graph of the instantaneous values of the RMS bands situated below the RMS histories. The RMS values of the harmonic bands based on the center frequencies supplied from the Harmonic Masker 134 are also supplied below the RMS band ranges. The sound screening system may produce a particular output based on the shape of the instantaneous peak spectrum and/or RMS history spectrum shown in the Analyser window. The parameters used for the analysis can be customised for specific types of acoustic environments where the sound screening system is installed, or certain times of the day that the system is in use. The configuration file containing the set parameters can be recalled independently of the sound/response preset, and the results of the performed analysis may considerably change the overall response of the system, even if the sound/response preset remains unchanged.
The Analyser History window, shown in
The Soundscape Base window, shown in
The windows containing the settings for the Global Harmonic Progression 2110 and Masterchords 2212 which is one of the five available chord rules used for chord generation are shown in
The Functional Masker window shown in
The Harmonic Masker window shown in
The Motive Soundsprite 156 is shown in
The Clouds Soundsprite 160 is shown in
The Control Soundsprite 158 is shown in
The Soundfile Soundsprite 144 is shown in
The Solid Filter Soundsprite 136 is shown in
The Envelopes soundsprite of the main control panel of
The GUI for the Synth Effects Soundsprite 174 is shown in
The Mixer window shown in
The Preset Calendar window of
In
More specifically, as shown, if N users of distance Ri (for the ith user) from the sound screen are logged into the system and vote on a particular characteristic of the sound screening system (such as volume from the sound screening system), then the value of the characteristic is:
In other embodiments, the directionality of the users as well as distance may be taken into account when determining the particular characteristic. Although only about 20 feet is illustrated as the range over which the user can have a vote, this range is only exemplary. Also, other weighting schemes may be used, such as a scheme that takes into account the distance differently (e.g. 1/R), takes into account other user characteristics, and/or does not take into account distance at all. For example, a particular user may have an enhanced weighting function because he or she has seniority or is disposed in a location that is affected by sounds from the sound screening system to a larger extent than other locations of the same relative distance from the sound screen.
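Since the specific voting formula is not reproduced here, a distance-weighted average using the 1/R weighting mentioned above serves as an illustrative sketch; the function name and the choice of weighting are assumptions.

```python
# Illustrative sketch of combining N users' votes on a characteristic
# (e.g. volume) into one value, weighting each vote by 1/R where R is
# the user's distance from the sound screen. The exact formula used by
# the system is not specified here; this is one plausible scheme.

def weighted_vote(votes, distances, weight=lambda r: 1.0 / r):
    """Weighted average of votes, with closer users counting more."""
    weights = [weight(r) for r in distances]
    return sum(w * v for w, v in zip(weights, votes)) / sum(weights)

# Two users: one at 5 ft voting for volume 80, one at 20 ft voting 40.
volume = weighted_vote([80.0, 40.0], [5.0, 20.0])
# volume → 72.0: the nearer user's vote dominates
```

Alternative schemes (directionality, seniority, ignoring distance) amount to swapping in a different `weight` function.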
The physical layout of one embodiment of the sound screening system as well as communication between the user and the sound screening system(s) will now be described in more detail.
The sound screening system also employs a physical sound attenuating screen or boundary on which the sound sensing and sound emitting components are placed in such a way that they effectively operate primarily on the side of the screen or boundary on which they are positioned. The input components can be, for instance, hypercardioid microphones mounted in pairs at a short distance, for example 2 inches, over the top edge of the screen and pointing in opposite directions, so that one picks up sound primarily from one side of the screen and the other from the opposite side of the screen. As another example, the input components can be omnidirectional microphones mounted in pairs in the middle but on opposite sides of the screen. Similarly, the output components can be, for instance, pairs of speakers, mounted on opposite sides of the screen, emitting sound primarily on the side of the screen on which they are placed.
In one embodiment, the speakers employed are flat panel speakers assembled in pairs as shown in
As shown in
As shown in
The sound screen (also called curtain) can be formed as a single physical curtain installation of any size. The sound screening system has a physical controller (with indicators such as buttons and/or lights) and one or more “carts” containing the electronic components needed. In one implementation, as shown in
The base has a static IP address, but does not know anything about the availability of the carts: it is the responsibility of the carts to periodically send their status to the base. The base does, however, have a list of all possible carts, since the database has a table of carts and their IP addresses, used for manipulating the preset pools and schedules. Different modes of communication may be used. For example, 802.11b communication may be used throughout if the carts use G4 laptops, which have onboard 802.11b client facilities. The base computer can be equipped with 802.11b also. The base system may be provided with a wireless hub.
The curtain may be a single physical curtain with a single cart that has, for example, four channels. Such is the system shown in
The software components of the base can consist of, for example, a Java network/storage program and a Flash application. In this case, the Flash program runs the user interface while the Java program is responsible for network communications and data storage. The Flash and Java programs can communicate via a loopback Transmission Control Protocol (TCP) connection exchanging Extensible Markup Language (XML). The Java program communicates with curtain carts using Open Sound Control (OSC), via User Datagram Protocol (UDP) packets. In one embodiment, the protocol is stateless over and above the request/reply cycle. The data storage may use any database, such as an open source database like MySQL, driven from the Java application using Java Database Connectivity (JDBC).
Operation of the software may be either in standalone mode or in conjunction with a base, as discussed above. The software is able to switch dynamically between the two modes, to allow for potential temporary failures of the cart-to-base link, and to allow relocation of a base system as required.
In standalone mode, a system may be controlled solely by a physical front panel. The front panel has a fixed selection of sound presets in the various categories; the “custom” category is populated with a selection of demonstration presets. A standalone system has a limited time sense: a preset can change its behaviour according to time of day or, if desired, a sequence of presets may be programmed according to a calendar. The front panel cycles along presets in response to button presses, and indicates preset selection using on-panel LEDs.
In (base) network mode, the system is essentially stateless; it ignores its internal store of presets and plays a single preset which is uploaded from the base. The system does not act on button presses, except to pass the events to the base. The base is responsible for uploading presets, which the system must then activate. The base also sends messages to update the LEDs on the display. The system degrades operation gracefully on network failure; if the system loses its base, it continues in standalone mode, playing the last preset uploaded from the base indefinitely, but activating local operation of its control panel.
The communication protocol between the base and the cart is such that all requests, in either direction, utilise a simple handshake, even if there is no reply data payload. A failure in the handshake (i.e. no reply) may re-trigger a request, or be used as an indication of temporary network failure. A heartbeat ping from the base to the cart may exist; that is, the base may perform periodic SQL queries to extract the IP addresses of all possible systems and ping these. New presets may be uploaded and a new preset activated, discarding the current preset. The LED status would then also be uploaded. A system can also be interrogated to determine its tonal base or constrained to a particular tonal base. The pressing of a panel button may be indicated to the base using a particular LED identifier; the cart then expects a new preset in reply. Alternatively, the base may be asked for the current preset and LED state, which can be initiated by the cart if it has detected a temporary (and now resolved) failure in the network.
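The retry-on-missing-reply handshake described above can be sketched in Java as follows, with a request represented generically as a `Callable` that returns `null` (or throws) when no reply arrives; all names are illustrative assumptions:

```java
import java.util.concurrent.Callable;

public class Handshake {
    // Attempt a request up to maxTries times. A missing reply (null or a
    // timeout exception) re-triggers the request; exhausting all tries is
    // reported as null, which the caller treats as temporary network failure.
    public static <T> T requestWithRetry(Callable<T> request, int maxTries) {
        for (int attempt = 0; attempt < maxTries; attempt++) {
            try {
                T reply = request.call();
                if (reply != null) {
                    return reply;              // handshake completed
                }
            } catch (Exception timedOut) {
                // no reply this attempt; fall through and retry
            }
        }
        return null;   // e.g. the cart falls back to standalone mode
    }

    public static void main(String[] args) {
        String reply = requestWithRetry(() -> "pong", 3);
        System.out.println(reply);   // prints "pong"
    }
}
```

The same loop serves both directions of the protocol, since requests from the base to the cart and from the cart to the base use the identical handshake.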
This communication connection between a unit's master cart and one or more slave carts can only operate in the presence of some network topology to allow IP addressing between the carts (which at present means the presence of a base unit). Cart to cart communication allows a large architectural system to be musically coherent across all its output channels. It might also be necessary for the master cart of the system to relay some requests from the base to the slaves, rather than have the base address the slaves directly, if state change or synchronization constraints require it.
More generally, the modules shown and described may be implemented in computer-readable software code that is executed by one or more processors. The modules described may be implemented as a single module or in independent modules. The processor or processors include any device, system, or the like capable of executing computer-executable software code. The code may be stored on a processor, a memory device or on any other computer-readable storage medium. Alternatively, the software code may be encoded in a computer-readable electromagnetic signal, including electronic, electrical and optical signals. The code may be source code, object code or any other code performing or controlling the functionality described in this document. The computer-readable storage medium may be a magnetic storage disk such as a floppy disk, an optical disk such as a CD-ROM, semiconductor memory or any other physical object capable of storing program code or associated data.
Thus, as shown in the figures, a system for communication of multiple devices, either in physical proximity or remotely located, is provided. The system establishes Master/Slave relationships between active systems and can force all slave systems to respond according to the master settings. The system also allows for the effective operation of the intercom through the LAN for sharing intercom parameters between different systems.
The sound screening system can respond to external acoustic energy that is either continuous or sporadic using multiple methods. The external sounds can be masked or their disturbing effect can be reduced using, for example, chords, arpeggios or preset sounds or music, as desired. The peak values, the RMS values, both, or neither, in various critical bands associated with the sounds impinging on the sound screening system, may be used to determine the acoustic energy emanating from the sound screening system. The sound screening system can be used to emit acoustic energy when the incident acoustic energy reaches a level to trigger an output from the sound screening system or may emit a continuous output that is dependent on the incident acoustic energy. That is, the output is closely related to the incident acoustic energy and thus is adjusted in real time or near real time. The sound screening system can be used to emit acoustic energy at various times during a prescribed period whether or not incident acoustic energy reaches a level to trigger an output from the sound screening system. The sound screening system can be partially implemented by components which receive instructions from a computer readable medium or computer readable electromagnetic signal that contains computer-executable instructions for masking the environmental sounds.
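The peak/RMS triggering described above can be illustrated with the following Java sketch, in which the incident sound is assumed to have already been split into per-critical-band sample buffers; the class name and threshold values are hypothetical:

```java
public class BandTrigger {
    // Root-mean-square level of one critical band's samples.
    public static double rms(double[] band) {
        double sumSquares = 0;
        for (double s : band) {
            sumSquares += s * s;
        }
        return Math.sqrt(sumSquares / band.length);
    }

    // Trigger an output when either the peak or the RMS value in any
    // critical band reaches its threshold.
    public static boolean shouldTrigger(double[][] bands,
                                        double rmsThreshold,
                                        double peakThreshold) {
        for (double[] band : bands) {
            double peak = 0;
            for (double s : band) {
                peak = Math.max(peak, Math.abs(s));
            }
            if (peak >= peakThreshold || rms(band) >= rmsThreshold) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[][] bands = { { 0.01, -0.02 }, { 0.8, -0.9 } };
        System.out.println(shouldTrigger(bands, 0.5, 0.7));   // prints "true"
    }
}
```

A continuous-output variant would instead scale the emitted level from the per-band RMS values on every analysis frame, rather than comparing them against a threshold.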
It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. For example, the geometries and material properties discussed herein and shown in the embodiments of the figures are intended to be illustrative only. Other variations may be readily substituted and combined to achieve particular design goals or accommodate particular materials or manufacturing processes.
Number | Date | Country | Kind
---|---|---|---
GB 9927131.4 | Nov 1999 | GB | national
GB 0023207.4 | Sep 2000 | GB | national
This application is a continuation-in-part of U.S. application Ser. No. 10/145,113, filed Feb. 6, 2003 and entitled “Apparatus for acoustically improving an environment,” which is a continuation of International Application PCT/GB01/04234, with an international filing date of Sep. 21, 2001, published in English under PCT Article 21(2). This application is also a continuation-in-part of U.S. application Ser. No. 10/145,097, filed Jan. 2, 2003 and entitled “Apparatus for acoustically improving an environment and related method,” which is a continuation-in-part of International Application PCT/GB00/02360, with an international filing date of Jun. 16, 2000, published in English under PCT Article 21(2) and now abandoned. Each of the preceding applications is incorporated herein by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/GB01/04234 | Sep 2001 | US
Child | 10/145,113 | May 2002 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10/145,113 | May 2002 | US
Child | 10/996,330 | Nov 2004 | US