The present application relates to a method and apparatus for disrupting speech and, more specifically, to a method and apparatus for disrupting speech from a single talker or multiple talkers.
Office environments have become less private. Speech generated from a talker in one part of the office often travels to a listener in another part of the office. The clearly heard speech often distracts the listener, potentially lowering the listener's productivity. This is especially problematic when the subject matter of the speech is sensitive, such as patient information or financial information.
The privacy problem in the workplace has only worsened with the trend toward open office spaces and increased worker density. Many office environments shun traditional four-walled offices in favor of cubicles or conference rooms with glass walls. While these open spaces may facilitate interaction among coworkers, speech travels more easily, leading to greater distraction and less privacy.
There have been attempts to combat the noise problem. The typical solution is to mask or cover up the problem with “white” or “pink” noise. White noise is random noise that contains an equal amount of energy per frequency band. Pink noise has equal energy per octave, so its energy is concentrated at lower frequencies. However, masking or covering up speech in the workplace is either ineffective (because the volume is too low) or overly distracting (because the volume must be very high to disrupt speech). Thus, the current solutions to the noise problem in the workplace are of limited effectiveness.
A system and method for disrupting speech of a talker at a listener in an environment is provided. The system and method comprise determining a speech database, selecting a subset of the speech database, forming at least one speech stream from the subset, and outputting the at least one speech stream.
In one aspect of the invention, any one, some, or all of the steps may be based on one or more characteristics of the talker, the listener, and/or the environment. Modifying any one of the steps based on characteristics of the talker, listener, and/or environment enables varied and powerful systems and methods of disrupting speech. For example, the speech database may be based on the talker (such as by using the talker's voice to compile the speech database) or may not be based on the talker (such as by using voices other than the talker's, for example voices that represent a cross-section of society). For a database based on the talker, the speech in the database may include fragments generated during a training mode and/or in real-time. As another example, the speech database may be based both on the talker and on other voices (such as a database that combines the talker's voice with voices other than the talker's). Moreover, once the speech database is determined, the selection of the subset of the speech database may be based on the talker. Specifically, vocal characteristics of the talker, such as fundamental frequency, formant frequencies, pace, pitch, gender, and accent, may be determined. These characteristics may then be used to select a subset of the voices in the speech database, such as by selecting voices from the database that have characteristics similar to those of the talker. For example, in a database comprised of voices other than the talker's, the selection of the subset may comprise selecting speech (such as speech fragments) that has the same or the closest characteristics to the speech of the talker.
Once selected, the speech (such as the speech fragments) may be used to generate one or more voice streams. One way to generate a voice stream is to concatenate speech fragments. Further, multiple voice streams may be generated by summing individual voice streams, with the summed voice streams being output on loudspeakers positioned proximate to the talker's workspace and/or on headphones worn by potential listeners. The multiple voice streams may be composed of fragments of the talker's own voice or of other voices. A listener listening to sound emanating from the talker's workspace may be able to determine that speech is emanating from the workspace, but be unable to separate or segregate the sounds of the actual conversation, and thus lose the ability to decipher what the talker is saying. In this manner, the privacy apparatus disrupts the ability of a listener to understand the source speech of the talker by eliminating the segregation cues that humans use to interpret human speech. In addition, since the privacy sound is constructed of human speech sounds, it may be better accepted than white noise maskers, as it sounds like the normal human speech found wherever people congregate. This translates into a sound that is much more acceptable to a wide audience than typical privacy sounds.
In another aspect, the disrupting of the speech may be for a single talker or multiple talkers. The multiple talkers may be speaking in a conversation (either asynchronously, where one talker to the conversation speaks and then a second talker speaks, or simultaneously, where both talkers speak at the same time) or may be speaking serially (such as a first talker speaking in an office and leaving, and then a second talker speaking in the same office). In either case, the system and method may determine characteristics of one, some, or all of the multiple talkers and determine a signal for disrupting the speech of the multiple talkers based on those characteristics.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
A privacy apparatus is provided that adds a privacy sound into the environment that may closely match the characteristics of the source (such as the one or more persons speaking), thereby confusing listeners as to which of the sounds is the real source. The privacy apparatus may be based on a talker's own voice or on other voices. By reducing or eliminating the segregation cues that humans use to interpret human speech, the apparatus disrupts the ability to understand the source speech of the talker. The privacy apparatus may also be quieter than random-noise maskers and may be more easily accepted by people.
A sound can overcome a target sound by adding enough energy to the overall signal reaching the ear to block the target sound from effectively stimulating the ear. A sound can also overcome the cues that permit the human auditory system to segregate the sources of different sounds, without necessarily being louder than the target sound. A common demonstration of the ability to segregate sounds is the “cocktail party effect,” which refers to the ability of people to follow individual conversations in a room with many different people speaking. The means by which people are able to segregate different voices will be described later.
The privacy apparatus may be used as a standalone device, or may be used in combination with another device, such as a telephone. In this manner, the privacy apparatus may provide privacy for a talker while on the telephone. A sample of the talker's voice signal may be input via a microphone (such as the microphone used in the telephone handset or another microphone) and scrambled into an unintelligible audio stream for later use to generate multiple voice streams that are output over a set of loudspeakers. The loudspeakers may be located locally in a receptacle containing the physical privacy apparatus itself and/or remotely away from the receptacle. Alternatively, headphones may be worn by potential listeners. The headphones may output the multiple voice streams so that the listener may be less distracted by the sounds of the talker. The headphones also do not significantly raise the noise level of the workplace environment. In still another embodiment, loudspeakers and headphones may be used in combination.
Referring to
As shown at block 110, the speech fragment database is determined. The database may comprise any type of memory device (such as temporary memory (e.g., RAM) or more permanent memory (e.g., hard disk, EEPROM, thumb drive)). As discussed below, the database may be resident locally (such as a memory connected to a computing device) or remotely (such as a database resident on a network). The speech fragment database may contain speech in any form, such as an electronic .wav file that, when used to generate electrical signals, may drive a loudspeaker to produce sounds of speech. The speech that is stored in the database may be generated from a human being (such as a person speaking into a microphone) or may be simulated (such as a computer synthesizing “speech-like” sounds). Further, the database may include speech for a single person (such as the talker whose speech is sought to be disrupted) or speech from a plurality of people (such as the talker and his/her coworkers, and/or third parties whose speech represents a cross-section of society).
The speech fragment database may be determined in several ways. The database may be determined by the system receiving speech input, such as a talker speaking into a microphone. For example, the talker whose speech is to be disrupted may, prior to having his/her speech disrupted, initialize the system by providing his/her voice input. Or, the talker whose speech is to be disrupted may in real-time provide speech input (e.g., the system receives the talker's voice just prior to generating a signal to disrupt the talker's speech). The speech database may also be determined by accessing a pre-existing database. For example, sets of different types of speech may be stored in a database, as described below with respect to
When the system receives speech input, the system may generate fragments in a variety of ways. For example, fragments may be generated by breaking up the input speech into individual phoneme, diphone, syllable, and/or other like speech fragments. An example of such a routine is provided in U.S. application Ser. No. 10/205,328 (U.S. Patent Publication 2004-0019479), herein incorporated by reference in its entirety. The resulting fragments may be stored contiguously in a large buffer that can hold multiple minutes of speech fragments. A list of indices indicating the beginning and ending of each speech fragment in the buffer may be kept for later use. The input speech may be segmented using phoneme boundary and word boundary signal level estimators, such as with time constants from 10 ms to 250 ms. The beginning/end of a phoneme may be indicated when the phoneme estimator level passes above/below a preset percentage of the word estimator level. In addition, in one aspect, only an identified fragment that has a duration within a desired range (e.g., 50-300 ms) may be used in its entirety. If the fragment is below the minimum duration, it may be discarded. If the fragment is above the maximum duration, it may be truncated. The speech fragment may then be stored in the database and indexed in a sample index.
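The segmentation described above can be sketched with a pair of one-pole level estimators. This is a minimal illustration, assuming simple exponential smoothers and an illustrative crossing fraction; the referenced application may implement the estimators differently.

```python
import numpy as np

def segment_fragments(x, sr, fast_tc=0.010, word_tc=0.250,
                      frac=0.5, min_dur=0.050, max_dur=0.300):
    """Split mono speech x into (start, end) fragment indices using a fast
    'phoneme' level estimator against a slow 'word' level estimator.
    The crossing fraction frac is an illustrative assumption."""
    a_fast = np.exp(-1.0 / (fast_tc * sr))  # one-pole smoothing coefficients
    a_word = np.exp(-1.0 / (word_tc * sr))
    fast = word = 0.0
    fragments, start = [], None
    for n, s in enumerate(np.abs(x)):
        fast = a_fast * fast + (1 - a_fast) * s
        word = a_word * word + (1 - a_word) * s
        if start is None and fast > frac * word:        # phoneme beginning
            start = n
        elif start is not None and fast < frac * word:  # phoneme end
            dur = (n - start) / sr
            if dur >= min_dur:                           # discard short fragments
                end = min(n, start + int(max_dur * sr))  # truncate long ones
                fragments.append((start, end))
            start = None
    return fragments
```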
As another example, fragments may be generated by selecting predetermined sections of the speech input. Specifically, clips of the speech input may be taken to form the fragments. From a 1-minute speech input, for example, clips ranging from 30 to 300 ms may be taken periodically or randomly. A windowing function may be applied to each clip to smooth its onset and offset transitions (5-20 ms). The clips may then be stored as fragments.
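A sketch of this clip-based approach, assuming random clip placement and a 10 ms raised-cosine ramp (one value within the 5-20 ms range given above):

```python
import numpy as np

def extract_clips(x, sr, n_clips=200, min_len=0.030, max_len=0.300, ramp=0.010):
    """Take random 30-300 ms clips from speech x and smooth each clip's
    onset and offset with a raised-cosine ramp. The clip count and ramp
    length are illustrative assumptions."""
    rng = np.random.default_rng()
    r = int(ramp * sr)
    fade = 0.5 * (1 - np.cos(np.pi * np.arange(r) / r))  # 0 -> 1 half-cosine
    clips = []
    for _ in range(n_clips):
        length = int(rng.uniform(min_len, max_len) * sr)
        start = rng.integers(0, len(x) - length)
        clip = x[start:start + length].copy()
        clip[:r] *= fade          # smooth onset
        clip[-r:] *= fade[::-1]   # smooth offset
        clips.append(clip)
    return clips
```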
Block 110 of
Further, the database may store single or multiple speech streams. The speech streams may be based on the talker's input or on third-party input. For example, the talker's input may be fragmented and multiple streams may be generated. In the clip example discussed above, a 2-minute input from a talker may yield 90 seconds of clips. The 90 seconds of clips may be concatenated to form a speech stream totaling 90 seconds. Additional speech streams may be formed by inserting a delay. For example, a delay of 20 seconds may create additional streams (i.e., a first speech stream begins at time=0 seconds, a second speech stream begins at time=20 seconds, etc.). The generated streams may each be stored separately in the database, or they may be summed and stored. For example, the streams may be combined to form two separate signals. The two signals may then be stored in the database in any format, such as MP3, for play as stereo on a stationary or portable device, such as a cellphone or a portable digital player (e.g., an iPod® type device).
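The delay-and-sum stream construction might look like the following sketch. The circular wraparound (np.roll) and the even/odd split of streams across the two channels are assumptions; the text only specifies that the streams are combined into two signals.

```python
import numpy as np

def delayed_streams_to_stereo(stream, sr, n_streams=4, delay_s=20.0):
    """Form additional streams by circularly delaying one concatenated clip
    stream, then sum them into two channels for stereo storage/playback."""
    streams = [np.roll(stream, int(i * delay_s * sr)) for i in range(n_streams)]
    left = np.sum(streams[0::2], axis=0)   # assumed channel grouping
    right = np.sum(streams[1::2], axis=0)
    return np.stack([left, right])  # shape (2, n_samples), e.g. for MP3 encoding
```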
As shown at block 120, speech fragments are selected. The selection of the speech fragments may be performed in a variety of ways. The speech fragments may be selected as a subset of the speech fragments in the database or as the entire set of speech fragments in the database. The database may, for example, include: (1) the talker's speech fragments; (2) the talker's speech fragments and speech fragments of others (such as co-workers of the talker or other third parties); or (3) speech fragments of others. To select less than the entire database, the talker's speech fragments, some but not all of the sets of speech fragments, or the talker's speech fragments and some but not all of the sets of speech fragments may be selected. Alternatively, all of the speech fragments in the database may be selected (e.g., for a database with only a talker's voice, select the talker's voice; or for a database comprising multiple voices, select all of the multiple voices). The discussion below provides the logic for determining what portions of the database to select.
As shown at block 130, the speech stream is formed. As discussed in more detail below, the speech streams may be formed from the fragments stored in the database. However, if the speech streams are already stored in the database, the speech streams need not be recreated. As shown at block 140, the speech streams are output.
Any one, some, or all of the steps shown in
For the four steps depicted in
Characteristics of the talker(s) may include: (1) the voice of the talker (e.g., a sample of the voice output of the talker); (2) the identity of the talker (e.g., the name of the talker); (3) the attributes of the talker (e.g., the talker's gender, age, nationality, etc.); (4) the attributes of the talker's voice (e.g., dynamically analyzing the talker's voice to determine characteristics such as fundamental frequency, formant frequencies, pace, pitch, gender (whether the voice tends to sound more male or more female), accent, etc.); (5) the number of talkers; and (6) the loudness of the voice(s) of the talker(s). Characteristics of the listener(s) may include: (1) the location of the listener(s) (e.g., proximity of the listener to the talker); (2) the number of listeners; (3) the types of listeners (e.g., adults, children, etc.); and (4) the activity of the listeners (e.g., a listener who is a co-worker in an office, or a listener who is a customer in a retail setting). Characteristics of the environment may include: (1) the noise level of the talker(s) environment; (2) the noise level of the listener(s) environment; (3) the type of noise in the talker(s) environment (e.g., noise due to other talkers, street noise, etc.); and (4) the type of noise in the listener(s) environment (e.g., noise due to other talkers, street noise, etc.).
For block 110, determining the speech fragment database may be modified or non-modified. For example, the speech fragment database may be determined in a modified manner by basing the database on the talker's own voice (such as by inputting the talker's voice into the database) or attributes of the talker's voice, as discussed in more detail with respect to
For block 120, selecting the speech fragments may be modified or non-modified. For example, the system may learn a characteristic of the talker, such as the identity of the talker or properties of the talker's voice. The system may then use the characteristic(s) to select the speech fragments, such as to choose a subset of the voices from the database depicted in
For block 130, forming the speech stream may be modified or non-modified. For block 140, outputting the speech streams may be modified or non-modified. For example, the system may output the speech streams based on a characteristic of the talker, listener, and/or environment. Specifically, the system may select a volume for the output based on the volume of the talker. As another example, the system may select a predetermined volume for the output that is not based on the volume of the talker.
Moreover, any one, some, or all of the steps in
For block 120 (selecting the speech fragments), the system may transition from non-modified to modified. For example, before a system learns the characteristics of the talker, listener, and/or environment, the system may select the speech fragments in a non-modified manner (e.g., selecting speech fragments regardless of any characteristic of the talker). As the system learns more about the talker (such as the identity of the talker, the attributes of the talker, the attributes of the talker's voice, etc.), the system may tailor the selection of the speech fragments.
For block 130 (speech stream formation), the system may transition from non-modified to modified. For example, before a system learns the number of talkers, the system may generate a predetermined number of speech streams (such as four speech streams). After the system determines the number of talkers, the system may tailor the number of speech streams formed based on the number. For example, if more than one talker is identified, a higher number of speech streams may be formed (such as twelve speech streams).
For block 140 (output of speech streams), the system may transition from non-modified to modified. For example, before a system learns the environment of the talker and/or listener, the system may generate a predetermined volume for the output. After the system determines the environment of the talker and/or listener (such as background noise, etc.), the system may tailor the output accordingly, as discussed in more detail below. Or, the system may generate a predetermined volume that is constant. Instead of the system adjusting its volume to the talker (as discussed above), the talker may adjust his or her volume based on the predetermined volume.
Further, any one, some, or all of the steps in
As discussed in more detail below with respect to
In addition, any one, some, or all of the steps in
In block 110, the determining of the speech fragment database may be different for a single talker as opposed to multiple talkers. For example, the speech fragment database for a single talker may be based on speech of the single talker (e.g., a set of speech fragments based on speech provided by the single talker), and the speech fragment database for multiple talkers may be based on speech of the multiple talkers (e.g., multiple sets of speech fragments, each set being based on speech provided by one of the multiple talkers). In block 120, the selecting of the speech fragments may be different for a single talker as opposed to multiple talkers, as described below with respect to
Referring to
As shown at block 220, the speech fragments are selected based on the talker input. For input comprising the talker's voice, the speech fragments may comprise phonemes, diphones, and/or syllables from the talker's own voice. Or, the system may analyze various characteristics of the talker's voice (such as fundamental frequency, formant frequencies, etc.) to select the optimum set of speech fragments. In a server-based system, the server may perform the analysis to find the optimum set of voices, compile the voice streams, generate a file (such as an MP3 file), and download the file to play on the local device. In this manner, the intelligence of the system (in terms of selecting the optimum set of speech fragments and generating the voice streams) may be resident on the server, and the local device may be responsible only for outputting the speech streams (e.g., playing the MP3 file). For input comprising attributes of the talker, the attributes may be used to select a set of speech fragments. For example, in an internet-based system, the talker may send his or her attributes or actual speech recordings via the internet to a server. The server may then access a database containing multiple sets of speech fragments (e.g., one set for males age 15-20; a second set for females age 15-20; a third set for males age 20-25; etc.) and select a subset of the speech fragments based on the talker attributes (e.g., if the talker attribute is “male,” the server may select each set of speech fragments tagged as “male”).
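Attribute-based selection on the server side reduces to filtering tagged fragment sets. In this sketch the tag field names are hypothetical; the text only gives gender/age tagging as an example.

```python
def select_by_attributes(database, **attrs):
    """Select all fragment sets whose tags match the talker's reported
    attributes, e.g. select_by_attributes(db, gender="male", age_range="20-25").
    'tags' is a hypothetical per-entry dict of attribute labels."""
    return [entry for entry in database
            if all(entry["tags"].get(k) == v for k, v in attrs.items())]
```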
As shown at block 230, the speech fragments are deployed and/or stored, depending on the configuration of the system (i.e., whether the system is standalone or distributed). In a distributed system, for example, the speech fragments may be deployed, such as by sending them from a server to the talker via the internet, telephone, or e-mail, or by download to a thumb drive. In a standalone system, the speech fragments may be stored in a database of the standalone system.
Alternatively, the speech fragments may be determined in a non-modified manner. For example, the speech fragment database may comprise a collection of voice samples from individuals who are not the talker. An example of a collection of voice samples is depicted in database 400 in
As discussed above, the system may be for a single user or for multiple users. In a multi-user system, the speech fragment database may include speech fragments for a plurality of users. The database may be resident locally on the system (as part of a standalone system) or may be a network database (as part of a distributed system). A modified speech fragment database 300 for multiple users is depicted in
As discussed above, the system may tailor the system for multiple users (either multiple users speaking serially or multiple users speaking simultaneously). For example, the system may tailor for multiple talkers who speak one after another (i.e., a first talker enters an office, engages the system and leaves, and then a second talker enters an office, engages the system and then leaves). As another example, the system may tailor for multiple talkers who speak simultaneously (i.e., two talkers having a conversation in an office). Further, the system may tailor selecting of the speech fragments in a variety of ways, such as based on the identity of the talker(s) (see
Referring to
Referring to
Referring to
Referring to
The various parameters may be weighted based on relative importance. The weighting may be determined by performing voice privacy performance tests that systematically vary the voices and measure the resulting performance. From this data, a correlation analysis may be performed to compute the optimum relative weighting of each speech property. Once these weightings are known, the best voices may be determined using a statistical analysis, such as a least-squares fit or similar procedure.
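For example, given measured listening-test data, the weights could be fit as follows. The parameter ordering, data shapes, and the normalization step are assumptions consistent with the normalized correlation factors described below.

```python
import numpy as np

def fit_weights(deltas, performance):
    """Fit relative weights for each speech property from privacy tests.
    deltas: (n_tests, n_params) source/disrupter parameter differences,
    e.g. columns [df0, df1, df2, df3, dVTL, dfhigh] (assumed ordering);
    performance: (n_tests,) measured privacy scores.
    A plain least-squares fit, per the text's suggestion."""
    A = np.column_stack([deltas, np.ones(len(deltas))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, performance, rcond=None)
    weights, constant = coef[:-1], coef[-1]
    weights = np.abs(weights) / np.abs(weights).max()    # normalize to (0, 1]
    return weights, constant
```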
An example of a database is shown in
One process for determining the required range and resolution of the parameters is to perform voice privacy performance listening tests while systematically varying the parameters. One talker's voice with known parameters may be chosen as the source. Other voices with known parameters may be chosen as base speech to produce voice privacy. The voice privacy performance may be measured, then new voices with parameters that are quantifiably different from the original set are chosen and tested. This process may be continued until the parameter's effect on performance becomes evident. Then, a new source voice may be chosen and the process repeated to verify the previously determined parameter.
A specific example of this process comprises determining the desired range and resolution of the fundamental pitch frequency (f0) parameter. The mean and standard deviation of male f0 are 120 Hz and 20 Hz, respectively. Voice recordings are obtained whose f0 spans the range from 80 Hz to 160 Hz (2 standard deviations). A source voice is chosen with an f0 of 120 Hz. Four jamming voices may be used, with approximately 10 Hz spacing between their f0s. Voice privacy performance tests may be run with different sets of jamming voices, with two of the f0s below 120 Hz and two above. The difference between the source f0 and the jamming f0s may be made smaller and the performance differences noted. These tests may determine how close the jamming f0s can be to a source voice f0 while achieving a certain level of voice privacy performance. Similarly, the spacing between the jamming f0s themselves may also be tested. And other parameters may be tested.
As shown in block 802 of
As shown at block 804, the fundamental pitch frequency f0 is measured. There are several techniques for measuring f0. One technique is to use a zero-crossing detector to measure the time between zero-crossings in the speech waveform. A high zero-crossing rate indicates that noisy, fricative sounds may be present. If the rate is relatively low, the average interval between crossings may be computed and an f0 estimate derived from its reciprocal.
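A sketch of the zero-crossing estimator, assuming frame-based analysis and an illustrative rate threshold for flagging fricatives:

```python
import numpy as np

def estimate_f0(frame, sr, max_zc_rate=1000.0):
    """Estimate fundamental frequency from zero-crossing intervals.
    If the crossing rate is high, the frame is treated as fricative/noisy
    and no f0 is returned; the rate threshold is an illustrative assumption."""
    crossings = np.where(np.diff(np.signbit(frame)))[0]
    if len(crossings) < 2:
        return None
    rate = len(crossings) * sr / len(frame)   # crossings per second
    if rate > max_zc_rate:                    # noisy, fricative sound
        return None
    # a periodic signal crosses zero twice per period
    period = 2.0 * np.mean(np.diff(crossings)) / sr
    return 1.0 / period
```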
As shown at block 806, the formant frequencies f1, f2, and f3 may be measured. The formant frequencies are varied by the shape of the vocal tract and create the different vowel sounds, and different talkers may use unique ranges of these three frequencies. One method of measuring these parameters is based on linear predictive coding (LPC). LPC provides an all-pole filter estimate of the resonances in the speech waveform, and the locations of the poles may be used to estimate the formant frequencies.
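A sketch of the LPC-based formant estimate using the textbook autocorrelation method; the model order and the near-DC pole cutoff are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, sr, order=12, n_formants=3):
    """Estimate formant frequencies from the poles of an LPC (all-pole) fit."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # LPC coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))
    poles = poles[np.imag(poles) > 0]             # keep one of each conjugate pair
    freqs = np.sort(np.angle(poles) * sr / (2 * np.pi))
    freqs = freqs[freqs > 90]                     # drop near-DC poles
    return freqs[:n_formants]                     # f1, f2, f3 estimates
```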
As shown at block 808, the vocal tract length (VTL) is measured. One method of estimating the VTL of the talker is to compare the measured formant frequencies to known relationships between formant frequencies. The best-fitting relationship may then be used to derive the VTL that would produce such formant frequencies.
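One such known relationship is the uniform-tube approximation, in which the formants of a tube closed at one end fall at odd multiples of c/4L. The patent does not say which relationship it uses, so this sketch is only one plausible instance.

```python
import numpy as np

def estimate_vtl(formants_hz, c=34300.0):
    """Estimate vocal tract length (cm) from measured formants using the
    uniform-tube approximation f_n = (2n - 1) * c / (4 * L), where c is the
    speed of sound in cm/s. Averaging the per-formant estimates is an
    assumption; a least-squares fit would be another option."""
    estimates = [(2 * n - 1) * c / (4.0 * f)
                 for n, f in enumerate(formants_hz, start=1)]
    return float(np.mean(estimates))
```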
As shown at block 810, the spectral energy content is measured. Measuring the spectral energy content, such as the high-frequency content of the speech, may help identify talkers who have significant sibilance (‘sss’) in their speech. One way to measure this is to compute the ratio of high-frequency energy to total energy during unvoiced (no f0) portions of the speech.
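A sketch of the energy-ratio measurement; the 4 kHz cutoff is an assumed dividing line for “high frequency,” which the text does not specify.

```python
import numpy as np

def sibilance_ratio(frame, sr, cutoff_hz=4000.0):
    """Ratio of high-frequency to total spectral energy for an unvoiced frame
    (a frame with no detected f0)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()
```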
As shown at block 812, the gender is determined. Determining the gender of the talker may enable efficient speech database searches. One way to do this is based on f0, since males and females typically have distinct ranges of f0: a low f0 may classify the speech as male, and a high f0 as female.
Since speech may be viewed as a dynamic signal, some or all of the above mentioned parameters may vary with time even for a single talker. Thus, it is beneficial to keep track of the relevant statistics of these parameters (block 814) as a basis for finding the optimum set of voices in the speech database. In addition, statistics with multiple modes could identify the presence of multiple talkers in the environment. Examples of relevant statistics may include the average, standard deviation, and upper and lower ranges. In general, a running histogram of each parameter may be maintained to derive the relevant parameters as needed.
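A running histogram per parameter supports all of the statistics mentioned above; the bin count below is an illustrative choice.

```python
import numpy as np

class RunningParameterStats:
    """Running histogram of one speech parameter (e.g. f0), from which the
    average, standard deviation, and ranges can be derived as needed.
    A bimodal histogram may hint at multiple talkers."""
    def __init__(self, lo, hi, n_bins=64):
        self.edges = np.linspace(lo, hi, n_bins + 1)
        self.counts = np.zeros(n_bins)

    def update(self, value):
        b = np.searchsorted(self.edges, value) - 1
        if 0 <= b < len(self.counts):
            self.counts[b] += 1

    def summary(self):
        centers = 0.5 * (self.edges[:-1] + self.edges[1:])
        total = self.counts.sum()
        mean = (centers * self.counts).sum() / total
        std = np.sqrt(((centers - mean) ** 2 * self.counts).sum() / total)
        nonzero = centers[self.counts > 0]
        return {"mean": mean, "std": std,
                "low": nonzero.min(), "high": nonzero.max()}
```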
As shown at block 816, the optimum set of voices is selected. One method of choosing an optimum set of voices from the speech database is to determine the number of separate talkers in the environment and to measure and keep track of their individual characteristics. This scenario assumes that individual voice characteristics can be separated, which may be possible for talkers with widely different speech parameters (e.g., male and female). Another method is to take the speech input as one “global voice,” without regard for individual talker characteristics, and determine the speech parameters of that global voice. This “global voice” analysis, even when more than one talker is present, may simplify processing.
During the creation of the speech database, such as the database depicted in
In addition, these correlations may be used as the basis of a statistical analysis, such as a linear regression equation, that can be used to predict voice privacy performance given source and disrupter speech parameters. Such an equation takes the following form:
Voice Privacy Level = R0*Δf0 + R1*Δf1 + R2*Δf2 + R3*Δf3 + R4*ΔVTL + R5*Δfhigh + ... + Constant
The correlation factors R0-Rx may be normalized between zero and one, with the more important parameters having correlation factors closer to one.
The above equation may be used to choose the N best speech samples in the database to be output. For example, N may equal 4 so that 4 streams are created. Fewer or more streams may be created, as discussed below.
The measured source speech parameters (see blocks 804, 806, 810, 812) may be input into the equation, and the minimum Voice Privacy Level (VPL) found by calculating the VPL from the stored parameters associated with each disrupter speech in the database. The search need not compute the VPL for every candidate speech in the database. The database may be indexed such that the best candidate matches are first found for the most important parameter (e.g., f0), and the equation is then used to choose the best candidates from this subset.
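Combining the regression equation with the f0-indexed pre-filter, candidate selection might look like this sketch. The parameter keys and pool size are hypothetical, and the minimum-VPL ranking follows the text.

```python
import numpy as np

def choose_disrupters(source, candidates, weights, constant, n=4, f0_pool=20):
    """Choose the N best disrupter voices by predicted Voice Privacy Level.
    source: dict of measured parameters; candidates: list of dicts with the
    same keys plus 'index'. Candidates are first narrowed to the closest f0
    matches, then ranked by the regression equation."""
    keys = ["f0", "f1", "f2", "f3", "vtl", "fhigh"]  # assumed parameter names
    pool = sorted(candidates, key=lambda c: abs(c["f0"] - source["f0"]))[:f0_pool]

    def vpl(c):
        deltas = np.array([abs(c[k] - source[k]) for k in keys])
        return float(np.dot(weights, deltas)) + constant

    best = sorted(pool, key=vpl)[:n]   # minimum predicted VPL, per the text
    return [c["index"] for c in best]
```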
Referring to
As shown at block 910, the indices for the optimum speech samples are passed to the speech stream formation process, and the speech stream is formed.
The speech fragment selection procedure may output its results to the speech stream formation procedure. One type of output of the speech fragment selection procedure may comprise a list of indices pointing to a set of speech fragments that best match the measured voice(s). For example, the list of indices may point to various speech sets depicted in
The speech stream formation process may take the indices (such as 4 indices for one identified talker, see
The voices (Vij; i denoting the target voice) may be combined to form the two speech signals (S1, S2) as shown in the table below.
The process of forming a single, randomly fragmented voice signal (Vij) may be similar to that disclosed in U.S. Provisional Patent Application No. 60/684,141 (incorporated by reference in its entirety). The index into the speech database may point to a collection of speech fragments of a particular voice. These fragments may be of a size of phonemes, diphones, and/or syllables. Each voice may also contain its own set of indices that point to each of its fragments. To create the voice signal, these fragment indices may be shuffled and then played out one fragment at a time until the entire shuffled list is exhausted. Once a shuffled list is exhausted, the list may be reshuffled and the voice signal continues without interruption. This process may occur for each voice (Vij). The output signals (S1, S2) are the sum of the fragmented voices (Vij), created as described in the table above.
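A sketch of the shuffle-and-reshuffle voice generation and the final summation. Since the combining table referenced above is not reproduced here, the grouping of voices into S1 and S2 is an assumption supplied by the caller.

```python
import numpy as np

def fragmented_voice(fragments, n_samples, rng=None):
    """Generate one randomly fragmented voice signal Vij: shuffle the
    fragment list, play fragments back-to-back, and reshuffle whenever the
    list is exhausted, until n_samples have been produced."""
    rng = rng or np.random.default_rng()
    out, filled, order = np.zeros(n_samples), 0, []
    while filled < n_samples:
        if not order:
            order = list(rng.permutation(len(fragments)))  # (re)shuffle indices
        frag = fragments[order.pop()]
        take = min(len(frag), n_samples - filled)
        out[filled:filled + take] = frag[:take]
        filled += take
    return out

def output_signals(voices_per_channel, n_samples):
    """Sum fragmented voices into the output signals S1, S2.
    voices_per_channel: two groups of fragment collections, one per signal."""
    return [sum(fragmented_voice(v, n_samples) for v in group)
            for group in voices_per_channel]
```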
As discussed above, the talker's input speech may be provided in a fragmented manner. For example, the input may comprise several minutes of continuous speech fragments that may already be randomized. These speech fragments may be used to create streams by inserting time delays. For example, to create 4 different speech streams from a 120-second talker input, time delays of 30 seconds, 60 seconds, and 90 seconds may be used. The four streams may then be combined to create two separate channels for output, with the separate channels being stored in stereo format (such as MP3). The stereo file may be downloaded for play on a stereo system (such as an MP3 player).
As discussed above, the auditory system can also segregate sources if the sources turn on or off at different times. The privacy apparatus may reduce or minimize this cue by outputting a stream in which random speech elements are summed with one another so that they at least partially overlap. One example of the output stream may include generating multiple, random streams of speech elements and then summing the streams so that it is difficult for a listener to distinguish individual onsets of the real source. The multiple random streams may be summed so that multiple speech fragments with certain characteristics, such as 2, 3, or 4 speech fragments that exhibit phoneme characteristics, are heard simultaneously by the listener. In this manner, when multiple streams are generated (from the talker's voice and/or from other voices), the listener may not be able to discern that there are multiple streams. Rather, because the listener is exposed to the multiple streams (and in turn the multiple phonemes or other speech fragments), the listener may be less likely to discern the underlying speech of the talker. Alternatively, the output stream may be generated by first selecting the speech elements, such as random phonemes, and then summing the random phonemes.
Referring to
Referring to
Referring to
The system output function may receive one or a plurality of signals. As shown in the tables above, the system receives the two signals (S1, S2) from the stream formation process. The system may modify the signals (such as adjusting their amplitude) and send them to the system loudspeakers in the environment to produce voice privacy. As discussed above, the output signal may be modified or non-modified based on various characteristic(s) of the talker(s), listener(s), and/or environment. For example, the system may use a sensor, such as a microphone, to sense the talker's or listener's environment (such as the background noise or type of noise) and dynamically adjust the system output. Further, the system may provide a manual volume adjustment control, used during installation, to bring the system to the desired range of output. The dynamic output level adjustment may operate with a slow time constant (such as approximately two seconds) so that level changes are gentle and not distracting.
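The slow dynamic level adjustment can be sketched as a first-order tracker of the sensed background level; the target offset above the background and the update rate are assumptions, while the ~2 second time constant comes from the text.

```python
import numpy as np

def adjust_output_level(noise_level_db, gain_db, target_offset_db=3.0,
                        time_const_s=2.0, update_rate_hz=10.0):
    """One update step of the dynamic output level: move the output gain
    toward (sensed background level + offset) with a ~2 s time constant so
    that level changes stay gentle and not distracting."""
    alpha = 1.0 - np.exp(-1.0 / (time_const_s * update_rate_hz))
    target = noise_level_db + target_offset_db
    return gain_db + alpha * (target - gain_db)
```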
Referring to
As discussed above, the privacy apparatus may have several configurations, including a self-contained and a distributed system.
Further, there may be 1, 2, or “N” loudspeakers. Each loudspeaker may contain two loudspeaker drivers positioned 120 degrees off axis from each other so that each loudspeaker can provide 180 degrees of coverage. Each driver may receive a separate signal. The total number of loudspeaker systems needed may depend on the listening environment in which they are placed. For example, some closed conference rooms may need only one loudspeaker system mounted outside the door to provide voice privacy. By contrast, a large, open conference area may need six or more loudspeakers to provide voice privacy.
Referring to
Alternatively, the server may randomly select the speech fragments using speech fragment selector unit 1518 and generate multiple voice streams. The multiple voice streams may then be packaged for delivery to the main unit 1502. For example, the multiple voice streams may be packaged into a .wav or MP3 file with 2 channels (i.e., in stereo), with one plurality of voice streams being summed to generate the sound on one channel and another plurality being summed to generate the sound on the second channel. The duration of the .wav or MP3 file may be long enough (e.g., 5 to 10 minutes) that listeners do not recognize that the privacy sound is a repeatedly played file. Still another distributed system comprises one in which the database is networked and stored in the memory 1506 of main unit 1502.
In summary, speech privacy is provided that may be based on the voice of the person speaking and/or on voice(s) other than the person speaking. This may permit privacy at a lower amplitude than previous maskers for the same level of privacy. The privacy sound disrupts key speech interpretation cues used by the human auditory system to interpret speech, and may produce effective results with a 6 dB or greater advantage over white/pink noise privacy technology.
It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. For example, the geometries and material properties discussed herein and shown in the embodiments of the figures are intended to be illustrative only. Other variations may be readily substituted and combined to achieve particular design goals or accommodate particular materials or manufacturing processes.
This application is a continuation-in-part of U.S. patent application Ser. No. 11/326,269 filed on Jan. 4, 2006, which claims the benefit of U.S. Provisional Application No. 60/642,865, filed Jan. 10, 2005, the benefit of U.S. Provisional Application No. 60/684,141, filed May 24, 2005, and the benefit of U.S. Provisional Application No. 60/731,100, filed Oct. 29, 2005. U.S. patent application Ser. No. 11/326,269, U.S. Provisional Application No. 60/642,865, U.S. Provisional Application No. 60/684,141, and U.S. Provisional Application No. 60/731,100 are hereby incorporated by reference herein in their entirety.
Number | Name | Date | Kind
---|---|---|---
3541258 | Doyle et al. | Nov 1970 | A
3718765 | Halaby | Feb 1973 | A
3879578 | Wildi | Apr 1975 | A
4068094 | Schmid et al. | Jan 1978 | A
4099027 | Whitten | Jul 1978 | A
4195202 | McCalmont | Mar 1980 | A
4232194 | Adams | Nov 1980 | A
4438526 | Thomalla | Mar 1984 | A
4852170 | Bordeaux | Jul 1989 | A
4905278 | Parker | Feb 1990 | A
5036542 | Kehoe et al. | Jul 1991 | A
5355430 | Huff | Oct 1994 | A
5781640 | Nicolino, Jr. | Jul 1998 | A
6188771 | Horrall | Feb 2001 | B1
6888945 | Horrall | May 2005 | B2
7143028 | Hillis et al. | Nov 2006 | B2
20030091199 | Horrall et al. | May 2003 | A1
20040019479 | Hillis et al. | Jan 2004 | A1
20040125922 | Specht | Jul 2004 | A1
20050065778 | Mastrianni et al. | Mar 2005 | A1
20060009969 | L'Esperance et al. | Jan 2006 | A1
20060109983 | Young et al. | May 2006 | A1
Number | Date | Country
---|---|---
20070203698 A1 | Aug 2007 | US

Number | Date | Country
---|---|---
60642865 | Jan 2005 | US
60684141 | May 2005 | US
60731100 | Oct 2005 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 11326269 | Jan 2006 | US
Child | 11588979 | | US