The invention relates to producing sound, and more particularly to communication components for producing sound for received audio streams.
In speech recognition systems and other speech-based systems, a Text-to-Speech (TTS) audio stream is generally created by a TTS engine. A TTS engine takes text data and converts the text into spoken words in an audio stream, which may then be played back on a variety of audio production devices; the audio stream includes an audio waveform and may include other data related to the audio waveform. When used in conjunction with speech recognition circuitry that recognizes a user's speech or speech utterances, a TTS engine allows an ongoing spoken dialog between a user and a speech-based system, such as for performing speech-directed work.
Those skilled in the art recognize that a phoneme is the smallest segmental unit of sound employed in a language to form meaningful contrasts between utterances. In the English language, for example, there are approximately 44 phonemes, which when used in combinations may form every word in the English language. A TTS engine generally performs the conversion from text to an audio stream by splitting each word in the text string into a sequence of the word's component phonemes. Then the units of sound for each of the phonemes in the sequence are connected in sequential order into an audio stream that can be played on a variety of sound production devices.
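By way of illustration only, the following minimal sketch shows this phoneme-concatenation step; the lexicon, phoneme inventory, and unit waveforms are hypothetical placeholders, not data from any particular TTS engine.

```python
import numpy as np

SAMPLE_RATE = 16000

# Hypothetical lexicon mapping words to phoneme sequences.
LEXICON = {"pick": ["p", "ih", "k"], "two": ["t", "uw"]}

# Hypothetical unit inventory: one short (silent, here) waveform per phoneme.
UNITS = {ph: np.zeros(int(0.08 * SAMPLE_RATE), dtype=np.float32)
         for ph in ("p", "ih", "k", "t", "uw")}

def synthesize(text: str) -> np.ndarray:
    """Split each word into its component phonemes, then connect the unit
    waveforms in sequential order into one audio stream."""
    units = [UNITS[ph] for word in text.lower().split()
             for ph in LEXICON[word]]
    return np.concatenate(units)

waveform = synthesize("pick two")  # ready for playback on an audio device
```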
When a TTS engine generates a TTS audio waveform from text, the TTS engine may output metadata that corresponds to the generated audio waveform. This metadata generally contains a text representation of each phoneme provided in the audio stream and may also provide an indication of the position of the phoneme in the audio waveform (i.e. where the phoneme occurs when the audio waveform is produced for listening).
TTS engines and the creation of audio streams based on text data have been widely used in a variety of communication technologies, such as automated systems that provide audio feedback and/or instructions to a user. In particular, they have been used in speech-based work environments to provide workers with audio instructions related to tasks the workers are to perform. In these systems, a worker is typically equipped with a portable terminal device that receives data from a management computer over a communication network. The link between the terminal device and the management computer or central system is usually a wireless link, such as a Wi-Fi link. The data generally comprises instructions for the worker, either in text or audio format. The terminal may convert received text data to an audio stream, or the management computer may convert the text to an audio stream prior to transmitting the instructions to the terminal. The generated audio stream may include an audio waveform and metadata associated with the audio waveform, and may be generated using a TTS engine, audio recordings, or a combination of the two.
Generally, the audio stream is produced as sound for the worker through use of a communication component that is in communication with the management computer and/or the terminal device. The communication component may be, for example, a headset having a speaker for production and a microphone for voice input, or similar devices. The audio stream, which includes an audio waveform and has the instructions in audio format, is received by the communication component and produced as sound or speech for the worker.
Conventional systems and methods for producing sound buffer the received audio waveform and begin playback once a predetermined amount of data has been received. In optimal conditions, playback of the audio waveform consumes more time than it takes to receive the subsequent audio waveform and provide it to a production buffer. Hence, the transition from the audio waveform being produced to the playback of the subsequent audio waveform should occur without any noticeable indication of the transition to the user of the terminal device and any communication component.
However, in conventional systems, a delay in the reception of data, such as a delay over a wireless link, may lead to a situation where playback of a received audio waveform completes before the subsequent audio stream and audio waveform have been fully received into the buffer. This delay in buffering the audio waveforms often leads to what can generally be described as “choppy” production of sound for the user; other common descriptions include “skipping,” “popping,” and “stuttering.” In short, production must pause while waiting for the subsequent audio stream and audio waveform to be received into the buffer. The skipping is due to a failure to fully buffer the subsequent audio waveform before production of the previous audio waveform ends. In many communication systems, these breaks in production are caused by delays in receiving and/or processing the received audio streams, such as over a wireless communication link.
In communication systems that involve producing sound that includes spoken words or speech, the skipping that is due to delay in the system can result in unintelligible or inaccurate sound being produced for a user of the communication component. Depending on the specific application of the communication system that transmits audio feedback and/or instructions to a user, an unintelligible or inaccurate production of audio in the system can render a conventional system unusable for its intended purpose. Overall, the effects of the errors in production described may be considered to affect the quality of the produced sound for a user of the communication component, leading to degraded intelligibility, clarity, usability and/or accuracy.
As discussed, in conventional systems, any delay in receiving and/or processing a subsequent audio waveform leads to skipping. Some techniques can be used to address this issue. Compressing the waveform reduces the time it takes to transfer the waveform and reduces the likelihood that a delay will interrupt playback. However, this is not always adequate and does not address intelligibility when a dropout does occur.
Another technique is to buffer all of or a portion of the waveform on the receiving side before starting playback. The downside of this approach is that it can cause a delay before playback is started while the receiver waits for the waveform to be received. However, this delay is unnecessary in cases when the waveform is transferred at a faster rate than it is being played, so it would be desirable to eliminate it when possible.
Another technique used to address this issue is for the receiver to repeat a portion of the audio. When the receiver of some systems does not receive the next segment of the waveform to be played in time (i.e. before it finishes playing what it has received), it repeatedly plays the last segment of audio that it has received to fill time until it receives the next portion of the waveform. This can prevent the audio from dropping out, but when the portion of the waveform that is repeated is not stationary or periodic, it can produce uneven sounds (clicks and stuttering).
For a wireless headset in industrial environments, when transaction rates are high, the average latency (of delivering verbal instructions to the user wearing a wireless headset) can have a meaningful effect on the value of the system. It can also affect worker acceptance of the system.
Intelligibility and smoothness are also important to the system's value and to worker acceptance. Difficult-to-understand and/or choppy audio can cause worker delays and can adversely affect worker acceptance of the system.
Accordingly, there is a need, unmet by conventional communication systems, to address unintelligible or inaccurate production of sound from audio waveforms and speech due to delay in receiving and/or processing in the communication component.
An apparatus and method are provided to mitigate the effects of delay in receiving and/or processing audio waveforms on the quality of the production of sound from those audio waveforms.
The apparatus includes transceiving circuitry configured to receive an audio stream. The audio stream includes an audio waveform. Memory, such as a buffer, is configured to store the received audio stream. Circuitry is configured to produce sound using the audio waveform. Processing circuitry is configured to analyze the received audio stream and identify at least one modification segment of the audio waveform. The modification segment corresponds to a segment of the audio waveform where production of the audio waveform may be modified to mitigate a delay in receiving the audio stream. The processing circuitry drives production of sound using the audio waveform based at least in part on the identified modification segment.
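As a structural sketch only, the following outlines how such an apparatus might be organized in software; the class and method names (e.g., `find_modification_segments`) are assumptions for illustration, not an implementation from this specification.

```python
from collections import deque

class CommunicationComponent:
    """Structural sketch: transceiving circuitry feeds a buffer, processing
    circuitry identifies modification segments, and sound production is
    driven based at least in part on those segments."""

    def __init__(self, transceiver, speaker, analyzer):
        self.transceiver = transceiver  # stands in for transceiving circuitry
        self.speaker = speaker          # stands in for production circuitry
        self.analyzer = analyzer        # processing circuitry (analysis)
        self.buffer = deque()           # memory storing the received stream
        self.segments = []              # identified modification segments

    def on_receive(self, audio_chunk) -> None:
        """Store received audio data and re-identify modification segments."""
        self.buffer.append(audio_chunk)
        self.segments = self.analyzer.find_modification_segments(self.buffer)

    def produce(self) -> None:
        """Drive sound production, modifying it at identified segments."""
        while self.buffer:
            chunk = self.buffer.popleft()
            self.speaker.play(chunk, extend_at=self.segments)
```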
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the detailed description of the embodiments given below, serve to explain the principles of the invention.
It should be understood that the appended drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of the invention. The specific design features of the sequence of operations as disclosed herein, including, for example, specific dimensions, orientations, locations, and shapes of various illustrated components, will be determined in part by the particular intended application and use environment. Certain features of the illustrated embodiments have been enlarged or distorted relative to others to facilitate visualization and clear understanding. In particular, thin features may be thickened, for example, for clarity or illustration.
Embodiments of the invention include systems and methods directed towards improving the intelligibility and clarity of production of sound in communication systems having communication components receiving audio from a communication network and producing sound based on the received audio. More specifically, embodiments of the invention mitigate the effects of delay in receiving and processing audio waveforms by modifying production.
In work environments, a worker may receive an audio stream using a worker communication component connected to a communication network. The audio stream may typically include an audio waveform, where the audio waveform provides audio or speech instructions corresponding to tasks the worker is supposed to perform. Generally, the worker communication component then produces sound based on the audio waveform for the worker using audio production circuitry, such as a speaker, and processing circuitry drives the audio production circuitry to produce the sound based on or using the received audio waveform.
In one exemplary embodiment of the invention, as discussed below, the communication component is in the form of a wireless device that has a wireless link to a computer, such as a portable computer device. However, the overall invention is not limited to such an example.
Headset 42 and the various other components coupled therewith through one or more wireless communication networks 48 might implement different networks. For example, in one embodiment of the invention, a wireless headset 42, such as an SRX® device available from Vocollect, Inc. of Pittsburgh, Pa., is used in conjunction with a portable terminal device 50, such as a TALKMAN® device, also available from Vocollect, Inc. Headset 42 may couple directly with terminal device 50 through a suitable short-range network, such as a Bluetooth link, as indicated by link 60.
While one exemplary device for practicing the invention is the TALKMAN® device from Vocollect, Inc., those skilled in the art will recognize that device 50 may comprise any number of devices including a processor and memory, including, for example, a personal computer, laptop computer, hand-held computer, smart-phone, server computer, server computer cluster, and the like.
In accordance with one embodiment of the invention, headset 42 acts as a receiver to receive an audio stream, including an audio waveform, to play to a user through a speaker. Such an audio waveform may come from mobile computer device 50 or some other device.
In other embodiments of the invention, the communication device processing circuitry determines the expected time needed to receive a subsequent audio stream. That subsequent audio stream might be the portion of the audio stream that remains to be sent, or the portion of the audio stream that includes the next modification segment. In some embodiments, determining the expected time needed to receive a subsequent portion of the audio stream from a communication network may include receiving data over the communication network that indicates the size of the subsequent portion and analyzing the received data to determine the size of the audio stream that is remaining or not yet received. Such size information may be embedded in the header for that data, for example. In some embodiments, determining the expected time may include analyzing data associated with the communication network, where the data may indicate one or more characteristics of the network, including, for example, historical transceiving rates, bandwidth, or other such communication network characteristics. In these embodiments, the expected time needed to receive a subsequent portion of the audio stream may be determined based at least in part on the determined size of the subsequent audio stream and/or one or more communication network characteristics. The expected time to receive a subsequent portion of the audio stream might also be compared to a threshold (block 106) to determine whether it will be necessary to modify production.
The communication device processing circuitry is configured to determine whether a delay in sound production may occur based on a comparison of the production time of the current audio data to the time expected to receive additional or subsequent audio data. That difference might also be compared to a threshold (block 106). Therefore, in some embodiments, the threshold comparison is based on the remaining audio versus a threshold. In another embodiment, the expected time to receive the subsequent audio stream, or a remaining portion of a current audio stream, might be compared to a threshold. In still other embodiments, the communication device circuitry analyzes the determined remaining production time of the audio waveform together with the determined expected time needed to receive the subsequent audio stream or the remaining portion of a current audio stream, and compares the result against a threshold, to determine whether production of the audio waveform may end before the subsequent audio stream has been received. As noted, if the communication component determines that production of the audio waveform will not end before the subsequent audio stream is received, production is not modified (block 108) and proceeds as normal.
However, if the communication device processing circuitry determines that production of the audio waveform may end before the subsequent audio stream or portion of an audio stream will be received, production of the audio waveform may be modified (block 110).
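A minimal sketch of this decision logic (blocks 106-110), assuming the remaining playback length is known from the buffer and throughput is estimated from recent network history; the names and threshold value are illustrative, not taken from the specification:

```python
def should_modify(remaining_samples: int, sample_rate: int,
                  bytes_remaining: int, est_bytes_per_sec: float,
                  threshold_sec: float = 0.05) -> bool:
    """Return True when production is expected to run dry before the
    subsequent audio data arrives."""
    playback_time = remaining_samples / sample_rate     # time left to play
    receive_time = bytes_remaining / est_bytes_per_sec  # time left to receive
    # Modify production only when reception is expected to finish too late.
    return (receive_time - playback_time) > threshold_sec
```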
While flowchart 100 has been discussed in a general scenario as a serial progression, the invention is not so limited. As such, the analysis and determining operations discussed above with respect to flowchart 100 may be performed substantially in parallel, such that as the audio waveform is being produced, the communication component is determining the expected time needed to receive the subsequent audio stream, or portion of an audio stream, whether a delay will occur, whether to modify production, etc.
Moreover, in many embodiments, the operations described in flowchart 100 may be repeated or performed continuously, such that the communication component may determine whether to modify production of the audio waveform as the audio waveform is being produced. In these embodiments, the communication device receives and analyzes data indicating network characteristics, data associated with a subsequent audio stream, and other such data to determine whether to modify production of the audio waveform substantially in real-time. As such, the communication component may change between not modifying production and modifying production dynamically and in response to changes in the network characteristics, the subsequent audio stream, etc.
Once it has been determined that modification is necessary, the processing circuitry of the communication device, such as headset 42, is configured to identify those segments in the audio waveform that can be modified without significantly degrading the intelligibility of the produced waveform. In one embodiment of the invention, the processing circuitry is configured to identify segments in the waveform that can be extended and/or repeated without significantly degrading the intelligibility of the waveform. Such identified segments are generally referred to herein as “modification segments”, and can be determined in a number of different ways in accordance with aspects of the invention.
The identified modification segments of the audio waveform are those segments that correspond to portions of the waveform where sound production may be modified without substantially affecting the quality of the sound production. As such, production of sound based on or using the audio waveform may be modified at the identified modification segments so that the effects on production quality due to delays in receiving and/or processing the audio stream are mitigated. As discussed further below, in one embodiment, modification of production includes extending the waveform by pausing or delaying production of sound for a desired amount of time at one or more modification segments, or by decreasing the rate of production at each modification segment. In another embodiment, certain sounds or portions of the waveform are extended at the modification segments. As such, embodiments consistent with the invention extend the time of production of sound based on the audio waveform, thereby increasing the amount of time before production ends, which in turn allows increased time to receive a subsequent audio stream, and provide such extension in a way that mitigates degradation of sound production quality. The communication device processing circuitry thus produces sound using the audio waveform based at least in part on the identified modification segments (block 118).
In some embodiments of the invention, the audio stream received from a transmitting component, such as mobile device 50, may include just a sampled audio waveform. In other embodiments, the audio stream may include the sampled audio waveform along with metadata. The metadata may include the word or phoneme sequence being produced, along with synchronization information that identifies the places in the waveform where each word or phoneme occurs. In one embodiment of the invention, as discussed further hereinbelow, the metadata is utilized for determining the noted modification segments in the audio waveform. In another embodiment of the invention, when the metadata is not available, the processing circuitry of the receiving communication device, such as the headset 42, is configured to analyze the audio waveform to find suitable modification segments. In accordance with aspects of the invention, the modification segments are those identified segments for which intelligibility of the produced audio is not substantially reduced when the sound, or the lack of sound, is extended.
In accordance with embodiments of the invention, segments of an audio waveform that fit this criterion include the natural language pauses or stops between words in the audio waveform. As such, one embodiment of the invention recognizes and utilizes such pauses or stops as the modification segments: production is paused at those pauses or stops, extending them into longer pauses. In another embodiment of the invention, the natural stops of the spoken language, based upon phonemes identified from the metadata, are used. That is, the natural stops in spoken language, often referred to as “voiceless glottal plosives,” are used. For example, certain portions of words in English include pronunciations where no sound is produced, such as before the release of air through the vocal tract that completes the phoneme. Such modification segments could include those phonemes that typically include no sound (stationary), or those phonemes that might be considered quasi-stationary, as discussed further hereinbelow.
With respect to the exemplary audio waveform 162, the processing circuitry of device 42 is configured to analyze the audio waveform 162 using known signal processing methods to determine segments having low amplitude, such as segments 164, 168, and 170.
As described above, the processing circuitry may be configured to analyze the audio waveform of the received audio stream using known signal processing methods to identify modification segments, where the modification segments correspond to segments of the audio waveform that are quasi-stationary. That is, segments of the audio waveform where the sound is constant or generally constant in its amplitude envelope, or has almost constant short-time energy or almost constant short-time spectrum, are considered quasi-stationary. With reference to exemplary audio waveform 162, some embodiments of the invention may analyze the audio waveform 162 and identify segments such as segments 166 and 172 as modification segments, as discussed above with respect to quasi-stationary segments.
Exemplary graph 160 illustrates a simplified audio waveform 162 for purposes of illustration. In some embodiments consistent with the invention, an audio waveform may be analyzed using known signal processing methods to determine segments that are low-amplitude and/or quasi-stationary. The audio waveform to be produced may be a digitally sampled audio waveform. Those skilled in the art will recognize that a digitally sampled audio waveform comprises data including discrete values which represent the amplitude of an audio waveform taken at different points in time, and as such, digital signal processing might be implemented by the processing circuitry of the device 42, 50 doing the analysis.
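A sketch of one such known signal-processing approach, using short-time energy over fixed windows to flag low-amplitude and quasi-stationary frames; the window length and thresholds below are illustrative assumptions:

```python
import numpy as np

def find_candidate_segments(x: np.ndarray, sample_rate: int,
                            win_ms: float = 20.0,
                            low_energy: float = 1e-4,
                            flatness: float = 0.1):
    """Return (start, end) sample ranges of low-amplitude or
    quasi-stationary frames of the sampled waveform x."""
    win = int(sample_rate * win_ms / 1000)
    frames = x[: len(x) // win * win].reshape(-1, win)
    energy = (frames ** 2).mean(axis=1)  # short-time energy per frame
    segments = []
    for i, e in enumerate(energy):
        low = e < low_energy  # near-silent frame (pause or stop)
        # Quasi-stationary: energy nearly unchanged from the prior frame.
        stationary = i > 0 and abs(e - energy[i - 1]) < flatness * max(e, 1e-12)
        if low or stationary:
            segments.append((i * win, (i + 1) * win))
    return segments
```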
As noted above, a TTS engine accepts text as input and produces a sampled audio waveform corresponding to the input text. The audio waveform is typically in a raw PCM format, which can be written directly to an audio CODEC to be played by a speaker or other sound production circuitry. In one embodiment of the invention, the TTS engine may also produce metadata along with the sampled audio waveform. The metadata may include the word, phoneme, or sound sequence being produced, along with synchronization information that identifies where in the waveform the word, phoneme, or sound occurs. As such, the processing circuitry may analyze the associated metadata to determine positions of sound types associated with a desired subset of phonemes or sounds in the audio waveform (block 182). The metadata may also include lip position information along with its synchronization information; lip position information is sometimes provided by a TTS engine to synchronize an avatar's face with the audio.
The metadata or subset of phonemes or sounds may correspond to natural pauses in the audio waveform or in pronunciation. Phonemes that have natural pauses or stops in the English language include, for example, the phonemes associated with the letters “t”, “p”, “k”, and “ch”, and other phonemes that have segments where no sound is produced (i.e., a pause or period of no sound may occur while speaking a word containing the phoneme). Therefore, the subset of phonemes or sounds may correspond to phonemes with stops that provide corresponding points to pause production, or to repeat and/or extend the sound, without significantly degrading the quality of the production. Also, quasi-stationary phonemes and sounds may be considered types of sounds that may be repeated and/or extended without significantly degrading the quality of the production. For example, in the English language, the sounds associated with phonemes related to vowels (i.e., sounds associated with letters such as “a”, “e”, “i”, “o”, and “u”) or fricatives (i.e., sounds associated with letters such as “v”, “f”, “th”, “z”, “s”, “y”, and “sh”) may often, to some extent, be extended or repeated in production without significantly degrading the quality. The processing circuitry is configured to identify segments of the audio waveform that correspond to the middle or quasi-stationary segments of the waveform of the desired phonemes as modification segments (block 184). Likewise, lip position information may be used to identify quasi-stationary segments of the audio waveform. Thus, types of sounds that may serve as modification segments include, for example, stops, vowels, fricatives, low-amplitude segments, and quasi-stationary segments.
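A sketch of this metadata-driven identification, assuming the synchronization information is represented as (phoneme, start sample, end sample) entries; that representation and the central-half heuristic are assumptions for illustration:

```python
# Phoneme subsets drawn from the examples given in the text above.
STOPS = {"t", "p", "k", "ch"}
VOWELS = {"a", "e", "i", "o", "u"}
FRICATIVES = {"v", "f", "th", "z", "s", "y", "sh"}
EXTENDABLE = STOPS | VOWELS | FRICATIVES

def segments_from_metadata(metadata):
    """Return modification segments: the middle portion of each phoneme
    belonging to the desired subset of stop, vowel, or fricative sounds."""
    segments = []
    for phoneme, start, end in metadata:
        if phoneme in EXTENDABLE:
            mid = (start + end) // 2
            quarter = (end - start) // 4
            # Keep the central half of the phoneme, where the sound is
            # most likely stationary or quasi-stationary.
            segments.append((mid - quarter, mid + quarter))
    return segments
```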
Once the various modification segments for a waveform have been determined, the waveform is produced using those modification segments to extend the waveform. In accordance with one feature of the invention, the waveform may be extended by repeating or elongating the production of the waveform at a particular modification segment. Extending the waveform might also be performed by repeating or elongating a natural stop or a modification segment that corresponds to a low-amplitude segment of the waveform. In another aspect of the invention, the sounds associated with phonemes that are quasi-stationary, such as phonemes related to vowels or fricatives, may be extended or repeated to extend the waveform. Note that when extending some waveforms, care must be taken to prevent unnaturally rapid transitions, which could cause clicks in the audio. Roucos and Wilgus describe one way to do this in “High Quality Time-Scale Modification for Speech,” IEEE Int. Conf. Acoust., Speech, Signal Processing, Tampa, Fla., March 1985, pp. 493-496, which is incorporated herein by reference in its entirety.
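The following is a simplified sketch in the spirit of overlap-add time-scale modification (not a reproduction of the Roucos and Wilgus algorithm): a quasi-stationary segment is repeated with a raised-cosine cross-fade at each junction so that no abrupt transition is heard as a click.

```python
import numpy as np

def extend_segment(x: np.ndarray, start: int, end: int,
                   repeats: int = 1, fade: int = 64) -> np.ndarray:
    """Repeat x[start:end] `repeats` times, cross-fading each junction.

    Assumes fade <= (end - start) so the ramp fits inside the segment.
    """
    x = x.astype(np.float64)
    seg = x[start:end]
    # Raised-cosine ramp from 0 to 1, used to blend across each junction.
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(fade) / fade))
    out = x[:end].copy()
    for _ in range(repeats):
        # Blend the tail of the audio so far into the head of the repeat,
        # avoiding the abrupt transition that would be heard as a click.
        head = seg[:fade] * ramp + out[-fade:] * (1.0 - ramp)
        out = np.concatenate([out[:-fade], head, seg[fade:]])
    return np.concatenate([out, x[end:]])
```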
In some embodiments, the communication device processing circuitry analyzes the remaining time for production of an audio waveform included in a received audio stream. An expected time to receive a subsequent audio stream might also be evaluated to determine a suitable modification duration for a modification step (block 222). As such, the modification duration may be determined as the additional time expected to receive the subsequent audio stream after production of the audio waveform ends. The processing circuitry analyzes the identified modification segments of the audio waveform that is queued for production, or of the audio waveform that is currently being produced, and determines the modification duration, i.e., the amount of time the production of each identified modification segment must be extended so that the total extended production time of the audio waveform will be similar to or greater than the expected time to receive and/or process the subsequent audio stream (block 224).
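A sketch of blocks 222 and 224 under the assumption that the shortfall is split evenly across the identified segments (a weighted split, e.g., by segment type or length, would also fit the description); all names are illustrative:

```python
def plan_extensions(remaining_play_sec: float, expected_recv_sec: float,
                    segments: list) -> list:
    """Return (segment, extension_sec) pairs whose total extension covers
    the expected shortfall between reception and playback."""
    shortfall = max(0.0, expected_recv_sec - remaining_play_sec)
    if not segments or shortfall == 0.0:
        return [(s, 0.0) for s in segments]
    per_segment = shortfall / len(segments)  # equal split across segments
    return [(s, per_segment) for s in segments]
```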
The communication device processing circuitry is configured to perform one or more operations to extend production of the audio waveform (block 226). In one embodiment of the invention, the processing circuitry provides such an extension for at least one of the recognized modification segments; such an extension may be suitable for handling a short delay in receiving the next subsequent audio waveform. Alternatively, the processing circuitry may recognize multiple modification segments and provide an extension at each of them in order to cumulatively create a delay in the production of the audio waveform. Extending the waveform at a modification segment may take various forms.
In some embodiments, the communication component may extend the waveform by pausing production of sound for a desired amount of time at an identified modification segment. Pausing production at a modification segment may be implemented, for example, when the modification segment indicates a pause or stop in the waveform. As noted above, such a pause or stop may be indicative of a pause between words in the waveform, or might be indicated by a natural language stop for certain phonemes. As such, production might be paused for a desirable delay time at one or more modification segments in order to receive the rest of the audio stream or the subsequent audio stream, so that sound production is not broken in a way that affects the intelligibility of the sound or speech. As discussed further herein, another embodiment of the invention extends the sound at a particular modification segment. As may be appreciated, pausing production of sound might be considered to be extending the sound, or lack of sound, associated with a natural pause in the waveform.
In another embodiment of the invention, the communication device processing circuitry is configured to extend the waveform by extending production of sound at one or more identified modification segments. In these embodiments, the sound or lack of sound at each modification segment may be extended, such as by repeating the identified modification segment or the sound associated therewith, so that the production time for the waveform is suitably extended. Advantageously, such extension may be performed at identified modification segments corresponding to stationary or quasi-stationary segments of the audio waveform. Extending the sound or lack of sound at stationary and/or quasi-stationary segments, such as by repeating the modification segment at certain portions of the waveform like a natural language stop, may have essentially the same effect as pausing production as noted above. Extending the waveform or sound at stationary and quasi-stationary modification segments mitigates any degradation in the quality of the produced sound.
Modification of production has been illustrated in the examples discussed above with modification segments that are repeated or inserted with substantially equal duration, but the invention is not so limited. A communication device consistent with embodiments of the invention may vary the modification duration, i.e., the length of the pause or of the repeated or extended segments, as necessary during production at the identified modification segments in order to achieve the desired waveform extension. For example, the duration of the inserted pauses or repeated or extended segments might vary based at least in part on how long it is expected to take to receive the subsequent portion of the waveform containing the next modification segment, and/or on other variables, including, for example, the production time duration of the identified modification segment, the type of modification segment identified, the specific sound or phoneme corresponding to the identified modification segment, etc.
The invention has been described herein with respect to the processing circuitry of the communication component, such as a headset, but the invention is not so limited. In some embodiments consistent with the invention, analysis and identification of the audio stream may be performed by a remote computer, portable terminal or other such transmitting devices and the processing circuitry therein. In these embodiments, modification data indicating the position of the identified modification segments in an audio waveform may be included in an audio stream along with the associated audio waveform for transmission to the communication device, such as a headset. In some embodiments, the communication device, such as the headset, may then analyze the transmitted modification data, and the communication component may then modify sound production based on the transmitted analyzed modification data of the received audio stream.
A computer or processing device (e.g., a headset, a portable terminal, mobile computer, remote computer, smart-phone, tablet computer, or other such device) analyzes an audio stream, as noted, to identify modification segments of the audio waveform (block 342). As discussed previously, the audio stream includes an audio waveform and may include metadata associated with the audio waveform, and the analysis of the audio stream may include analyzing the audio waveform and/or the associated metadata to indicate suitable modification segments.
The processing or computer device generates modification segment data based at least in part on the identified modification segments (block 344), where the modification data indicates the position of the modification segments in the audio waveform included in the audio stream. If the processing occurs at a location (e.g., device 50) other than where the sound is produced (e.g., the headset), the computing or processing device may package the generated modification data as header data for the included audio stream, such that the modification data will be read by a production device (e.g., headset 42) prior to producing the included audio waveform. As such, in these embodiments, when the audio waveform is loaded for sound production, the positions of the modification segments in the audio waveform will be identified for the receiving and producing device.
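For illustration, one hypothetical way to package such modification data as header data is a segment count followed by (start, end) sample offsets ahead of the waveform bytes; the specification does not fix a wire format, so this layout is an assumption:

```python
import struct

def pack_stream(waveform_bytes: bytes, segments: list) -> bytes:
    """Prepend a header: segment count, then (start, end) sample offsets."""
    header = struct.pack("<I", len(segments))
    for start, end in segments:
        header += struct.pack("<II", start, end)
    return header + waveform_bytes

def unpack_stream(data: bytes):
    """Read the header first, so segment positions are known before the
    waveform is loaded for production."""
    (count,) = struct.unpack_from("<I", data, 0)
    offset, segments = 4, []
    for _ in range(count):
        start, end = struct.unpack_from("<II", data, offset)
        segments.append((start, end))
        offset += 8
    return segments, data[offset:]
```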
The analyzed audio stream and modification data are stored in a buffer data structure of the memory of the communication device 42 (block 346). If the analyzed audio stream is sent from another device, the audio stream might be stored in a buffer data structure in the memory of the communication component as the audio stream is received.
The communication component dynamically monitors the audio stream and modification data in the buffer to determine if the buffered audio waveform includes any identified modification segments (block 352). In response to determining that the buffered audio waveform includes modification segments, the communication device queues up for production the audio waveform up to and including the last identified modification segment stored in the buffer.
While the communication device 42 produces the audio waveform it has received, the communication device continues to transceive and buffer a subsequent audio stream or a continuing portion of an audio stream (block 346), such that production of the subsequent audio stream may begin following the end of production of the previous audio stream or previous audio stream portion. As discussed previously, in accordance with the invention, the communication device 42 may modify production of the loaded audio waveform at the identified modification segments appropriately to mitigate delays in receiving and processing the remaining or subsequent audio stream or audio stream portion. Thus, in these embodiments, the communication component may modify the production to extend the waveform as appropriate such that the production time is extended, thereby extending the time that a subsequent audio stream may be received and buffered.
Therefore, in some embodiments, the communication device 42 may delay production until the buffer includes at least one modification segment or the buffer is full. In these embodiments, production of sound is generally delayed at the noted modification segments as opposed to random locations in an audio waveform that coincide with the end of the buffer. This improves the quality of the production, while also increasing the speed at which production may begin by not waiting for as much data to be received as would otherwise be needed to mitigate choppiness.
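A sketch of this buffering policy, with assumed names and a buffer measured in samples: playback begins only once a modification segment is buffered (or the buffer is full), and audio is queued up to and including the last buffered segment.

```python
def ready_to_play(buffered_samples: int, buffer_capacity: int,
                  segments_in_buffer: list) -> bool:
    """Begin production once a modification segment is buffered or the
    buffer is full, rather than at an arbitrary fill level."""
    return bool(segments_in_buffer) or buffered_samples >= buffer_capacity

def queue_length(segments_in_buffer: list, buffered_samples: int) -> int:
    """Queue audio up to and including the last buffered modification
    segment; if none (buffer full), queue everything received so far."""
    if segments_in_buffer:
        return segments_in_buffer[-1][1]  # end offset of the last segment
    return buffered_samples
```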
Accordingly, the waveform data is buffered and placed in a queue for production as it is received.
The modification segments can be identified before or after the audio stream is sent over the communication channel, and the invention is not limited to either scenario; it covers both. The identification of modification segments could be done before the audio stream is transmitted, or at the receiver after the audio stream has been received. Therefore, the operations of flowchart 340 may be performed at the transmitting device, at the receiving device, or split between the two.
While embodiments of the invention have been illustrated by a description of the various embodiments and examples, and while these embodiments have been described in considerable detail, it is not the intention of the applicants to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Embodiments of the invention in their broader aspects are therefore not limited to the specific details and representative apparatus and method. Moreover, any of the blocks of the above flowcharts may be deleted, augmented, made simultaneous with another, combined, or otherwise altered in accordance with the principles of the embodiments of the invention. Accordingly, departures may be made from such details without departing from the scope of the applicant's general inventive concept.
Other modifications will be apparent to one of ordinary skill in the art. Therefore, the invention lies in the claims hereinafter appended.