Claims
- 1. A data embedding method for embedding optional data in encoded voice code obtained by encoding voice by a prescribed voice encoding scheme, comprising the steps of:
determining whether a data embedding condition is satisfied using a first element code from among element codes constituting the encoded voice code, and a threshold value; and embedding optional data in the encoded voice code by replacing a second element code with the optional data if the data embedding condition is satisfied.
- 2. The data embedding method according to claim 1, further comprising a step of comparing a dequantized value of the first element code with the threshold value, and determining whether the data embedding condition is satisfied based upon the result of the comparison.
- 3. The data embedding method according to claim 1, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
when a dequantized value of the fixed codebook gain code is smaller than the threshold value, it is determined that the data embedding condition is satisfied and the noise code is replaced with optional data, whereby the optional data is embedded in the encoded voice code.
- 4. The data embedding method according to claim 1, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
when a dequantized value of the pitch-gain code is smaller than the threshold value, it is determined that the data embedding condition is satisfied and the pitch-lag code is replaced with optional data, whereby the optional data is embedded in the encoded voice code.
- 5. The data embedding method according to claim 1, wherein a portion of the embedded data is adopted as data-type identification data, and the type of the embedded data is specified by this data-type identification data.
- 6. The data embedding method according to claim 1, wherein a plurality of the threshold values are set and, on the basis of the dequantized value of the first element code, embedded data is distinguished as being a data sequence in its entirety or a data/control code sequence, which is a format that is capable of identifying a distinction between data and a control code.
- 7. An embedded-data extracting method for extracting data embedded in encoded voice code that has been encoded by a prescribed voice encoding scheme, comprising the steps of:
determining whether a data embedding condition is satisfied using a first element code from among element codes constituting the encoded voice code, and a threshold value; and if the data embedding condition is satisfied, determining that data has been embedded in a second element code portion of the encoded voice code and extracting this embedded data.
- 8. The embedded-data extracting method according to claim 7, further comprising a step of comparing a dequantized value of the first element code with the threshold value, and determining whether the data embedding condition is satisfied based upon the result of the comparison.
- 9. The embedded-data extracting method according to claim 7, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
when a dequantized value of the fixed codebook gain code is smaller than the threshold value, it is determined that optional data has been embedded in the noise code portion and this embedded data is extracted.
- 10. The embedded-data extracting method according to claim 7, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
when a dequantized value of the pitch-gain code is smaller than the threshold value, it is determined that optional data has been embedded in the pitch-lag code portion and this embedded data is extracted.
- 11. The embedded-data extracting method according to claim 7, wherein a portion of the embedded data is adopted as data-type identification data, and the type of the embedded data is specified by this data-type identification data.
- 12. The embedded-data extracting method according to claim 7, wherein a plurality of the threshold values are set and, on the basis of the dequantized value of the first element code, embedded data is distinguished as being a data sequence in its entirety or a data/control code sequence, which is a format that is capable of identifying a distinction between data and a control code.
- 13. A data embedding/extracting method in a system having a voice encoding apparatus for encoding voice according to a prescribed voice encoding scheme and embedding optional data in encoded voice code thus obtained, and a voice reproducing apparatus for extracting embedded data from encoded voice code and reproducing voice from this encoded voice code, comprising the steps of:
defining beforehand a first element code and a threshold value used to determine whether data has been embedded or not, and a second element code in which data will be embedded based upon the result of the determination; when data is to be embedded, determining whether a data embedding condition is satisfied using the first element code and the threshold value, and embedding optional data in the encoded voice code by replacing the second element code with the optional data if the data embedding condition is satisfied; and when data is to be extracted, determining whether the data embedding condition is satisfied using the first element code and the threshold value, determining that optional data has been embedded in the second element code portion of the encoded voice code if the data embedding condition is satisfied, and extracting the embedded data.
- 14. The data embedding/extracting method according to claim 13, further comprising a step of comparing a dequantized value of the first element code with the threshold value, and determining whether the data embedding condition is satisfied based upon the result of the comparison.
- 15. The data embedding/extracting method according to claim 13, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
when a dequantized value of the fixed codebook gain code is smaller than the threshold value, it is determined that the data embedding condition is satisfied and the noise code is replaced with optional data, whereby the optional data is embedded in the encoded voice code, or it is determined that optional data has been embedded in the noise code portion and this embedded data is extracted.
- 16. The data embedding/extracting method according to claim 13, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
when a dequantized value of the pitch-gain code is smaller than the threshold value, it is determined that the data embedding condition is satisfied and the pitch-lag code is replaced with optional data, whereby the optional data is embedded in the encoded voice code, or it is determined that optional data has been embedded in the pitch-lag code portion and this embedded data is extracted.
- 17. The data embedding/extracting method according to claim 13, wherein a portion of the embedded data is adopted as data-type identification data, and the type of the embedded data is specified by this data-type identification data.
- 18. The data embedding/extracting method according to claim 13, wherein a plurality of the threshold values are set and, on the basis of the dequantized value of the first element code, embedded data is distinguished as being a data sequence in its entirety or a data/control code sequence, which is a format that is capable of identifying a distinction between data and a control code.
- 19. A data embedding apparatus for embedding optional data in encoded voice code obtained by encoding voice according to a prescribed voice encoding scheme, comprising:
an embedding decision unit for determining whether a data embedding condition is satisfied using a first element code from among element codes constituting the encoded voice code, and a threshold value; and a data embedding unit for embedding optional data in the encoded voice code by replacing a second element code with the optional data if the data embedding condition is satisfied.
- 20. The data embedding apparatus according to claim 19, wherein said embedding decision unit includes:
a dequantizer for dequantizing the first element code; a comparator for comparing a dequantized value, which is obtained by dequantization by said dequantizer, with the threshold value; and a determination unit for determining whether the data embedding condition is satisfied based upon the result of the comparison by said comparator.
- 21. The data embedding apparatus according to claim 19, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the fixed codebook gain code is smaller than the threshold value.
- 22. The data embedding apparatus according to claim 19, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the pitch-gain code is smaller than the threshold value.
- 23. The data embedding apparatus according to claim 19, further comprising an embed-data generating unit for generating embed data, a portion of which is type information that specifies the type of data.
- 24. The data embedding apparatus according to claim 19, wherein, on the basis of the dequantized value of the first element code, said data embedding unit decides to embed a data/control code sequence, which is a format that is capable of identifying a distinction between data and a control code, or only a data sequence.
- 25. A data extracting apparatus for extracting data embedded in encoded voice code that has been encoded according to a prescribed voice encoding scheme, comprising:
a demultiplexer for demultiplexing element codes constituting the encoded voice code; an embedding decision unit for determining whether a data embedding condition is satisfied using a first element code from among the element codes and a threshold value; and an embedded-data extracting unit for determining that optional data has been embedded in a second element code portion of the encoded voice code if the data embedding condition is satisfied, and extracting the embedded data.
- 26. The data extracting apparatus according to claim 25, wherein said embedding decision unit includes:
a dequantizer for dequantizing the first element code; a comparator for comparing a dequantized value, which is obtained by dequantization by said dequantizer, with the threshold value; and a determination unit for determining whether the data embedding condition is satisfied based upon the result of the comparison by said comparator.
- 27. The data extracting apparatus according to claim 25, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the fixed codebook gain code is smaller than the threshold value.
- 28. The data extracting apparatus according to claim 25, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the pitch-gain code is smaller than the threshold value.
- 29. A voice encoding/decoding system for encoding voice according to a prescribed voice encoding scheme and embedding optional data in encoded voice code thus obtained, and for extracting embedded data from the encoded voice code and reproducing voice from this encoded voice code, comprising:
a voice encoding apparatus for embedding optional data in encoded voice code obtained by encoding voice according to a prescribed voice encoding scheme; and a voice decoding apparatus for reproducing voice by applying decoding processing to encoded voice code that has been encoded by a prescribed voice encoding scheme, and extracting data that has been embedded in this encoded voice code; said voice encoding apparatus including:
an encoder for encoding voice according to a prescribed voice encoding scheme; an embedding decision unit for determining whether a data embedding condition is satisfied using a first element code from among element codes constituting the encoded voice code, and a threshold value; and a data embedding unit for embedding optional data in the encoded voice code by replacing a second element code with the optional data if the data embedding condition is satisfied; and said voice decoding apparatus includes:
a demultiplexer for demultiplexing encoded voice code into element codes; an embedding decision unit for determining whether the data embedding condition is satisfied using a first element code from among element codes constituting received encoded voice code, and a threshold value; an embedded-data extracting unit for determining that optional data has been embedded in a second element code portion of the encoded voice code if the data embedding condition is satisfied, and extracting the embedded data; and a decoder for decoding the received encoded voice code and reproducing voice; wherein the first element code and threshold value used to determine whether data has been embedded or not, and the second element code in which data will be embedded based upon the result of the determination, are defined beforehand in said voice encoding apparatus and said voice decoding apparatus.
- 30. The voice encoding/decoding system according to claim 29, wherein said embedding decision unit includes:
a dequantizer for dequantizing the first element code; a comparator for comparing a dequantized value, which is obtained by dequantization by said dequantizer, with the threshold value; and a determination unit for determining whether the data embedding condition is satisfied based upon the result of the comparison by said comparator.
- 31. The voice encoding/decoding system according to claim 29, wherein the first element code is a fixed codebook gain code and the second element code is a noise code, which is index information of a fixed codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the fixed codebook gain code is smaller than the threshold value.
- 32. The voice encoding/decoding system according to claim 29, wherein the first element code is a pitch-gain code and the second element code is a pitch-lag code, which is index information of an adaptive codebook; and
said embedding decision unit determines that the data embedding condition is satisfied when a dequantized value of the pitch-gain code is smaller than the threshold value.
- 33. A digital voice communication system for encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice, comprising:
means for analyzing voice data obtained by encoding input voice; means for embedding any code in a specific segment of a portion of the voice data in accordance with the result of the analysis; and means for transmitting the embedded data as voice data; whereby additional data is transmitted at the same time as ordinary voice.
- 34. A digital voice communication system for receiving transmitted voice data, which has been obtained by encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice as voice data, comprising:
means for analyzing the received voice data; and means for extracting code from a specific segment of a portion of the voice data in accordance with the result of the analysis; whereby additional data is received at the same time as ordinary voice.
- 35. A digital voice communication system for encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice, and for receiving transmitted voice data, which has been obtained by encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice as voice data, the system having a terminal device comprising a transmitter and a receiver;
said transmitter including:
means for analyzing voice data obtained by encoding input voice; means for embedding any code in a specific segment of a portion of the voice data in accordance with the result of the analysis; and means for transmitting the embedded data as voice data; and said receiver including:
means for analyzing the received voice data; and means for extracting code from a specific segment of a portion of the voice data in accordance with the result of the analysis; whereby additional data is transmitted between terminal devices bi-directionally at the same time as ordinary voice via a network.
- 36. The system according to claim 35, wherein said transmitter further includes means for generating the code for embedding using an image or personal information possessed by a user terminal; and
said receiver further includes means for extracting and outputting the embedded code; whereby multimedia transmission is made possible in the form of a voice call.
- 37. The system according to claim 35, wherein said transmitter further includes means for adopting a unique code as the code for embedding, wherein the unique code is that of a terminal employed by the user on the transmitting side or that of the user per se; and
said receiver further includes means for extracting an embedded code and discriminating its content.
- 38. The system according to claim 35, wherein said transmitter further includes means for adopting key information as the code for embedding; and
said receiver further includes:
means for extracting the key information; and means for enabling only a specific user to decompress voice data using the extracted key information.
- 39. The system according to claim 35, wherein said transmitter further includes means for adopting relation address information as the code for embedding; and
said receiver further includes:
means for extracting the relation address information; and means for telephoning an information provider or transferring a mail to an information provider by a single click using the relation address information.
- 40. A digital voice communication system for encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice, and for receiving transmitted voice data, which has been obtained by encoding voice by a prescribed voice encoding scheme and transmitting the encoded voice as voice data, the system comprising:
a terminal device; and a server device, which is connected to a network, for relaying voice data between terminal devices; said terminal device including:
voice encoding means for encoding input voice; means for transmitting encoded voice code data; means for analyzing received voice data; and means for extracting code from a specific segment of a portion of the voice data in accordance with the result of the analysis; and said server device includes:
means for receiving data exchanged mutually between terminal devices and determining whether the data is voice data; means for analyzing voice data if the received data is voice data; and means for embedding any code in a specific segment of a portion of the voice data in accordance with the result of the analysis, and transmitting the resultant voice data; whereby a terminal device that has received data via said server device extracts and outputs the code embedded by said server device.
- 41. A digital voice storage system for encoding voice by a prescribed voice encoding scheme and storing the encoded voice, comprising:
means for analyzing voice data obtained by encoding input voice; means for embedding any code in a specific segment of a portion of the voice data in accordance with the result of the analysis; and means for storing the embedded data as voice data; whereby additional information is also stored at the same time as ordinary digital voice.
- 42. A digital voice storage system for encoding voice by a prescribed voice encoding scheme and storing the encoded voice, comprising:
means for embedding any code in a portion of encoded voice data and storing the resultant voice data; means for analyzing the stored voice data when the stored voice data is decoded; and means for extracting the embedded code from a specific segment of the stored data in accordance with the result of the analysis.
- 43. A digital voice storage system for encoding voice by a prescribed voice encoding scheme and storing the encoded voice, comprising:
means for analyzing voice data obtained by encoding input voice; means for embedding any code in a specific segment of a portion of the voice data in accordance with the result of the analysis; means for storing the embedded data as voice data; means for analyzing the voice data when the stored voice data is decoded; and means for extracting the embedded code from the specific segment of the voice data in accordance with the result of the analysis.
- 44. The system according to claim 43, wherein the embedded code is speaking-party identifying information or storage-date information;
said system further comprising means for retrieving stored voice data, which is to be decompressed, using this information.
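The core of claims 1-4, 7-8, 19-20 and 25-26 is a single decision rule applied identically at the embedding and extracting ends: dequantize a first element code (for example the fixed codebook gain code of claims 3 and 9), compare it with a threshold value, and only if the value falls below the threshold treat the second element code (the noise code) as a data carrier. The following Python sketch is purely illustrative; the gain table, bit widths, frame layout and function names are assumptions and not part of the claims or of any particular codec.

```python
# Illustrative sketch of the threshold-gated embedding rule of claims 1-4 and
# the symmetric extraction rule of claims 7-8. The gain table, bit widths and
# frame layout are hypothetical stand-ins, not those of any particular codec.

THRESHOLD = 0.1  # hypothetical threshold shared by embedding and extracting ends

def dequantize_gain(gain_code: int) -> float:
    """Hypothetical dequantizer: maps a 5-bit gain index to a gain value."""
    gain_table = [i / 31.0 for i in range(32)]  # placeholder quantization table
    return gain_table[gain_code]

def embedding_condition(first_element_code: int) -> bool:
    """Condition of claims 2 and 8: dequantized first element code < threshold."""
    return dequantize_gain(first_element_code) < THRESHOLD

def embed(frame: dict, payload_bits: int) -> dict:
    """Claim 1: replace the second element code with the payload if the condition holds."""
    if embedding_condition(frame["fixed_gain_code"]):
        return dict(frame, noise_code=payload_bits)  # second element code replaced
    return frame  # condition not met: frame passes through unchanged

def extract(frame: dict):
    """Claim 7: apply the same decision and read back the second element code."""
    if embedding_condition(frame["fixed_gain_code"]):
        return frame["noise_code"]  # carries the embedded data
    return None  # nothing embedded in this frame
```

Because the first element code itself is never altered, the extracting side can reproduce the embedding decision from the received code alone; only the first element code, the threshold value and the identity of the second element code need to be agreed beforehand, as claims 13 and 29 require.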
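Claims 6, 12, 18 and 24 extend this rule to a plurality of thresholds, so that the dequantized value of the first element code also selects the format of the embedded payload. A minimal sketch, again with hypothetical threshold values and a hypothetical one-flag-bit format:

```python
# Sketch of the two-threshold variant of claims 6, 12, 18 and 24: the
# dequantized value of the first element code selects between no embedding,
# a plain data sequence, and a data/control code sequence. The threshold
# values and the one-flag-bit format below are hypothetical.

TH_LOW, TH_HIGH = 0.1, 0.2  # hypothetical pair of thresholds, TH_LOW < TH_HIGH

def classify(dequantized_first_element: float) -> str:
    if dequantized_first_element < TH_LOW:
        return "data_sequence"    # second element code is payload data in its entirety
    if dequantized_first_element < TH_HIGH:
        return "data_or_control"  # payload format distinguishes data from control codes
    return "no_embedding"         # embedding condition not satisfied

def pack_data_or_control(bits: int, is_control: bool, width: int = 16) -> int:
    """Hypothetical format: a leading flag bit marks control codes."""
    return (int(is_control) << width) | (bits & ((1 << width) - 1))
```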
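Claims 33-44 apply the same mechanism to complete communication and storage systems. As one example, the server device of claim 40 embeds a code into relayed voice data rather than at the originating terminal; the sketch below reuses the hypothetical embed() helper from the first sketch and assumes an equally hypothetical packet structure.

```python
# Minimal sketch of the server-side relay of claim 40: the server checks
# whether relayed data is voice data, embeds a code into frames that satisfy
# the embedding condition, and forwards the result. The packet structure is
# hypothetical; embed() is the sketch function shown above.

def relay(packet: dict, code_bits: int) -> dict:
    if packet.get("kind") != "voice":
        return packet  # non-voice data is forwarded untouched
    frames = [embed(frame, code_bits) for frame in packet["frames"]]
    return dict(packet, frames=frames)  # receiving terminal extracts the code
```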
Priority Claims (2)
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| JP2002-026958 | Feb 2002 | JP | |
| JP2003-015538 | Jan 2003 | JP | |
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is a continuation-in-part of our copending application Ser. No. 10/278,108 filed on Oct. 22, 2002, the disclosure of which is hereby incorporated by reference.
Continuation in Parts (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10278108 | Oct 2002 | US |
| Child | 10357323 | Feb 2003 | US |