The instant application relates to a digital broadcast transfer system for transferring at least voice information as a digital signal via a transfer path including ground waves or satellite waves. The digital broadcast transfer system includes a digital broadcast transmitting device and a digital broadcast receiving device.
In recent years, digital broadcasts that transfer information such as a voice, a picture, a character, or the like as a digital signal via a transfer path including ground waves or satellite waves have been further developed. One method for transferring a digital signal is the one suggested by ISO/IEC 13818-1. ISO/IEC 13818-1 describes a method for multiplexing and transferring an encoded digital signal including voice, picture, and data of a program on a transmission side and receiving and reproducing a specified program on a reception side.
The encoded voice signal and picture signal are divided at predetermined time intervals and are provided with header information including, for example, reproduction time information, forming a packet called a PES (Packetized Elementary Stream). The PES is basically divided into units of 184 bytes. Each unit is additionally provided with header information including, for example, a packet identifier (PID), and is reconstructed into a packet called a TSP (transport packet) to be multiplexed. Moreover, table information called PSI (Program Specific Information), indicating the relationship between a program and the packets forming the program, is multiplexed with the TSPs of the voice signal and the picture signal. Four kinds of tables, including a PAT (Program Association Table) and a PMT (Program Map Table), are defined as the PSI. The PAT describes the PID of the PMT corresponding to each program, and the PMT describes the PIDs of the packets in which, for example, the voice and picture signals forming the corresponding program are stored.
A receiver refers to the PAT and the PMT to thereby extract, from a TSP stream in which a plurality of programs are multiplexed, the packets forming a target program. Data packets and the PSI are stored into TSPs in a format called a section, which differs from the PES. Extracting the packet data, excluding the header and the like, from the PES provides, for example, an MPEG-2 AAC stream.
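By way of a non-limiting illustration, the following sketch shows how a receiver might walk the PAT and the PMT to locate the PID of an AAC voice stream. The field offsets follow the general section layout of ISO/IEC 13818-1, but the sketch is simplified (one complete section per buffer is assumed, the CRC is not verified, and error handling is omitted), and the function names are illustrative only.

```python
def parse_pat(section: bytes) -> dict:
    """Return {program_number: PMT PID} from a PAT section (simplified)."""
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    programs = {}
    # The program loop starts 8 bytes into the section and ends before the 4-byte CRC.
    pos, end = 8, 3 + section_length - 4
    while pos + 4 <= end:
        program_number = (section[pos] << 8) | section[pos + 1]
        pid = ((section[pos + 2] & 0x1F) << 8) | section[pos + 3]
        if program_number != 0:          # program_number 0 maps to the network PID
            programs[program_number] = pid
        pos += 4
    return programs


def aac_audio_pids(pmt_section: bytes) -> list:
    """Return elementary PIDs whose stream_type is AAC in ADTS (0x0F) from a PMT section."""
    section_length = ((pmt_section[1] & 0x0F) << 8) | pmt_section[2]
    program_info_length = ((pmt_section[10] & 0x0F) << 8) | pmt_section[11]
    pos, end = 12 + program_info_length, 3 + section_length - 4
    pids = []
    while pos + 5 <= end:
        stream_type = pmt_section[pos]
        pid = ((pmt_section[pos + 1] & 0x1F) << 8) | pmt_section[pos + 2]
        es_info_length = ((pmt_section[pos + 3] & 0x0F) << 8) | pmt_section[pos + 4]
        if stream_type == 0x0F:          # ISO/IEC 13818-7 audio (AAC in ADTS)
            pids.append(pid)
        pos += 5 + es_info_length
    return pids
```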
Before transferring a signal such as, for example, a voice signal to the receiving device, the signal may be encoded. One method of encoding the voice signal is ISO/IEC 13818-7 (MPEG-2 Audio AAC). For the AAC standard used in digital broadcasts, the current service supports up to 5.1 channels. For Japanese digital broadcasts, ARIB standards and operation specifications issued by the Association of Radio Industries and Businesses are provided, which define methods, parameters, and operation in detail.
An instruction for switching, issued manually or based on delivery programming, is inputted to the sequence control unit 142. The sequence control unit 142, defining a switching point, controls the voice signal input switching unit 150 to switch an input signal from a 2-channel stereo signal to a 5.1-channel signal.
The voice signal encoding unit 151 encodes a signal in an MPEG-2 AAC system. For the 5.1 channel, the “3/2 + LFE” configuration is indicated by the MPEG-2 ADTS fixed header, and a downmixing coefficient is transferred by a PCE (Program Configuration Element). These pieces of information are contained in the voice signal stream.
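As a non-limiting illustration, a receiver could distinguish the 2-channel stereo from the “3/2 + LFE” configuration by reading the channel_configuration field of the ADTS fixed header. The sketch below reads only that field and lists only two of the defined values; it is not a full ADTS parser.

```python
ADTS_CHANNEL_CONFIG = {2: "2/0 (stereo)", 6: "3/2 + LFE (5.1)"}  # subset for illustration

def adts_channel_configuration(frame: bytes) -> int:
    """Read channel_configuration from an ADTS fixed header (ISO/IEC 13818-7)."""
    if frame[0] != 0xFF or (frame[1] & 0xF0) != 0xF0:
        raise ValueError("not an ADTS syncword")
    # Bit 2 of channel_configuration is the last bit of byte 2;
    # bits 1 and 0 are the top two bits of byte 3.
    return ((frame[2] & 0x01) << 2) | ((frame[3] & 0xC0) >> 6)
```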
Since voice reproduction of a typical TV receiver is usually performed through the 2-channel stereo, the receiving device 1500 is configured to first perform decoding processing on a 5.1-channel surround broadcast and then downmix the result to a 2-channel stereo signal.
The demodulation unit 102 performs demodulation on broadcast waves received from the antenna 101 to reproduce a transport stream. The transport stream is forwarded to the demultiplexing unit 103. The demultiplexing unit 103 segments the transport stream and extracts PES data and section data from it. The section data is analyzed in the packet analysis unit 125 to extract the PAT/PMT, which is used as, for example, program information. The PES data is analyzed in the packet analysis unit 110 to extract the selected stream.
The stream analyzed and selected in the packet analysis unit 110 is further analyzed in the stream information analysis unit 111 to be segmented into an AAC header, a basic signal, and other elements. If the header includes an ID for the 2-channel stereo, the basic signal is subjected to decoding processing into a 2-channel stereo signal in the AAC 2-channel decoder 112 and forwarded to the selector 116 to be output as the 2-channel stereo signal.
If the header includes an ID for the 5.1-channel surround, the basic signal is subjected to decoding processing into a 5.1-channel signal in the AAC 5.1-channel decoder 113. The decoded 5.1-channel signal is then downmixed from the 5.1 channel to the 2 channel in the downmixing synthesis unit 115. A downmixing coefficient required for the downmixing at the downmixing synthesis unit 115 may be retrieved from the PCE of the stream header. The 2-channel stereo signal subjected to the decoding processing and downmixing is selected by the selector 116 and outputted as a 2-channel stereo signal.
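By way of illustration, the downmix of the decoded 5.1-channel signal to the 2-channel stereo could take the following form. The coefficient k shown here is the commonly used value of 1/√2 and is only an assumption for illustration; in practice the coefficient retrieved from the PCE would be used, and the LFE channel is commonly dropped from a stereo downmix.

```python
import math

def downmix_5_1_to_stereo(L, R, C, Ls, Rs, k=1 / math.sqrt(2)):
    """Illustrative 5.1 -> 2.0 downmix; k would come from the PCE in practice."""
    lo = [l + k * c + k * ls for l, c, ls in zip(L, C, Ls)]
    ro = [r + k * c + k * rs for r, c, rs in zip(R, C, Rs)]
    return lo, ro
```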
As noted above, to reproduce the 5.1-channel signal, the receiving device 1500 first performs decoding on the 5.1 channel and then performs downmixing to convert the decoded 5.1-channel signal into a 2-channel signal. As a result, the receiving device 1500 may increase the processing volume and may impair power saving.
Therefore, there is a need for a system that allows multichannel reproduction and reduces the delay in reproducing the voice signal when the format of the voice signal changes from one channel configuration to another (e.g., from a 2-channel stereo signal to a 5.1-channel surround signal).
In one general aspect, the instant application describes a digital broadcast transmitting device that includes a packet generation unit configured to generate packetized elementary stream (PES) data by converting an inputted voice signal into an encoded voice signal and generating a voice stream packet including the encoded voice signal; a descriptor updating unit configured to update a component descriptor to include a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format; a packetizing unit configured to generate section data by packetizing the component descriptor; a multiplexing unit configured to multiplex the PES data and the section data; and a modulation unit configured to modulate and transmit multiplexed data acquired from the multiplexing unit.
The above general aspect may include one or more of the following features. The digital broadcast transmitting device may further include a sequence control unit configured to determine a timing of the change of the format of the encoded voice signal and control the descriptor updating unit in a manner such that the change reservation ID is outputted at a time before the timing of the change of the format of the encoded voice signal. The sequence control unit may be configured to control the packet generation unit in a manner such that voice in a period during which the change reservation ID is outputted is put on mute. The sequence control unit may be configured to control the descriptor updating unit in a manner such that the descriptor updating unit outputs the change reservation ID 500 milliseconds to 1 millisecond before the timing of the change of the format of the encoded voice signal.
In another general aspect, the instant application describes a digital broadcast receiving device that includes a reception unit configured to receive multiplexed broadcast data; a first packet analysis unit configured to acquire, from PES data included in the multiplexed broadcast data, a voice stream packet including an encoded voice signal; and a second packet analysis unit configured to detect, from section data included in the multiplexed broadcast data, a component descriptor including a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format.
The above general aspect may include one or more of the following features. The digital broadcast receiving device may include a mode control unit configured to output a mute control signal for muting a voice upon detection of the change reservation ID by the second packet analysis unit. The digital broadcast receiving device may be configured to detect the change reservation ID before change of the format of the encoded voice signal. The digital broadcast receiving device may be configured to detect the change reservation ID 500 milliseconds to 1 millisecond before the change of the format of the encoded voice signal.
In another general aspect, the instant application describes a broadcast transmitting and receiving system that includes the above-described digital broadcast transmitting and receiving devices.
In another general aspect, the instant application describes a digital broadcast transmitting method comprising steps of: generating packetized elementary stream (PES) data by converting an inputted voice signal into an encoded voice signal and generating a voice stream packet including the encoded voice signal; updating a component descriptor to include a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format; generating section data by packetizing the component descriptor; multiplexing the PES data and the section data; and modulating and transmitting multiplexed data acquired from the multiplexing step.
The method may further include steps of: determining a timing of the change of the format of the encoded voice signal, and outputting the change reservation ID at a time before the timing of the change of the format of the encoded voice signal. The method may further include a step of muting voice in a period during which the change reservation ID is outputted. Outputting the change reservation ID may include outputting the change reservation ID 500 milliseconds to 1 millisecond before the timing of the change of the format of the encoded voice signal.
In another general aspect, the instant application describes an integrated circuit including a packet generation unit configured to generate packetized elementary stream (PES) data by converting an inputted voice signal into an encoded voice signal and generating a voice stream packet including the encoded voice signal; a descriptor updating unit configured to update a component descriptor to include a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format; a packetizing unit configured to generate section data by packetizing the component descriptor; a multiplexing unit configured to multiplex the PES data and the section data; and a modulation unit configured to modulate and transmit multiplexed data acquired from the multiplexing unit.
In another general aspect, the instant application describes a digital broadcast receiving method comprising steps of: receiving multiplexed broadcast data; acquiring, from PES data included in the multiplexed broadcast data, a voice stream packet including an encoded voice signal; and detecting, from section data included in the multiplexed broadcast data, a component descriptor including a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format.
In another general aspect, the instant application describes an integrated circuit including a receiving unit configured to receive multiplexed broadcast data; a first packet analysis unit configured to acquire, from PES data included in the multiplexed broadcast data, a voice stream packet including an encoded voice signal; and a second packet analysis unit configured to detect, from section data included in the multiplexed broadcast data, a component descriptor including a component type identification (ID) and a change reservation ID, the component type ID indicating that an encoding format of the encoded voice signal is an MPEG surround format and the change reservation ID indicating a change of a format of the encoded voice signal to the MPEG surround format.
The teachings of the instant application can also be realized as programs causing a computer to execute each of the digital broadcast transmitting method and the digital broadcast receiving method described above. The teachings can also be realized as a recording medium in which these programs are recorded. The programs can also be distributed via a transfer medium such as the Internet or a recording medium such as a DVD.
With the digital broadcast transmitting device according to the instant application, a digital broadcast receiving device that receives data transmitted from the digital broadcast transmitting device can shorten the time required for determining the MPEG surround broadcast and can reliably perform the determination without waiting for stream analysis. Thus, the digital broadcast receiving device can provide the effect of executing decoding processing switching and mute processing in a short time, for example, even upon switching from an AAC 2-channel mode to a 5.1-channel mode.
The receiving device of the instant application can recognize the change in the encoding format of the voice signal in advance of the actual change. Therefore, the receiving device of the instant application can further advance the timing of the decoding processing and the mute processing. Furthermore, the mute time inserted at the time of the change for abnormal voice protection can be systematically shortened.
Hereinafter, an implementation of the instant application will be described with reference to the accompanying drawings. This implementation will be described, referring to as an example a digital broadcast transfer system using MPEG surround for a voice encoding system. This implementation is based on the assumption that the MPEG standard is partially revised to perform an addition for transferring a descriptor of new component type data. However, even in a case where the MPEG standard cannot be revised, since there is a region assigned as business operator regulation, this region can be newly defined by the ARIB standard. In this case, the range of standardization differs from that in the case where the MPEG standard is partially revised, but the same information transfer can be performed and the same effect can be provided in both cases.
A system has been suggested which enables multichannel reproduction by defining as a basic signal a bit stream with a rate lowered through 2-channel downmixing and then adding additional information to the bit stream. For example, there is an MPEG surround system that allows 5.1-channel surround reproduction at approximately 96 kbps by adding information on a level difference and a phase difference between the channels to the basic signal obtained by downmixing from multichannel to 2 channels. This is a system standardized as ISO/IEC 23003-1.
The MPEG surround system is characterized in that the basic signal is a downmixing signal, so that it holds compatibility that permits reproduction on a conventional device without a problem, and the same level of sound quality can be realized at a lower rate than that of the AAC 5.1-channel. Thus, the MPEG surround system may be adopted as a system for allowing multichannel reproduction. Especially in, for example, a one-segment broadcast of a terrestrial digital TV mainly focusing on a low bit rate and a practical application test broadcast of a digital radio, it has been difficult or impossible to broadcast the AAC 5.1-channel due to an insufficient bit rate. However, the adoption of the MPEG surround system, capable of transmission from approximately 96 kbps, has made it possible to put a full-scale surround broadcast into practical use at the same level of bit rate as that of the one-segment broadcast. Such an MPEG surround system may also be suitable for a multimedia broadcast currently studied by use of a VHF band. In this case, it is possible to adopt the MPEG surround system in place of a conventional AAC 5.1-channel for the 5.1-channel surround broadcast.
Referring specifically to
In Japanese broadcasts, the 2 channel stereo of the MPEG-2 AAC is used as the basic signal. The AAC+SBR and the MPEG surround system as a spreading system of the MPEG-2 AAC both have a format structure in which spreading information is added onto the basic signal. A data string having these frame structures is transferred as a bursty stream. Between the systems shown in
Referring specifically to
The basic signal is outputted to the AAC 2-channel decoder 112, the SBR information is outputted to an SBR information analysis unit 117, and the channel spreading information is outputted to a channel spreading information analysis unit 122. Both the SBR information presence/absence data and the channel spreading information presence/absence data are outputted to a mode control unit 141.
A band spreading unit 118, based on the basic signal decoded by the AAC 2-channel decoder 112, copies a spectrum in a high range for band spreading. Moreover, the band spreading unit 118 performs control by use of output of the SBR information analysis unit 117 so that energy of an envelope becomes smooth on a frequency axis.
The channel spreading unit 130 performs channel spreading by use of output of the channel spreading information analysis unit 122 based on the basic signal to generate a 5.1-channel signal. The mode control unit 141 controls a selector 119 so as to select the band-spread basic signal in a case where the SBR information presence/absence data is present. Moreover, the mode control unit 141 controls a selector 121 so as to select the 5.1-channel signal in a case where the channel spreading information presence/absence data is present. The 2-channel signal of the selector 119 is converted into a pseudo-surround signal in the 5.1-channel pseudo-surround unit 120 and outputted to the selector 121. Such configuration is applied to, for example, an in-vehicle receiver.
The receiver performs AAC 2-channel data processing as the basic signal (Step S15). Then, the receiver determines whether or not the channel spreading information is present in a region following the basic signal (Step S16). This determination is based on a change from the result of the previous determination, and thus requires at least a period of a delivery cycle. When errors are assumed, the reliability of the determination increases in proportion to the number of repetitions. If there is no change, the processing returns to Step S13. If there is a change, the receiver promptly performs voice mute processing and initialization of the channel spreading unit 130 (Step S17). The receiver waits for a predetermined period of time, allowing an appropriate margin for the period during which abnormal voice may be generated, and holds the mute (Step S18). Next, the receiver performs voice demuting (mute release) and outputs a reproduced signal (Step S19).
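The stream-analysis-based flow of Steps S13 through S19 may be summarized by the following sketch. All object and method names (decoder, channel_spreader, audio_out, and so on) are illustrative assumptions, and the mute margin value is likewise assumed; the point is that the mode change is only noticed after the stream itself has been analyzed.

```python
import time

MUTE_MARGIN_S = 0.5   # assumed margin; the text only requires "an appropriate margin"

def reproduce(frames, decoder, channel_spreader, audio_out):
    """Sketch of Steps S13-S19; names are illustrative, not the actual implementation."""
    surround_was_present = False
    for frame in frames:
        pcm = decoder.decode_basic_signal(frame)              # Step S15
        surround_present = frame.has_channel_spreading_info   # Step S16
        if surround_present != surround_was_present:          # a mode change is detected
            audio_out.mute()                                  # Step S17: mute promptly
            channel_spreader.initialize()                     # Step S17: initialize spreading
            time.sleep(MUTE_MARGIN_S)                         # Step S18: hold the mute
            audio_out.demute()                                # Step S19: release the mute
            surround_was_present = surround_present
        audio_out.play(pcm)
```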
As described above, the MPEG surround system is advantageous for a 2-channel device because a 2-channel basic signal can be reproduced by ignoring a channel spreading portion. As such, the MPEG surround system may be suitable for portable devices. The MPEG surround system may be configured such that the basic signal and a header have the same configuration as that of a 2-channel AAC in order to avoid erroneous operation of the legacy 2-channel device. The difference therebetween may be the presence/absence of the channel spreading region in the MPEG surround system.
This structure may be beneficial for the 2-channel device that does not require format determination. However, such a structure may not be beneficial for the 5.1-channel device because format determination cannot be achieved through header analysis even when the format determination is required immediately. Instead, the 5.1-channel device repeatedly determines whether or not the channel spreading information is present in the region following the basic signal, which requires considerable time. An increase in the detection time required for format determination can cause an abnormal voice to be generated at the start portion of the program.
In
On a broadcast delivery side, outputting of voice of the next program is started at timing T03 that is after passage of the mute time at the time of switching. That is, the timing T03 serves as a head of the program. The point of head finding for reception and reproduction extends from the timing T03 to a timing T06 reached after the decoding delay.
A temporal position of the timing T02 varies depending on factors such as the time required for determining the presence/absence of a mode change under actual broadcast wave reception conditions. A delay of T02 as in the figure consequently delays the timing T05 behind the timing T06, which causes interruption of voice at the head of the program for a period of time corresponding to the delay. Specifically, the voice is interrupted between the timing T06 and the timing T05. Moreover, it is also assumed that even with a 500 ms portion where muting occurs, the demuted data may turn into noise due to a reception error. Thus, there remains a risk of abnormal voice between the mode change detection on the reception side and the mute start.
Assuming that a newly developed MPEG surround system is adopted, it is possible to assume a mode of operation that permits coexistence of the MPEG surround system and the MPEG-2 AAC 2-channel system. For a multimedia broadcast, the system is selected for broadcasting in units of time or in units of programs. For example, in a live baseball broadcast, the MPEG surround system is used to provide reality, and in a commercial broadcast put in the middle thereof, the typical AAC 2-channel is used.
In this case, a problem may occur at time of switching. Since continuous voice output without interruption may be difficult to achieve, it is possible to expect some mute time. However, if the detection time for detecting the switching point is longer than the preset mute time, a starting portion of the program after switching may be interrupted. This in turn may cause an abnormal voice to be generated at the start portion of the program after the switching.
The instant application can reduce the time required for detecting the switching point (e.g., a point where the encoding format of the voice signal changes from a first format to a second format). To this end, the instant application describes a digital broadcast transmitting device, a digital broadcast receiving device, and a digital broadcast transmitting and receiving system capable of performing processing and determination in accordance with an encoding system of a voice signal transferred in a digital broadcast receiver.
The digital broadcast transmitting device 60 includes a voice signal input switching unit 50, a voice signal encoding unit 51, a packetizing unit 52, a multiplexing unit 55, a sequence control unit 42, a component descriptor updating unit 57, a packetizing unit 54, and a modulation unit 56. The voice signal encoding unit 51 and the packetizing unit 52 realize processing performed by a packet generation unit in the digital broadcast transmitting device 60. Moreover, the packetizing unit 54 is one example of a packetizing unit in the digital broadcast transmitting device 60.
A 2-channel stereo or a 5.1-channel surround signal forming a program is inputted to the voice signal input switching unit 50, in which switching selection is made, and then is inputted to the voice signal encoding unit 51 to be converted into a digital signal. The digital signal obtained through the conversion is provided with header information and then is converted into a PES in the packetizing unit 52.
At the same time, the sequence control unit 42 controls the voice signal input switching unit 50 manually or based on a delivery programming instruction and also inputs the MPEG surround type ID and the change reservation ID as the component type data to the component descriptor updating unit 57. The component descriptor updating unit 57, based on the inputted component type data, updates the voice component descriptor to be outputted to the packetizing unit 54. The updated voice component descriptor includes the component type ID and the change reservation ID. Moving forward, the “voice component descriptor” may be referred to simply as the “component descriptor” in some cases.
Data outputted from the component descriptor updating unit 57 is inputted, together with the PAT and the PMT, to the packetizing unit 54. The packetizing unit 54 packetizes these pieces of data in a section format. To this end, the component descriptor is packetized as encoding information in the section format separately from a PES packet of the voice signal and indicates to the receiving device whether the voice signal is encoded by the AAC or the MPEG surround. As a result, the receiving device of the instant application can recognize whether the encoding format of the voice signal is the AAC or the MPEG surround before the receiving device begins to decode the basic signal. Consequently, the receiving device of the instant application can reliably perform the decoding processing on the voice signal.
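As a non-limiting sketch, the packetizing of the component descriptor could resemble the following. The descriptor tag value and the byte layout shown here are placeholders for illustration only and do not reproduce the standardized ARIB descriptor layout; the point is that the component type ID and the change reservation ID ride in section data, separate from the voice PES packets.

```python
DESCRIPTOR_TAG = 0xC4        # placeholder tag value, not the standardized one

def build_voice_component_descriptor(component_type_id: int,
                                     change_reservation_id: int,
                                     component_tag: int = 0x10) -> bytes:
    """Illustrative layout: a tag byte, a length byte, then the component fields."""
    body = bytes([component_tag, component_type_id, change_reservation_id])
    return bytes([DESCRIPTOR_TAG, len(body)]) + body
```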
In contrast, the receiving device of the MPEG surround system described at the beginning of the detailed description of the instant application can first recognize whether the voice signal is encoded by the AAC or the MPEG surround after extracting one frame of basic signal from a plurality of packets and decoding the basic signal. Consequently, the receiving device of the MPEG surround system described at the beginning of the detailed description of the instant application may not reliably perform the decoding processing on the voice signal. For example, such receiving device may cause an abnormal voice to be generated at the start portion of the program after the switching.
When the change point has been determined, the sequence control unit 42, as pre-change processing (Step S04), outputs a change reservation ID and also preferably controls the voice signal encoding unit 51 to thereby start processing such as suitable fade-out on the voice signal. After passage of predetermined time, the sequence control unit 42, as change processing (Step S05), controls the voice signal encoding unit 51 to thereby perform voice PES data switching. Then, the sequence control unit 42, as post-change processing (Step S06), stops delivery of the change reservation ID and also controls the voice signal encoding unit 51 to thereby perform suitable fade-in on the voice signal after the change and perform demute processing.
Note that the change reservation ID is outputted at a timing that is ahead of or behind the aforementioned change point by a predetermined time. For example, the delivery of the change reservation ID is started at a timing that is ahead of the change point by a time in the range of 500 milliseconds to 1 millisecond.
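The transmit-side sequence of Steps S04 through S06 may be sketched as follows. The scheduler interface, the encoder methods, and the descriptor-updater methods are illustrative assumptions, the lead time is set to 500 milliseconds as one point in the range mentioned above, and the delay before the post-change processing is likewise assumed.

```python
def schedule_format_change(change_point_s, scheduler, encoder, descriptor_updater,
                           lead_time_s=0.5, post_change_delay_s=0.5):
    """Sketch of Steps S04-S06; the scheduler.at(time, fn) interface is assumed."""
    # Step S04: start delivering the change reservation ID and fade the voice out.
    scheduler.at(change_point_s - lead_time_s,
                 lambda: (descriptor_updater.set_change_reservation(True),
                          encoder.fade_out()))
    # Step S05: switch the voice PES data at the change point itself.
    scheduler.at(change_point_s, encoder.switch_input)
    # Step S06: stop the reservation ID, fade the new signal in, and release the mute.
    scheduler.at(change_point_s + post_change_delay_s,
                 lambda: (descriptor_updater.set_change_reservation(False),
                          encoder.fade_in()))
```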
The sequence control unit 42 stops the change reservation ID at the post-change processing (Step S06) and releases the voice mute. The change reservation ID “0x17” indicates that the presence/absence of the MPEG surround is about to change. In one example, the change reservation ID “0x17” means that the 2-channel stereo is currently used, but a change to the 5.1-channel MPEG surround is to be made.
The change reservation ID “0x17” may also be used to reflect a change from the MPEG surround to the 2-channel stereo. In this scenario, the change reservation ID means that the 5.1 channel MPEG surround is currently being used, but a change to the 2-channel stereo is to be made.
The various component types of the voice component descriptor according to the current standard are shown in
For example, as shown by a bold line, transition occurs from the mode M03 (normal with a sampling frequency of 24 kHz) to the mode M13 where the SBR is added. Then, the SBR is stopped to achieve transition to the mode M06 where the sampling frequency is 48 kHz. From the mode M06, transition to the mode M26 where the MPEG surround is added occurs, and then the SBR is further added to achieve transition to the mode M33. Making the additions shown in
As described above, the voice component descriptor spreading makes it possible to deliver multiplexed data with various component type IDs and change reservation IDs. As a result, the digital broadcast transmitting device 60 of the instant application can easily identify, to the digital broadcast receiving device, the point in time at which the voice signal changes from one format to another. Next, a receiving device that receives a broadcast transmitted from the digital broadcast transmitting device 60 will be described.
The digital broadcast receiving device 70 includes a packet analysis unit 10 that analyzes the PES data, a stream information analysis unit 11, an AAC 2-channel decoder 12, an SBR information analysis unit 17, a channel spreading information analysis unit 22, a band spreading unit 18, a selector 19, a channel spreading unit 31, a 5.1-channel pseudo-surround unit 20 that converts a 2 ch signal into a 5.1 ch pseudo-surround signal, a selector 21, a mode control unit 41, a packet analysis unit 25 that analyzes section data, and an ID detection unit 27. The digital broadcast receiving device 70 further includes an antenna, a demodulation unit, and a demultiplexing unit (not shown). These components were described with respect to the receiving device 1500 shown in
The packet analysis unit 10 is one example of a first packet analysis unit in the digital broadcast receiving device of the instant application. The packet analysis unit 25 and the ID detection unit 27 may perform processing of a second packet analysis unit in the digital broadcast receiving device of the instant application. Digital broadcast waves received through the antenna are subjected to reception processing in the demodulation unit to output a multiplexed TSP string. In the demultiplexing unit, PES data and section data are outputted from the received TSP string.
The PES data is inputted to the packet analysis unit 10. The packet analysis unit 10 acquires from the PES data a voice stream packet including an encoded voice signal. The acquired voice stream packet is analyzed by the stream information analysis unit 11. The stream information analysis unit 11 outputs a basic signal, SBR information, SBR information presence/absence data, and channel spreading information presence/absence data.
The basic signal is outputted to the AAC 2-channel decoder 12, the SBR information is outputted to the SBR information analysis unit 17, and the channel spreading information is outputted to the channel spreading information analysis unit 22. The SBR information presence/absence data and the channel spreading presence/absence data are both outputted to the mode control unit 41.
The band spreading unit 18, based on the basic signal decoded in the AAC 2-channel decoder 12, copies a spectrum in a high range to achieve band spreading. Moreover, the band spreading unit 18 performs control, by use of the output of the SBR information analysis unit 17, so that the energy of an envelope becomes smooth. The channel spreading unit 31, based on at least the basic signal, performs channel spreading by use of the output of the channel spreading information analysis unit 22 to generate a 5.1-channel signal.
After the encoding information is extracted from the section data in the packet analysis unit 25, the encoding information is inputted to the ID detection unit 27. The ID detection unit 27 detects an added component type ID and change reservation ID, which are then inputted to the mode control unit 41.
Included as contents of the added component type ID and change reservation ID are type IDs corresponding to the SBR information presence/absence data and the channel spreading information presence/absence data, and thus their information is consequently acquired together with the results of the stream information analysis unit 11. However, the acquisition times may differ. In another implementation, the mode control unit 41 is provided with the component type ID and the change reservation ID and not with the SBR information presence/absence data and the channel spreading presence/absence data.
In either case, based on these pieces of information, the mode control unit 41 controls the selector 19 so that the band-spread basic signal is selected in a case where the voice signal is SBR-provided. Moreover, the mode control unit 41 controls the selector 21 so that the 5.1-channel signal is selected in a case where the voice signal is MPEG surround-provided.
In the receiving device 70, the change reservation ID can be detected before the timing of the change of the format of the encoded voice signal. As a result, a mute control signal for gradually muting the voice in advance, in a fade-out manner, can be outputted from the mode control unit 41 to a voice output unit (not shown). Moreover, at the same time, the change reservation ID is outputted as a signal for the initialization of the channel spreading unit 31. That is, the change reservation ID is also used for speeding up processing performed upon proceeding to the channel spreading mode of the MPEG surround.
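The behavior of the mode control unit 41 upon detection of the change reservation ID may be sketched as follows; the class, method, and attribute names are illustrative assumptions rather than the actual implementation. The reservation triggers the fade-out and the initialization of the channel spreading unit 31 before the stream format itself changes, and the output path is switched once the new format actually appears.

```python
class ModeControl:
    """Illustrative sketch of the mode control unit 41."""

    def __init__(self, audio_out, channel_spreader, selector_21):
        self.audio_out = audio_out
        self.channel_spreader = channel_spreader
        self.selector_21 = selector_21

    def on_descriptor(self, component_type_id, change_reserved):
        # The reservation arrives before the stream changes, so muting (fade-out)
        # and the channel spreading initialization can start early.
        if change_reserved:
            self.audio_out.fade_out()
            self.channel_spreader.initialize()

    def on_stream_mode(self, surround_present):
        # Once the new format appears in the stream, switch the output path
        # and release the mute.
        self.selector_21.select("5.1" if surround_present else "2ch")
        self.audio_out.fade_in()
```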
This therefore shortens the processing period that was required in the process 2100 for discriminating between the 2 channel and the MPEG surround based on a change from the result of the previous determination.
In
To this end, the instant application describes a digital broadcast transmitting device, a digital broadcast receiving device, and a digital broadcast transmitting and receiving system capable of performing processing and determination in a short time in accordance with the encoding format of a voice signal transferred in a digital broadcast receiver. The instant application is suitable for a digital broadcast transfer system that digitally transfers information such as voice, a picture, or a character and also for a digital broadcast transmitting device and a digital broadcast receiving device that form the digital broadcast transfer system. The instant application is more specifically suitable for a digital broadcast receiving device such as a digital TV, a set top box, a car navigation system, or a portable one-segment TV.
Other implementations are contemplated. For example, the teachings of the instant application may be realized by a computer system including a microprocessor, a ROM (Read Only Memory), a RAM (Random Access Memory), an accumulated memory unit, a display, a man-machine interface, etc. Each device is so configured as to achieve its function through operation in accordance with a computer program stored dynamically or in a fixed manner. All or part of the components forming the devices 60, 70, and 80 described above may be formed of a system LSI. More specifically, it is a computer system so formed as to include a microprocessor, a ROM, a RAM, etc. The system LSI achieves its function by storing a computer program and operating in accordance with the computer program.
Additionally or alternatively, the teachings of the instant application may be realized by a detachable IC card or a separate module. The IC card or the module is a computer system so formed as to include a microprocessor, a ROM, a RAM, etc. It achieves its function by storing a computer program and operating in accordance with the computer program.
Additionally or alternatively, the teachings of the instant application may be realized as a method including processing executed by the digital broadcast transmitting device and the digital broadcast receiving device of the instant application. Moreover, the teachings of the instant application may be realized by a computer program realizing the method by a computer, or may be realized by a digital signal including the computer program.
Additionally or alternatively, the teachings of the instant application can be realized as a recording medium in which each of these programs is recorded.
Other implementations are contemplated.
This is a continuation application of PCT Patent Application No. PCT/JP2010/003628 filed on May 31, 2010, designating the United States of America, which is based on and claims priority of Japanese Patent Application No. 2009-202097 filed on Sep. 1, 2009. The entire disclosures of the above-identified applications, including the specifications, drawings and claims are incorporated herein by reference in their entirety.
“ISO/IEC 13818-1 Second Edition, Information Technology—Generic Coding of Moving Pictures and Associated Audio Information: Systems”, Dec. 1, 2000.
“ISO/IEC 13818-7 Third Edition, Information Technology—Generic Coding of Moving Pictures and Associated Audio Information—Part 7: Advanced Audio Coding (AAC)”, Oct. 15, 2004.
“ISO/IEC FDIS 23003-1, Information Technology—MPEG Audio Technologies—Part 1: MPEG Surround (Voting Terminates on: Jan. 13, 2007)”, 2006.
“Service Information for Digital Broadcasting System, ARIB Standard: ARIB STD-B10, Version 4.8”, Association of Radio Industries and Businesses, Apr. 26, 2010, with partial English translation.
“Video Coding, Audio Coding and Multiplexing Specifications for Digital Broadcasting, ARIB Standard: ARIB STD-B32, Version 2.2”, Association of Radio Industries and Businesses, Jul. 29, 2009, with partial English translation.