The disclosure relates to the field of audio processing. In particular, the disclosure relates to techniques for audio processing in HbbTV terminal devices, including techniques for leveling main and supplementary audio tracks from a HbbTV service.
HbbTV (Hybrid Broadcast Broadband TV) is an industry standard (ETSI TS 102 796, e.g., version V1.5.1 or any previous or subsequent versions) that provides a technology platform for seamlessly combining TV services delivered via broadcast with services delivered via broadband (or any suitable IP connection in general).
HbbTV services may be used to send different versions of an audio track or supplementary audio tracks that could be mixed with the main audio track. When the broadcast TV feed is first processed by a basic Set Top Box (STB) connected to an HbbTV enabled TV (as an example of a HbbTV terminal device), HbbTV hyperlinks may be added to the main A/V content so that the HbbTV enabled TV can process the hyperlinks and offer to the viewer the associated supplementary content/services.
When the supplementary service consists in the delivery of a supplementary audio track to be mixed over the main audio track (e.g., as an audio commentary) or to temporarily replace the main audio track (e.g., as part of a targeted advertising segment), it is important that the mix be made at a "consistent" audio level. However, the source STB may perform independent volume control on the main decoded track so that the HbbTV enabled TV may be unaware of the audio leveling performed by the STB. Thus, depending on the audio leveling performed by the STB, audio mixing of main and supplementary tracks by the HbbTV enabled TV may not be at a consistent audio level, thereby adversely impacting (end) user experience.
Thus, there is a need for improved methods of audio processing in HbbTV terminal devices that avoid level mismatch between main and supplementary audio tracks.
In view of the above, the present disclosure provides a method of audio processing in a HbbTV terminal device, as well as a corresponding apparatus (e.g., HbbTV terminal device), computer program, and computer-readable storage medium, having the features of the respective independent claims.
According to an aspect of the disclosure, a method of audio processing in a HbbTV terminal device is provided. The HbbTV terminal device may be a hybrid device in the sense of the HbbTV standard. That is, the HbbTV terminal device may conform to the HbbTV standard (ETSI TS 102 796 in any of its versions, e.g., any of versions 1.1.1 to 1.5.1, and any upcoming versions). The method may include receiving a decoded broadcast feed. The decoded broadcast feed may be generated by a decoder device and/or may be received from the decoder device, such as a set top box (STB), for example. The decoder device may be coupled to the HbbTV terminal device via a digital interface, such as HDMI, for example. The decoded broadcast feed may include a first audio track. The first audio track may be a main audio track of the broadcast. The method may further include receiving HbbTV content relating to the broadcast feed. The HbbTV content may be received (e.g., via broadband or any other suitable IP (internet) connection) from an HbbTV server (IP server), for example. The HbbTV content may include a second audio track. The second audio track may be a supplementary audio track that may be used to augment the main audio track or it may be used to temporarily replace the main audio track. The method may further include extracting level-related information from the decoded broadcast feed. The level-related information may be embedded in the decoded broadcast feed. It may enable obtaining an indication of an original audio level (reference audio level) of the first audio track. The method may further include obtaining the indication of the original audio level, for example by deriving the indication from the level-related information or by referring to an external source of information (e.g., IP server) for retrieving the indication using address information (reference information) included in the level-related information.
The indication of the original audio level may be provided by the broadcaster or content creator/editor, for example. The method may further include analyzing the first audio track for determining an actual audio level of the first audio track. The method may further include determining a gain factor (e.g., attenuation or enhancement factor) based on the actual audio level and the original audio level. The gain factor may give an indication of an amount of attenuation or enhancement that has been applied to the first audio track by the decoder device. The method may yet further include generating a third audio track for output by the HbbTV terminal device based on the first audio track, the second audio track, and the gain factor. The third audio track may ensure a consistent audio level of its contributions from the first and second audio tracks.
Configured as described above, the proposed method can avoid audio level mismatch between the main audio track and any supplementary audio track, regardless of upstream volume control performed by the decoder device. If the decoder device has performed volume control on the main audio track prior to outputting it to the HbbTV terminal device, the HbbTV terminal device can appropriately adjust the audio level of the supplementary audio track to achieve a consistent listening experience. For example, if the decoder device has increased the volume of the main audio track, the audio level of the supplementary audio track can be enhanced as well, so that the supplementary audio track remains audible over the main audio track, or so that there is no drop in audio volume if the main audio track is temporarily replaced by the supplementary audio track. Likewise, if the decoder device has decreased the volume of the main audio track, the audio level of the supplementary audio track can be decreased as well, so that the supplementary audio track does not drown the main audio track, or so that there is no sudden increase in audio volume if the main audio track is temporarily replaced by the supplementary audio track. Notably, this leveling capability between the main audio track and the supplementary audio track is independent of the type of the decoder device, i.e., can be performed in a decoder-agnostic manner, without giving rise to legacy issues.
In some embodiments, extracting the level-related information from the decoded broadcast feed may involve identifying a digital watermark in the decoded broadcast feed. The digital watermark may be included in (e.g., embedded in, or imprinted on) the first audio track, for example. Alternatively, it may be included in a video component of the decoded broadcast feed for example, noting that the audio and video components of the broadcast feed are synchronized. Said extracting may further involve analyzing the digital watermark for deriving the level-related information.
Conveying the level-related information by means of digital watermarks ensures that the required information is received by the HbbTV terminal device, without requiring a dedicated data exchange or any assistance in general from the decoder device.
In some embodiments, the level-related information may be indicative of the original audio level of the first audio track. Alternatively, the HbbTV content may be received from an HbbTV server, and the level-related information may include reference information (e.g., address information) for obtaining the indication of the original audio level of the first audio track from the HbbTV server. Therein, the HbbTV server may be understood to be any internet-connected server (IP server) that provides HbbTV content to the HbbTV terminal device. The reference information may relate to an address link pointing to a data resource on the HbbTV server, for example.
Thereby, the information required for audio leveling of the main and supplementary audio tracks by the HbbTV terminal device can be transmitted over different channels, depending on specific requirements of the HbbTV use case at hand.
In some embodiments, analyzing the first audio track may involve analyzing audio samples of the first audio track.
In some embodiments, analyzing the first audio track may involve applying a level metering algorithm to the first audio track. It is understood that the level metering algorithm may be a standardized level metering algorithm. In particular, the same level metering algorithm may be used for determining the original audio level (at the broadcaster or content creator/editor side) and for determining the actual audio level at the HbbTV terminal side.
In some embodiments, determining the gain factor may involve comparing the original audio level and the actual audio level to derive the gain factor. For example, the gain factor may be based on a ratio of the original and actual audio levels, or a difference therebetween.
In some embodiments, generating the third audio track may involve adjusting the audio level of the second audio track based on the gain factor. Said generating may further involve mixing the first audio track and the level adjusted second audio track, or temporarily replacing the first audio track by the level adjusted second audio track. For instance, if the decoder device has lowered the audio level of the first audio track before outputting it to the HbbTV terminal device, the audio level of the second audio track may also be lowered before mixing. On the other hand, if the decoder device has raised the audio level of the first audio track before outputting it to the HbbTV terminal device, the audio level of the second audio track may also be raised before mixing.
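The leveling principle described above can be illustrated by a minimal sketch. This is purely illustrative and not part of the HbbTV standard; all function names are hypothetical, and audio levels are assumed to be on a linear scale.

```python
# Illustrative sketch of the leveling principle (hypothetical names):
# the gain factor mirrors the volume change applied upstream by the
# decoder device to the main track.

def gain_factor(original_level, actual_level):
    """Ratio of actual to original level; < 1 means the main track was attenuated."""
    return actual_level / original_level

def mix_with_leveling(main, supplementary, gain):
    """Apply the gain factor to the supplementary track, then mix sample-wise."""
    return [m + gain * s for m, s in zip(main, supplementary)]
```

For instance, if the decoder device halved the level of the main track, `gain_factor` yields 0.5 and the supplementary track is attenuated by the same factor before mixing.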
In some embodiments, extracting the level-related information and analyzing the first audio track may be performed for each of a plurality of consecutive time portions. These time portions (or time windows) may be relatively short. For instance, the time portions may be shorter than 2 s (seconds), such as about 1 s, for example.
Accordingly, audio leveling by the HbbTV terminal device can appropriately react to any volume control operations of the decoder device in real time.
In some embodiments, if the level-related information is extracted from the decoded broadcast feed in a given time portion, the first audio track may be analyzed in the same given time portion.
In some embodiments, the method may further include synchronizing the first and second audio tracks based on respective time stamps imprinted on the broadcast feed (e.g., its audio and/or video component) and the second audio track. This may involve, for example, buffering and/or delaying one of the first and second audio tracks. The time stamps may be embedded as digital watermarks, for example. Synchronization may be understood to be performed before mixing. The time stamps may occur more frequently than, for example, once per 2 s.
In some embodiments, the method may further include decoding the HbbTV content at the HbbTV terminal device. This may be achieved by an appropriately set up decoder or decoder unit of the HbbTV terminal device.
In some embodiments, the decoded broadcast feed may be received from a decoder device coupled to the HbbTV terminal device. Therein, the decoder device may be able to adjust the audio level of the first audio track. That is, the decoder device may be able to perform volume control for the broadcast feed, e.g., for the first audio track, so that the audio level of the first audio track at the input to the HbbTV terminal device may be variable, depending on a volume setting of the decoder device. Said volume control may be performed in response to a (end) user input, for example.
According to another aspect of the disclosure, a HbbTV terminal device is provided. The HbbTV terminal device may include a first interface for receiving a decoded broadcast feed. The decoded broadcast feed may include a first audio track. The HbbTV terminal device may further include a second interface for receiving HbbTV content relating to the broadcast feed. The HbbTV content may include a second audio track. The HbbTV terminal device may further include an extracting unit for extracting level-related information from the decoded broadcast feed. The level-related information may be embedded in the decoded broadcast feed and may enable obtaining an indication of an original audio level (reference audio level) of the first audio track. The extracting unit may be further adapted to obtain (e.g., determine, derive, or retrieve) the indication of the original audio level. The HbbTV terminal device may further include an analyzing unit (e.g., level metering unit) for analyzing the first audio track for determining an actual audio level of the first audio track. The HbbTV terminal device may further include a determination unit (e.g., gain determination unit) for determining a gain factor based on the actual audio level and the original audio level. The HbbTV terminal device may yet further include a generating unit (e.g., mixing unit) for generating a third audio track for output by the HbbTV terminal device based on the first audio track, the second audio track, and the gain factor. Any of the aforementioned units or interfaces may be computer-implemented, e.g., may be implemented by one or more processors (computer processors) of the HbbTV terminal device. The apparatus may further include components of a regular TV device, such as speakers and a display, for example.
According to another aspect, a computer program is provided. The computer program may include instructions that, when executed by a processor, cause the processor to carry out all steps of the methods described throughout the disclosure.
According to another aspect, a computer-readable storage medium is provided. The computer-readable storage medium may store the aforementioned computer program.
According to yet another aspect, an apparatus including a processor and a memory coupled to the processor is provided. The processor may be adapted to carry out all steps of the methods described throughout the disclosure. The apparatus may further include interfaces as described above, and/or components of a regular TV device, such as speakers and a display, for example.
It will be appreciated that apparatus features and method steps may be interchanged in many ways. In particular, the details of the disclosed method(s) can be realized by the corresponding apparatus, and vice versa, as the skilled person will appreciate. Moreover, any of the above statements made with respect to the method(s) (and, e.g., their steps) are understood to likewise apply to the corresponding apparatus (and, e.g., their blocks, stages, units, etc.), and vice versa.
Example embodiments of the disclosure are explained below with reference to the accompanying drawings, wherein
The Figures (Figs.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
In the context of the present disclosure, an HbbTV terminal device (e.g., HbbTV enabled TV) is understood to mean a device that is capable of receiving, in parallel, a (regular) broadcast feed (e.g., either "raw" or decoded) including A/V content and additional content (e.g., A/V content) relating to the broadcast feed via an IP connection (e.g., broadband internet connection) from an IP server (e.g., HbbTV server). Therein, it is understood that in some cases, the broadcast feed may be received via a digital interface. As such, the HbbTV terminal device is understood to correspond to a "hybrid terminal" as defined by the HbbTV standard, i.e., a terminal supporting delivery of A/V content both via broadband (or any suitable IP connection in general) and via broadcast. Therein, broadband is understood to mean an always-on bi-directional IP connection with sufficient bandwidth for streaming or downloading A/V content, and broadcast is understood to mean, for example, a classical uni-directional MPEG-2 transport stream based broadcast such as DVB-T, DVB-S or DVB-C. The broadcast feed may relate both to linear and non-linear A/V content. Linear A/V content is understood to mean broadcast A/V content intended to be viewed in real time by the user, whereas non-linear A/V content is understood to mean A/V content that does not have to be consumed linearly from beginning to end, for example A/V content streamed on demand.
HbbTV services may be used to send different versions of the audio track or supplementary audio tracks that could be mixed or alternated with the main audio track. When the broadcast TV feed is first processed by a basic STB connected to an HbbTV enabled TV (as an example of an HbbTV terminal device), broadcasters may "watermark" the HbbTV hyperlink into the main A/V content (e.g., the main audio track) so that the HbbTV enabled TV can process the hyperlink and offer the associated supplementary content/services to the viewer.
When the supplementary service consists in or involves delivery of a supplementary audio track to be mixed over the main audio track or to be alternated with the main audio track, it is important that the mix be made at a "consistent" audio level. The problem is that the source STB will perform some degree of volume control on the main decoded track and that the HbbTV enabled TV has no information on the audio leveling performed by the STB.
This issue is further illustrated with reference to
Before output to the HbbTV terminal device 10, the decoder device 20 can perform audio level control on the audio component of the decoded broadcast feed (i.e., on the main audio track). The HbbTV terminal device 10 will be unaware of any audio level control (volume control) by the decoder device 20, which may result in inappropriate mixing of the main audio track and the supplementary audio track(s). For instance, the audio level of the main audio track may have been enhanced by the decoder device 20, in which case the supplementary audio track(s) may not be audible over the main audio track, or alternating between the main audio track and the supplementary audio track(s) may result in discernible audio level drops when transitioning from the main audio track to the supplementary audio track(s). Likewise, the audio level of the main audio track may have been reduced by the decoder device 20, in which case the main audio track may not be audible anymore in the presence of the supplementary audio track(s), or alternating between the main audio track and the supplementary audio track(s) may result in discernible audio level boosts when transitioning from the main audio track to the supplementary audio track(s).
An example operation of the decoder device 20 is illustrated in more detail in
Broadly speaking, the present disclosure proposes to address the aforementioned issue of audio level mismatch between the main audio track and the supplementary audio track(s) as follows. The HbbTV terminal device (e.g., HbbTV enabled TV) receiving the main audio decoded and leveled by the decoder device (e.g., source STB) should “measure” the level (audio level) of the received audio using a “standardized” level metering algorithm and compare it against reference level metadata embedded in the feed processed by the HbbTV terminal device either in the supplementary audio track or “watermarked” in the main track.
Based on the level difference, the HbbTV terminal device then can level the supplementary audio track before mixing. Doing so can guarantee a consistent listening experience of the main and supplementary audio services to the viewer/listener.
In other words, the present disclosure proposes to embed the live reference level (reference audio level, original audio level) of the main audio service (i.e., the main track) into the main audio service using (digital) watermarking or delivered as metadata through the supplementary audio services so that the HbbTV terminal device can decode and level a supplementary audio service with respect to the leveling performed by a STB processing the main broadcast audio service.
Therein, the reference level of the main audio track is measured at content creation, preferably over short time windows. Such measurement may be performed for speech or in a level gated manner, as described in ITU-R BS.1770, for example. The measured reference audio level is transported to the HbbTV terminal device either watermarked within the main audio track or the video stream of the broadcast feed, or it may be attached as metadata of the supplementary audio track (supplementary audio track frames). In the latter case, it is understood that the broadcast feed includes reference information that allows retrieval of the metadata from the HbbTV server(s). At the HbbTV terminal device side, the audio level is measured for the same short time windows, using the same algorithm, and compared to the transported reference audio level. If the reference audio level is attached to supplementary audio, the short time windows are synchronized between the HbbTV content and the A/V content of the broadcast feed (i.e., between TV and digital (e.g., HDMI) capture), using timecodes watermarked in the digital (e.g., HDMI) feed received from the decoder device.
An example operation of the HbbTV terminal device 10 is illustrated in more detail in
Concurrently, the audio level of the first audio track is measured by measuring block 160. The measured audio level can then be used, by leveling block 170, for leveling the second audio track such that it has an audio level consistent with the audio level of the first audio track. This may involve applying a gain factor that is determined based on the reference audio level of the first audio track and the measured (i.e., actual) audio level of the first audio track. The leveled second audio track (level adjusted second audio track) is then mixed with the first audio track by mixing block 180. A mixed audio track (third audio track) generated by the mixing block 180 may be output by speaker 190.
Another example operation of the HbbTV terminal device 10 is illustrated in
An example of a corresponding method 400 of audio processing in a HbbTV terminal device (e.g., conforming to the HbbTV standard ETSI TS 102 796 in any of its versions, e.g., any of versions 1.1.1 to 1.5.1, and any upcoming versions) is schematically illustrated in the flowchart of
At step S410, a decoded broadcast feed is received. The decoded broadcast feed includes a first audio track (e.g., the main audio track). This decoded broadcast feed may have been generated by (and received from) a decoder device, such as a STB, for example. It is understood that the decoder device may be coupled to the HbbTV terminal device via a digital interface, such as HDMI, for example, or any other suitable interface. As noted above, such decoder device is typically able to perform volume control and to adjust the audio level of the first audio track. For instance, a user may induce changes of audio level of the first audio track by volume control via the decoder device, instead of by means of volume control via the HbbTV terminal device. Accordingly, the audio level of the first audio track at the input to the HbbTV terminal device can be variable, depending on a volume setting of the decoder device. Step S410 may correspond to operation of the aforementioned HDMI Rx block 110, for example.
At step S420, HbbTV content relating to the broadcast feed is received. The HbbTV content comprises a second audio track (e.g., supplementary audio track). This HbbTV content may be received (e.g., via broadband or any suitable IP connection) from an HbbTV server (IP server), for example. For instance, HbbTV content may be requested and retrieved from the HbbTV server using a hyperlink or any other reference that is embedded in the (main) A/V content of the broadcast feed. Such hyperlinks or references may be watermarked into the audio and/or video content at regular time intervals. This step may also include decoding the HbbTV content from whatever format in which it is received or retrieved from the HbbTV server. This may be done by an appropriately set up decoder or decoding unit of the HbbTV terminal device.
As noted above, the second (or supplementary) audio track may be used for mixing with the main audio track (e.g., in the case of an audio commentary) or for temporarily replacing or alternating with the first audio track (e.g., in the case of a targeted advertisement segment).
Step S420 may correspond to operation of aforementioned blocks 120, 130 (or 130′), 140, and 150, for example.
At step S430, level-related information is extracted from the decoded broadcast feed. Therein, the level-related information is embedded in the decoded broadcast feed (e.g., in the audio content and/or the video content). Said extracting may involve identifying a digital watermark in the decoded broadcast feed (e.g., in the first audio track or in the video content of the decoded broadcast feed), and analyzing the digital watermark for deriving the level-related information. The digital watermark may be embedded in or imprinted on the respective component of the decoded broadcast feed. Notably, both the audio component and the video component of the broadcast feed may be used for carrying the watermark, since both components are tightly synchronized with each other.
The level-related information extracted from the decoded broadcast feed enables obtaining an indication of an original audio level (or reference audio level) of the first audio track. Two possible modes of obtaining said indication will now be described in more detail.
According to a first mode, the indication of the original audio level may be derived from the level-related information. That is, the level-related information may include said indication, or in other words, the level-related information itself may be indicative of the original audio level of the first audio track. The indication of the original audio level may be time stamped, in the sense that the level-related information (e.g., the digital watermark) may include both the aforementioned indication and a timestamp.
According to a second mode, the HbbTV terminal device may refer to an external source of information (e.g., the HbbTV server or IP server) for retrieving the indication. In this case, the level-related information comprises reference information (e.g., an address link, hyperlink, or other pointer to a data resource) for obtaining (e.g., requesting, accessing, retrieving) the indication of the original audio level of the first audio track from the external source of information. Similarly to the first mode, also for the second mode the indication of the original audio level retrieved from the external source of information may be timestamped.
For both modes, it is understood that the indication of the original audio level is initially provided by the broadcaster or content creator/editor (along with the time stamps, if applicable).
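For the first mode, a timestamped level indication could be carried as a small binary payload recovered from the watermark. The disclosure does not specify any wire format, so the layout below (a signed level in 0.1 dB steps plus a millisecond timestamp) and all names are purely hypothetical, given only to make the idea concrete.

```python
import struct

# Hypothetical 6-byte payload layout (illustrative only; the disclosure
# does not define a wire format): int16 level in 0.1 dB units, followed
# by a uint32 timestamp in milliseconds, big-endian.

def build_level_watermark(level_db: float, timestamp_ms: int) -> bytes:
    """Pack a reference level and timestamp into a watermark payload."""
    return struct.pack(">hI", round(level_db * 10), timestamp_ms)

def parse_level_watermark(payload: bytes):
    """Unpack the payload back into (level in dB, timestamp in ms)."""
    level_decidb, timestamp_ms = struct.unpack(">hI", payload)
    return level_decidb / 10.0, timestamp_ms
```

Under the second mode, the same payload slot would instead carry reference information (e.g., an address link) used to fetch the timestamped level from the HbbTV server.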
At step S440, the first audio track is analyzed for determining an actual audio level of the first audio track. Specifically, the analysis may pertain to the samples of the first audio track. One way to perform the analysis is to apply a level metering algorithm to the (samples of the) first audio track. Preferably, this level metering algorithm is a standardized level metering algorithm, at least in the sense that it is the same level metering algorithm that has been used for determining the original audio level (at the broadcaster or content creator/editor side), i.e., the level that can be obtained or derived using the level-related information embedded in the broadcast feed. Thereby, it can be ensured that the original audio level (reference audio level) and the actual audio level are directly comparable to each other without conversion.
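As a sketch of such level metering, a plain RMS measure per analysis window is shown below. This is a simplified stand-in, not the standardized algorithm itself: a meter such as ITU-R BS.1770 additionally applies K-frequency weighting and gating on top of a mean-square measure. Function names are hypothetical.

```python
import math

def window_level_db(samples):
    """RMS level of one analysis window, in dBFS (simplified stand-in
    for a standardized meter such as ITU-R BS.1770)."""
    if not samples:
        return float("-inf")
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square) if mean_square > 0 else float("-inf")

def split_into_windows(samples, sample_rate, window_s=1.0):
    """Split a track into consecutive short analysis windows (e.g., ~1 s)."""
    n = int(sample_rate * window_s)
    return [samples[i:i + n] for i in range(0, len(samples), n)]
```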
It is understood that the extracting of the level-related information at step S430 and the analyzing of the first audio track at step S440 can be performed for each of a plurality of consecutive time portions or time windows. Then, if the level-related information is extracted from the decoded broadcast feed in a given time portion, the first audio track should be analyzed in the same given time portion, for deriving the gain factor (see step S450). The time portions at issue may be relatively short and may have a duration that is shorter than what is denoted in the field by "short term," which typically indicates durations of 2 to 8 seconds. For instance, a suitable length of the time portions may be 1 second.
Step S440 may correspond to operation of the aforementioned measuring block 160, for example.
At step S450, a gain factor is determined based on the actual audio level and the original audio level. This gain factor may be an attenuation or enhancement factor and may give an indication of an amount of attenuation or enhancement that has been applied to the first audio track by the decoder device. The gain factor can be derived by comparing the original audio level and the actual audio level. As such, it may be based, for example, on a ratio or difference between the original audio level and the actual audio level. If the first audio track (e.g., main audio track) is found by the comparison to have been attenuated by the decoder device, also the second audio track (e.g., supplementary audio track) should be attenuated before replacing or mixing with the first audio track, preferably by the same amount. Likewise, if it is found that the first audio track has been enhanced by the decoder device, also the second audio track should be enhanced, preferably by the same amount.
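When both levels are expressed in dB, the comparison of step S450 reduces to a difference, which can then be converted to a linear scale factor for sample-wise application. A minimal sketch, with hypothetical names:

```python
def gain_db(original_db, actual_db):
    """Gain applied upstream by the decoder device, as a dB difference.

    Negative values indicate that the main track was attenuated; the
    supplementary track should then be attenuated by the same amount.
    """
    return actual_db - original_db

def db_to_linear(gain):
    """Convert a dB gain to a linear scale factor."""
    return 10.0 ** (gain / 20.0)
```

For example, if the reference level is -23 dB but -29 dB is measured at the input, the decoder device applied roughly -6 dB, i.e., a linear factor of about 0.5.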
At step S460, a third audio track is generated based on the first audio track, the second audio track, and the gain factor, for output by the HbbTV terminal device. Here, generating the third audio track may involve adjusting the audio level of the second audio track based on the gain factor (i.e., mimicking the audio level adjustment that has been found to have been performed by the decoder device). For instance, if it is found at step S450 that the decoder device has lowered the audio level of the first audio track before outputting it to the HbbTV terminal device, the audio level of the second audio track may also be lowered before generating the third audio track (e.g., by the same amount or a substantially similar amount). On the other hand, if it is found at step S450 that the decoder device has raised the audio level of the first audio track before outputting it to the HbbTV terminal device, the audio level of the second audio track may also be raised before generating the third audio track (e.g., by the same amount or a substantially similar amount). After level adjusting the second audio track, the first audio track and the level adjusted second audio track may be mixed, or the level adjusted second audio track may be used for temporarily replacing the first audio track. As noted above, this may be done for each of a plurality of subsequent time portions, so that any impact of volume control by the decoder device can be appropriately handled.
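The two variants of step S460 (mixing versus temporary replacement) can be sketched in one hypothetical helper; the gain is assumed to be a linear factor as derived at step S450:

```python
def generate_third_track(main, supplementary, gain, replace=False):
    """Level-adjust the supplementary track, then either mix it with
    the main track or use it to (temporarily) replace the main track."""
    leveled = [gain * s for s in supplementary]
    if replace:
        return leveled
    return [m + s for m, s in zip(main, leveled)]
```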
Adjusting the second audio track at step S460 may correspond to operation of aforementioned leveling block 170, for example. Further, mixing the level adjusted second audio track with the first audio track or temporarily replacing the first audio track by the level adjusted second audio track may correspond to operation of the aforementioned mixing block 180, for example.
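As an illustrative sketch of steps S450/S460 combined (plain per-sample processing; the function name and the use of linear-amplitude sample lists are assumptions for illustration only), the leveling block 170 and mixing block 180 could operate as:

```python
def generate_third_track(first, second, gain_db, replace=False):
    """Level the second audio track by the gain factor, then mix or replace.

    first, second: lists of audio samples for the current time portion.
    gain_db: gain factor determined at step S450 (negative = attenuation).
    replace=True temporarily replaces the first audio track (e.g., for a
    targeted advertising segment); replace=False mixes the level-adjusted
    second track over the first (e.g., an audio commentary).
    """
    factor = 10.0 ** (gain_db / 20.0)  # dB -> linear amplitude factor
    leveled = [s * factor for s in second]  # mimic the decoder's leveling
    if replace:
        return leveled
    return [a + b for a, b in zip(first, leveled)]
```

With a gain factor of 0 dB (no volume control detected at the STB), mixing reduces to a plain sample-wise sum; with -6.02 dB, the second track's amplitude is roughly halved before mixing.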
As noted above, the audio and video components of the broadcast feed (e.g., the main audio and video tracks) may be tightly synchronized by means of watermarked (i.e., embedded, or imprinted) timestamps. Moreover, also the first and second audio tracks may be synchronized by means of watermarked timestamps. Accordingly, the method 400 may further include (e.g., at any point before mixing at step S460) synchronizing the first and second audio tracks based on respective timestamps imprinted on (a suitable component of) the broadcast feed and the second audio track. This synchronization may involve, for example, buffering and/or delaying one of the first and second audio tracks.
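A minimal sketch of such timestamp-based alignment, assuming the watermarked timestamps are expressed in seconds and both tracks share a common sample rate (the function name and the 48 kHz default are hypothetical choices, not part of the method):

```python
def alignment_offset_samples(first_ts: float, second_ts: float,
                             sample_rate: int = 48000) -> int:
    """Offset between the two tracks, in samples, derived from watermarked
    timestamps extracted from the broadcast feed and the second audio track.

    A positive result means the second audio track leads and must be
    buffered/delayed by that many samples before mixing at step S460;
    a negative result means the first audio track must be delayed instead.
    """
    return round((first_ts - second_ts) * sample_rate)
```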
It is to be noted that the audio level comparison does not necessarily have to be instantaneous. This means that a gain factor can be determined based on the actual audio level and the original audio level for a given time portion, but may be used for mixing the first audio track and the level adjusted second audio track, or temporarily replacing the first audio track by the level adjusted second audio track at a later time portion. The shorter the delay the better, but in practice the volume level (audio level) applied on the source STB may be considered as quasi-static, in the sense that it only moves when the end user instructs the STB to perform volume control (e.g., using its remote control), or to mute audio. In this sense, a history of determined gain factors can be resorted to when performing the actual mixing.
In other words, a central aspect of the present disclosure is a comparison between the loudness measured at the HbbTV terminal device input and the original loudness of the same content segment as it entered the STB. As long as the end user does not change the volume control at the STB, this difference is constant. So when the HbbTV supplementary stream starts to play (e.g., upon selection by the user, or when entering an advertising segment), there is already a history of measured differences (i.e., gain factors) from the past, and the main task is to catch up when the user operates the volume control of the STB. Without intended limitation, the current gain factor can be derived, for example, as a sliding average over a predefined number of past measurement/original loudness comparisons (i.e., a sliding average over a predefined number of past gain factors), or the most recently calculated gain factor can be used. In other words, the current gain factor for adjusting the audio level of the second audio track may be determined by calculating a sliding average over a predefined number of previous gain factors, wherein the previous gain factors are determined by comparing the original audio level and the actual audio level of respective previous time portions of the first audio track. In general, the mixing of the first and second audio tracks may be based on a gain factor calculated for the present time portion, or it may be based on one or more gain factors determined for preceding time portions (e.g., immediately preceding time portions).
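The sliding-average variant can be sketched as follows; the class name and the window of eight time portions are illustrative assumptions only:

```python
from collections import deque


class GainTracker:
    """Sliding average over a predefined number of past gain factors.

    One gain factor is recorded per time portion of the first audio track.
    The window length (here, 8 time portions) is an illustrative choice.
    """

    def __init__(self, window: int = 8):
        self._history = deque(maxlen=window)  # oldest entries drop out

    def update(self, original_level_db: float, actual_level_db: float) -> None:
        # One measurement/original loudness comparison per time portion.
        self._history.append(actual_level_db - original_level_db)

    def current_gain_db(self) -> float:
        # Default to 0 dB (no adjustment) until a comparison is available.
        if not self._history:
            return 0.0
        return sum(self._history) / len(self._history)
```

Because the STB volume setting is quasi-static, the sliding average converges to the constant difference between measured and original loudness, and re-converges over at most one window length after the user operates the STB volume control.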
A method of audio processing in a HbbTV terminal device has been described above. Additionally, the present disclosure also relates to an apparatus (e.g., HbbTV terminal device or audio processing module of a HbbTV terminal device) for carrying out this method. An example of such apparatus is shown in
In general, the present disclosure relates to an apparatus comprising a processor and a memory coupled to the processor, wherein the processor is adapted to carry out the steps of the method(s) described herein. For example, the processor may be adapted to implement the aforementioned interfaces and/or units. The apparatus may further include interfaces as described above, and/or components of a regular TV device, such as speakers and a display, for example.
The present disclosure further relates to a program (e.g., computer program) comprising instructions that, when executed by a processor, cause the processor to carry out some or all of the steps of the methods described herein.
Yet further, the present disclosure relates to a computer-readable (or machine-readable) storage medium storing the aforementioned program. Here, the term “computer-readable storage medium” includes, but is not limited to, data repositories in the form of solid-state memories, optical media, and magnetic media, for example.
The present disclosure relates to methods of audio processing and apparatus (e.g., HbbTV terminal devices) for audio processing. It is understood that any statements made with regard to the methods and their steps likewise and analogously apply to the corresponding apparatus and their interfaces/blocks/units, and vice versa.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the disclosure discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “analyzing” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing devices, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
In a similar manner, the term “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A “computer” or a “computing machine” or a “computing platform” (e.g., a HbbTV terminal device) may include one or more processors.
The methodologies described herein are, in one example embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. A bus subsystem may be included for communicating between the components. The processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, a remote control, and so forth. The processing system may also encompass a storage system such as a disk drive unit. The processing system in some configurations may include a sound output device, and a network interface device. The memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein. Note that when the method includes several elements, e.g., several steps, no ordering of such elements is implied, unless specifically stated.
The software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system. Thus, the memory and the processor also constitute computer-readable carrier medium carrying computer-readable code. Furthermore, a computer-readable carrier medium may form, or be included in a computer program product.
In alternative example embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked, to other processor(s) in a networked deployment. In a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a HbbTV terminal device.
Note that the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
Thus, one example embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of a web server arrangement. Thus, as will be appreciated by those skilled in the art, example embodiments of the present disclosure may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer-readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method. Accordingly, aspects of the present disclosure may take the form of a method, an entirely hardware example embodiment, an entirely software example embodiment or an example embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
The software may further be transmitted or received over a network via a network interface device. While the carrier medium is in an example embodiment a single medium, the term “carrier medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present disclosure. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. For example, the term “carrier medium” shall accordingly be taken to include, but not be limited to, solid-state memories, a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor or one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions.
It will be understood that the steps of methods discussed are performed in one example embodiment by an appropriate processor (or processors) of a processing (e.g., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the disclosure is not limited to any particular implementation or programming technique and that the disclosure may be implemented using any appropriate techniques for implementing the functionality described herein. The disclosure is not limited to any particular programming language or operating system.
Reference throughout this disclosure to “one example embodiment”, “some example embodiments” or “an example embodiment” means that a particular feature, structure or characteristic described in connection with the example embodiment is included in at least one example embodiment of the present disclosure. Thus, appearances of the phrases “in one example embodiment”, “in some example embodiments” or “in an example embodiment” in various places throughout this disclosure are not necessarily all referring to the same example embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more example embodiments.
As used herein, unless otherwise specified the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common object, merely indicate that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
It should be appreciated that in the above description of example embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single example embodiment, Fig., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed example embodiment. Thus, the claims following the Description are hereby expressly incorporated into this Description, with each claim standing on its own as a separate example embodiment of this disclosure.
Furthermore, while some example embodiments described herein include some but not other features included in other example embodiments, combinations of features of different example embodiments are meant to be within the scope of the disclosure, and form different example embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed example embodiments can be used in any combination.
In the description provided herein, numerous specific details are set forth. However, it is understood that example embodiments of the disclosure may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Thus, while there has been described what are believed to be the best modes of the disclosure, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the disclosure, and it is intended to claim all such changes and modifications as fall within the scope of the disclosure. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present disclosure.
Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs):
Number | Date | Country | Kind
---|---|---|---
21161794.9 | Mar 2021 | EP | regional
This application claims priority of the following priority applications: U.S. provisional application 63/159,076 (reference: D20131USP1), filed 10 Mar. 2021 and EP application 21161794.9 (reference: D20131EP), filed 10 Mar. 2021, which are hereby incorporated by reference.
Filing Document | Filing Date | Country
---|---|---
PCT/EP2022/055717 | 3/7/2022 | WO
Number | Date | Country
---|---|---
63159076 | Mar 2021 | US